[jira] [Updated] (HIVE-21531) Vectorization: all NULL hashcodes are not computed using Murmur3

2019-03-28 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-21531:
---
Affects Version/s: 3.1.1

> Vectorization: all NULL hashcodes are not computed using Murmur3
> 
>
> Key: HIVE-21531
> URL: https://issues.apache.org/jira/browse/HIVE-21531
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Critical
> Attachments: HIVE-21531.WIP.patch
>
>
> The comments in the vectorized hash computation call out the old MurmurHash 
> implementation (the one using the 0x5bd1e995 constant), while the non-vectorized 
> codepath calls out the Murmur3 one (the one using 0xcc9e2d51).
> The comment here is wrong
> {code}
>   /**
>    * Batch compute the hash codes for all the serialized keys.
>    *
>    * NOTE: MAJOR MAJOR ASSUMPTION:
>    * We assume that HashCodeUtil.murmurHash produces the same result
>    * as MurmurHash.hash with seed = 0 (the method used by ReduceSinkOperator for
>    * UNIFORM distribution).
>    */
>   protected void computeSerializedHashCodes() {
> int offset = 0;
> int keyLength;
> byte[] bytes = output.getData();
> for (int i = 0; i < nonNullKeyCount; i++) {
>   keyLength = serializedKeyLengths[i];
>   hashCodes[i] = Murmur3.hash32(bytes, offset, keyLength, 0);
>   offset += keyLength;
> }
>   }
> {code}
> but the Vector RS operator actually follows the wrong comment:
> {code}
>   System.arraycopy(nullKeyOutput.getData(), 0, nullBytes, 0, nullBytesLength);
>   nullKeyHashCode = HashCodeUtil.calculateBytesHashCode(nullBytes, 0, nullBytesLength);
> {code}
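> For illustration, a consistent null-key hash would reuse the Murmur3.hash32 call 
> from computeSerializedHashCodes() above. This is only a sketch recombining the two 
> snippets in this description (nullKeyOutput/nullBytes/nullBytesLength are the names 
> shown above), not the actual patch:
> {code:java}
>   // Sketch: hash the serialized all-NULL key with Murmur3 (seed 0), matching
>   // the non-null key path above, instead of going through HashCodeUtil.
>   System.arraycopy(nullKeyOutput.getData(), 0, nullBytes, 0, nullBytesLength);
>   nullKeyHashCode = Murmur3.hash32(nullBytes, 0, nullBytesLength, 0);
> {code}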



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21531) Vectorization: all NULL hashcodes are not computed using Murmur3

2019-03-28 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-21531:
---
Priority: Critical  (was: Major)

> Vectorization: all NULL hashcodes are not computed using Murmur3
> 
>
> Key: HIVE-21531
> URL: https://issues.apache.org/jira/browse/HIVE-21531
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Critical
> Attachments: HIVE-21531.WIP.patch
>
>
> The comments in the vectorized hash computation call out the old MurmurHash 
> implementation (the one using the 0x5bd1e995 constant), while the non-vectorized 
> codepath calls out the Murmur3 one (the one using 0xcc9e2d51).
> The comment here is wrong
> {code}
>   /**
>    * Batch compute the hash codes for all the serialized keys.
>    *
>    * NOTE: MAJOR MAJOR ASSUMPTION:
>    * We assume that HashCodeUtil.murmurHash produces the same result
>    * as MurmurHash.hash with seed = 0 (the method used by ReduceSinkOperator for
>    * UNIFORM distribution).
>    */
>   protected void computeSerializedHashCodes() {
> int offset = 0;
> int keyLength;
> byte[] bytes = output.getData();
> for (int i = 0; i < nonNullKeyCount; i++) {
>   keyLength = serializedKeyLengths[i];
>   hashCodes[i] = Murmur3.hash32(bytes, offset, keyLength, 0);
>   offset += keyLength;
> }
>   }
> {code}
> but the Vector RS operator actually follows the wrong comment:
> {code}
>   System.arraycopy(nullKeyOutput.getData(), 0, nullBytes, 0, nullBytesLength);
>   nullKeyHashCode = HashCodeUtil.calculateBytesHashCode(nullBytes, 0, nullBytesLength);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21109) Stats replication for ACID tables.

2019-03-28 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21109:
--
Attachment: HIVE-21109.07.patch
Status: Patch Available  (was: In Progress)

> Stats replication for ACID tables.
> --
>
> Key: HIVE-21109
> URL: https://issues.apache.org/jira/browse/HIVE-21109
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21109.01.patch, HIVE-21109.02.patch, 
> HIVE-21109.03.patch, HIVE-21109.04.patch, HIVE-21109.05.patch, 
> HIVE-21109.06.patch, HIVE-21109.07.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Transactional tables require a writeID associated with the stats update. This 
> writeId needs to be in sync with the writeId on the source and hence needs to 
> be replicated from the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21109) Stats replication for ACID tables.

2019-03-28 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21109:
--
Status: In Progress  (was: Patch Available)

> Stats replication for ACID tables.
> --
>
> Key: HIVE-21109
> URL: https://issues.apache.org/jira/browse/HIVE-21109
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21109.01.patch, HIVE-21109.02.patch, 
> HIVE-21109.03.patch, HIVE-21109.04.patch, HIVE-21109.05.patch, 
> HIVE-21109.06.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Transactional tables require a writeID associated with the stats update. This 
> writeId needs to be in sync with the writeId on the source and hence needs to 
> be replicated from the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21396) TestCliDriver#vector_groupby_reduce is flaky - rounding error

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804595#comment-16804595
 ] 

Hive QA commented on HIVE-21396:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
49s{color} | {color:blue} itests/util in master has 48 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16744/dev-support/hive-personality.sh
 |
| git revision | master / 559efea |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16744/yetus/patch-asflicense-problems.txt
 |
| modules | C: itests/util U: itests/util |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16744/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> TestCliDriver#vector_groupby_reduce is flaky - rounding error
> -
>
> Key: HIVE-21396
> URL: https://issues.apache.org/jira/browse/HIVE-21396
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Laszlo Bodor
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-21396.01.patch
>
>
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16349/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16351/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> -5080.17 --> -5080.1699
> actual:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.1699 -362.86928571428564 621.35 44.382142857142857143
> {code}
> expected:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.17 -362.8692857142857 621.35 44.382142857142857143
> {code}
> https://github.com/apache/hive/blob/268a6e5af11e0fdc3887d570c1680035fd9426c3/ql/src/test/results/clientpositive/vector_groupby_reduce.q.out
> it's a result of sum 
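> The underlying cause is that floating-point addition is not associative, so the 
> order in which a distributed sum accumulates its inputs can change the last digits 
> of the result. A small self-contained illustration (the numbers here are arbitrary 
> and unrelated to the q.out values above):
> {code:java}
> public class SumOrderDemo {
>   public static void main(String[] args) {
>     // Floating-point addition is not associative: grouping changes the result.
>     double a = (1e16 + 1.0) - 1e16;   // the +1 is lost to rounding before the subtraction
>     double b = 1.0 + (1e16 - 1e16);   // the large terms cancel first, so the +1 survives
>     System.out.println(a + " vs " + b);   // prints 0.0 vs 1.0
>   }
> }
> {code}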

[jira] [Commented] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804545#comment-16804545
 ] 

Hive QA commented on HIVE-21484:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964064/HIVE-21484.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15882 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16743/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16743/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16743/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964064 - PreCommit-HIVE-Build

> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21484.01.patch, HIVE-21484.02.patch, 
> HIVE-21484.03.patch
>
>
> Currently the {{getVersion}} implementation in the metastore returns a 
> hard-coded "3.0". It would be good to return the real version of the 
> metastore server using {{HiveVersionInfo}} so that clients can take 
> certain actions based on the metastore server version.
> Possible use-cases are:
> 1. A client can make use of new features introduced in a given metastore 
> version, or else stick to the base functionality.
> 2. The version number can be used to do a version handshake between client 
> and server in the future to improve our cross-version compatibility story.
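> A minimal sketch of the intended behaviour, assuming the {{HiveVersionInfo}} utility 
> mentioned above (the surrounding class and main method here are illustrative, not 
> the actual metastore handler):
> {code:java}
> import org.apache.hive.common.util.HiveVersionInfo;
>
> public class GetVersionSketch {
>   // Instead of returning a hard-coded "3.0", report the compiled-in Hive version.
>   public String getVersion() {
>     return HiveVersionInfo.getVersion();
>   }
>
>   public static void main(String[] args) {
>     System.out.println(new GetVersionSketch().getVersion());
>   }
> }
> {code}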



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804529#comment-16804529
 ] 

Hive QA commented on HIVE-21484:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
47s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
19s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 1 new + 387 unchanged - 0 fixed = 388 total (was 387) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16743/dev-support/hive-personality.sh
 |
| git revision | master / 559efea |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16743/yetus/diff-checkstyle-standalone-metastore_metastore-common.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16743/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16743/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21484.01.patch, HIVE-21484.02.patch, 
> HIVE-21484.03.patch
>
>
> Currently the {{getVersion}} implementation in the metastore returns a 
> hard-coded "3.0". It would be good to return the real version of the 
> metastore server using {{HiveVersionInfo}} so that clients can take 
> certain actions based on the metastore server version.
> Possible use-cases 

[jira] [Updated] (HIVE-21536) Backport HIVE-17764 to branch-2.3

2019-03-28 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HIVE-21536:
---
Attachment: HIVE-21536-branch-2.3.patch
Status: Patch Available  (was: Open)

> Backport HIVE-17764 to branch-2.3
> -
>
> Key: HIVE-21536
> URL: https://issues.apache.org/jira/browse/HIVE-21536
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.3.4
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Attachments: HIVE-21536-branch-2.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21536) Backport HIVE-17764 to branch-2.3

2019-03-28 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HIVE-21536:
---
Attachment: (was: HIVE-21536.01.patch)

> Backport HIVE-17764 to branch-2.3
> -
>
> Key: HIVE-21536
> URL: https://issues.apache.org/jira/browse/HIVE-21536
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.3.4
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21536) Backport HIVE-17764 to branch-2.3

2019-03-28 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HIVE-21536:
---
Status: Open  (was: Patch Available)

> Backport HIVE-17764 to branch-2.3
> -
>
> Key: HIVE-21536
> URL: https://issues.apache.org/jira/browse/HIVE-21536
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.3.4
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21230) LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side (HiveJoinAddNotNullRule bails out for outer joins)

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804506#comment-16804506
 ] 

Hive QA commented on HIVE-21230:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964061/HIVE-21230.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 15876 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16742/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16742/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16742/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964061 - PreCommit-HIVE-Build

> LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side 
> (HiveJoinAddNotNullRule bails out for outer joins)
> 
>
> Key: HIVE-21230
> URL: https://issues.apache.org/jira/browse/HIVE-21230
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
>  Labels: newbie
> Attachments: HIVE-21230.1.patch, HIVE-21230.2.patch, 
> HIVE-21230.3.patch, HIVE-21230.4.patch, HIVE-21230.5.patch
>
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   LEFT JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 = t1.col0 AND t0.col1 = t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null in the right input and 
> introduce the corresponding filter predicate. Currently, the rule just bails 
> out if it is not an inner join.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveJoinAddNotNullRule.java#L79
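> A rough sketch of the kind of predicate the rule could add for the right input, 
> assuming Calcite's RexBuilder/RexUtil APIs (this only illustrates the idea; it is 
> not the HiveJoinAddNotNullRule code):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.calcite.rex.RexBuilder;
> import org.apache.calcite.rex.RexNode;
> import org.apache.calcite.rex.RexUtil;
> import org.apache.calcite.sql.fun.SqlStdOperatorTable;
>
> class NotNullFilterSketch {
>   // Build "k1 IS NOT NULL AND k2 IS NOT NULL AND ..." for the join keys of the
>   // right (null-generating) input, so it can be pushed below the LEFT OUTER JOIN.
>   static RexNode notNullConjunction(RexBuilder rexBuilder, List<RexNode> rightJoinKeys) {
>     List<RexNode> preds = new ArrayList<>();
>     for (RexNode key : rightJoinKeys) {
>       preds.add(rexBuilder.makeCall(SqlStdOperatorTable.IS_NOT_NULL, key));
>     }
>     return RexUtil.composeConjunction(rexBuilder, preds, false);
>   }
> }
> {code}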



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21230) LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side (HiveJoinAddNotNullRule bails out for outer joins)

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804490#comment-16804490
 ] 

Hive QA commented on HIVE-21230:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
32s{color} | {color:blue} ql in master has 2256 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 9 
fixed = 1 total (was 10) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16742/dev-support/hive-personality.sh
 |
| git revision | master / 559efea |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16742/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16742/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side 
> (HiveJoinAddNotNullRule bails out for outer joins)
> 
>
> Key: HIVE-21230
> URL: https://issues.apache.org/jira/browse/HIVE-21230
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
>  Labels: newbie
> Attachments: HIVE-21230.1.patch, HIVE-21230.2.patch, 
> HIVE-21230.3.patch, HIVE-21230.4.patch, HIVE-21230.5.patch
>
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   LEFT JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 = t1.col0 AND t0.col1 = t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null in the right input and 
> introduce the corresponding filter predicate. Currently, the rule just bails 
> out if it is not an inner join.
> 

[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.19

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804480#comment-16804480
 ] 

Hive QA commented on HIVE-21001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
23s{color} | {color:blue} ql in master has 2256 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} accumulo-handler in master has 21 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
54s{color} | {color:red} ql: The patch generated 7 new + 303 unchanged - 45 
fixed = 310 total (was 348) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
27s{color} | {color:red} root: The patch generated 7 new + 303 unchanged - 45 
fixed = 310 total (was 348) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
46s{color} | {color:red} ql generated 1 new + 2256 unchanged - 0 fixed = 2257 
total (was 2256) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 13m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Switch statement found in 
org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTBuilder.literal(RexLiteral)
 where default case is missing  At ASTBuilder.java:where default case is 
missing  At ASTBuilder.java:[lines 279-290] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16739/dev-support/hive-personality.sh
 |
| git revision | master / 72d72d4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16739/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16739/yetus/diff-checkstyle-root.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16739/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16739/yetus/new-findbugs-ql.html
 |
| asflicense | 

[jira] [Updated] (HIVE-21538) Beeline: password source though the console reader did not pass to connection param

2019-03-28 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-21538:
--
Attachment: HIVE-21538.patch
Status: Patch Available  (was: In Progress)

> Beeline: password source though the console reader did not pass to connection 
> param
> ---
>
> Key: HIVE-21538
> URL: https://issues.apache.org/jira/browse/HIVE-21538
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
> Environment: Hive-3.1 auth set to LDAP
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21538.patch
>
>
> Beeline: a password sourced through the console reader is not passed to the 
> connection parameters; this results in an authentication failure in the case 
> of LDAP authentication.
> {code}
> beeline -n USER -u 
> "jdbc:hive2://host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
>  -p
> Connecting to 
> jdbc:hive2://host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;user=USER
> Enter password for jdbc:hive2://host:2181/: 
> 19/03/26 19:49:44 [main]: WARN jdbc.HiveConnection: Failed to connect to 
> host:1
> 19/03/26 19:49:44 [main]: ERROR jdbc.Utils: Unable to read HiveServer2 
> configs from ZooKeeper
> Unknown HS2 problem when communicating with Thrift server.
> Error: Could not open client transport for any of the Server URI's in 
> ZooKeeper: Peer indicated failure: PLAIN auth failed: 
> javax.security.sasl.AuthenticationException: Error validating LDAP user 
> [Caused by javax.naming.AuthenticationException: [LDAP: error code 49 - 
> 80090308: LdapErr: DSID-0C0903C8, comment: AcceptSecurityContext error, data 
> 52e, v2580]] (state=08S01,code=0)
> {code}
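> For reference, whatever the console reader collects ultimately has to reach the 
> JDBC driver as the connection password; a minimal stand-alone example of the 
> expected end state (the URL, user and password below are placeholders, not values 
> from this report):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.SQLException;
>
> public class HiveJdbcPasswordSketch {
>   public static void main(String[] args) throws SQLException {
>     // Placeholder connection details; with LDAP authentication the password
>     // must be handed to the driver, otherwise PLAIN auth fails as in the log above.
>     String url = "jdbc:hive2://host:10000/default";
>     try (Connection conn = DriverManager.getConnection(url, "USER", "secret")) {
>       System.out.println("Connected: " + !conn.isClosed());
>     }
>   }
> }
> {code}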



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21532) RuntimeException due to AccessControlException during creating hive-staging-dir

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804472#comment-16804472
 ] 

Hive QA commented on HIVE-21532:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964049/HIVE-21532.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16741/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16741/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16741/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-03-29 00:14:34.684
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16741/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-03-29 00:14:34.717
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   72d72d4..559efea  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 72d72d4 HIVE-21457: Perf optimizations in ORC split-generation 
(Prasanth Jayachandran reviewed by Gopal V)
+ git clean -f -d
Removing ${project.basedir}/
Removing itests/${project.basedir}/
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 559efea HIVE-21204 (Addendum): Instrumentation for read/write 
locks in LLAP (Olli Draese via Slim Bouguerra)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-03-29 00:14:37.857
+ rm -rf ../yetus_PreCommit-HIVE-Build-16741
+ mkdir ../yetus_PreCommit-HIVE-Build-16741
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16741
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16741/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java:7261
error: repository lacks the necessary blob to fall back on 3-way merge.
error: ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: patch 
does not apply
error: src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: does not 
exist in index
error: java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: does not 
exist in index
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-16741
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964049 - PreCommit-HIVE-Build

> RuntimeException due to AccessControlException during creating 
> hive-staging-dir
> ---
>
> Key: HIVE-21532
> URL: https://issues.apache.org/jira/browse/HIVE-21532
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleksandr Polishchuk
>Priority: Minor
> Attachments: HIVE-21532.1.patch, HIVE-21532.1.patch
>
>
> The bug was found with the Hive-2.3 environment.
> Steps leading to the exception:
> 1) Create a user without root permissions on your node.
> 2) The {{hive-site.xml}} file has to contain the following properties:
> {code:xml}
> <property>
>   <name>hive.security.authorization.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hive.security.authorization.manager</name>
>   <value>org.apache.hadoop.hive.ql.security.authorization.plugin.fallback.FallbackHiveAuthorizerFactory</value>
> </property>
> {code}
> 3) Open the Hive CLI and run the following query:
> {code:java}
>  insert overwrite local directory '/tmp/test_dir' row format delimited 

[jira] [Work started] (HIVE-21538) Beeline: password source though the console reader did not pass to connection param

2019-03-28 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21538 started by Rajkumar Singh.
-
> Beeline: password source though the console reader did not pass to connection 
> param
> ---
>
> Key: HIVE-21538
> URL: https://issues.apache.org/jira/browse/HIVE-21538
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
> Environment: Hive-3.1 auth set to LDAP
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
>
> Beeline: a password sourced through the console reader is not passed to the 
> connection parameters; this results in an authentication failure in the case 
> of LDAP authentication.
> {code}
> beeline -n USER -u 
> "jdbc:hive2://host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
>  -p
> Connecting to 
> jdbc:hive2://host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;user=USER
> Enter password for jdbc:hive2://host:2181/: 
> 19/03/26 19:49:44 [main]: WARN jdbc.HiveConnection: Failed to connect to 
> host:1
> 19/03/26 19:49:44 [main]: ERROR jdbc.Utils: Unable to read HiveServer2 
> configs from ZooKeeper
> Unknown HS2 problem when communicating with Thrift server.
> Error: Could not open client transport for any of the Server URI's in 
> ZooKeeper: Peer indicated failure: PLAIN auth failed: 
> javax.security.sasl.AuthenticationException: Error validating LDAP user 
> [Caused by javax.naming.AuthenticationException: [LDAP: error code 49 - 
> 80090308: LdapErr: DSID-0C0903C8, comment: AcceptSecurityContext error, data 
> 52e, v2580]] (state=08S01,code=0)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21538) Beeline: password source though the console reader did not pass to connection param

2019-03-28 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh reassigned HIVE-21538:
-

Assignee: Rajkumar Singh

> Beeline: password source though the console reader did not pass to connection 
> param
> ---
>
> Key: HIVE-21538
> URL: https://issues.apache.org/jira/browse/HIVE-21538
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
> Environment: Hive-3.1 auth set to LDAP
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
>
> Beeline: a password sourced through the console reader is not passed to the 
> connection parameters; this results in an authentication failure in the case 
> of LDAP authentication.
> {code}
> beeline -n USER -u 
> "jdbc:hive2://host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
>  -p
> Connecting to 
> jdbc:hive2://host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;user=USER
> Enter password for jdbc:hive2://host:2181/: 
> 19/03/26 19:49:44 [main]: WARN jdbc.HiveConnection: Failed to connect to 
> host:1
> 19/03/26 19:49:44 [main]: ERROR jdbc.Utils: Unable to read HiveServer2 
> configs from ZooKeeper
> Unknown HS2 problem when communicating with Thrift server.
> Error: Could not open client transport for any of the Server URI's in 
> ZooKeeper: Peer indicated failure: PLAIN auth failed: 
> javax.security.sasl.AuthenticationException: Error validating LDAP user 
> [Caused by javax.naming.AuthenticationException: [LDAP: error code 49 - 
> 80090308: LdapErr: DSID-0C0903C8, comment: AcceptSecurityContext error, data 
> 52e, v2580]] (state=08S01,code=0)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.19

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804469#comment-16804469
 ] 

Hive QA commented on HIVE-21001:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964035/HIVE-21001.48.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15877 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.llap.metrics.TestReadWriteLockMetrics.testWithoutContention
 (batchId=330)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16739/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16739/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16739/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964035 - PreCommit-HIVE-Build

> Upgrade to calcite-1.19
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch, HIVE-21001.18.patch, 
> HIVE-21001.18.patch, HIVE-21001.19.patch, HIVE-21001.20.patch, 
> HIVE-21001.21.patch, HIVE-21001.22.patch, HIVE-21001.22.patch, 
> HIVE-21001.22.patch, HIVE-21001.23.patch, HIVE-21001.24.patch, 
> HIVE-21001.26.patch, HIVE-21001.26.patch, HIVE-21001.26.patch, 
> HIVE-21001.26.patch, HIVE-21001.26.patch, HIVE-21001.27.patch, 
> HIVE-21001.28.patch, HIVE-21001.29.patch, HIVE-21001.29.patch, 
> HIVE-21001.30.patch, HIVE-21001.31.patch, HIVE-21001.32.patch, 
> HIVE-21001.34.patch, HIVE-21001.35.patch, HIVE-21001.36.patch, 
> HIVE-21001.37.patch, HIVE-21001.38.patch, HIVE-21001.39.patch, 
> HIVE-21001.40.patch, HIVE-21001.41.patch, HIVE-21001.42.patch, 
> HIVE-21001.43.patch, HIVE-21001.44.patch, HIVE-21001.45.patch, 
> HIVE-21001.45.patch, HIVE-21001.46.patch, HIVE-21001.47.patch, 
> HIVE-21001.48.patch, HIVE-21001.48.patch, HIVE-21001.48.patch, 
> HIVE-21001.48.patch
>
>
> XLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21316) Comparision of varchar column and string literal should happen in varchar

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804471#comment-16804471
 ] 

Hive QA commented on HIVE-21316:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964039/HIVE-21316.07.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16740/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16740/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16740/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12964039/HIVE-21316.07.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964039 - PreCommit-HIVE-Build

> Comparision of varchar column and string literal should happen in varchar
> -
>
> Key: HIVE-21316
> URL: https://issues.apache.org/jira/browse/HIVE-21316
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21316.01.patch, HIVE-21316.02.patch, 
> HIVE-21316.03.patch, HIVE-21316.04.patch, HIVE-21316.05.patch, 
> HIVE-21316.06.patch, HIVE-21316.06.patch, HIVE-21316.07.patch
>
>
> this is most probably the root cause behind HIVE-21310 as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21499) should not remove the function from registry if create command failed with AlreadyExistsException

2019-03-28 Thread Thejas M Nair (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804461#comment-16804461
 ] 

Thejas M Nair commented on HIVE-21499:
--

+1


> should not remove the function from registry if create command failed with 
> AlreadyExistsException
> -
>
> Key: HIVE-21499
> URL: https://issues.apache.org/jira/browse/HIVE-21499
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
> Environment: Hive-3.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21499.01.patch, HIVE-21499.patch
>
>
> As part of HIVE-20953 we remove the function from the registry whenever its 
> creation fails, for any reason. This leads to the following situation (see the 
> sketch after the list):
> 1. create function fails because the function already exists
> 2. on the failure in #1, Hive clears the permanent function from the registry
> 3. the function is of no use until HiveServer2 is restarted
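> A sketch of the intended behaviour (the helper method names here are hypothetical; 
> only AlreadyExistsException is the real metastore exception type): roll back the 
> in-memory registration only when the failure is not "function already exists".
> {code:java}
> import org.apache.hadoop.hive.metastore.api.AlreadyExistsException;
>
> class CreateFunctionSketch {
>   void createPermanentFunction(String fnName) throws Exception {
>     registerInMemory(fnName);               // speculative in-memory registration
>     try {
>       createInMetastore(fnName);
>     } catch (AlreadyExistsException e) {
>       throw e;                              // the existing registry entry is still valid: keep it
>     } catch (Exception e) {
>       unregisterInMemory(fnName);           // genuine failure: undo the registration
>       throw e;
>     }
>   }
>
>   // Hypothetical stand-ins for the real registry/metastore calls.
>   void registerInMemory(String fnName) {}
>   void unregisterInMemory(String fnName) {}
>   void createInMetastore(String fnName) throws Exception {}
> }
> {code}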



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21537) Scalar query rewrite could be improved to not generate an extra join if subquery is guaranteed to produce atmost one row

2019-03-28 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21537:
---
Status: Patch Available  (was: Open)

> Scalar query rewrite could be improved to not generate an extra join if 
> subquery is guaranteed to produce atmost one row
> 
>
> Key: HIVE-21537
> URL: https://issues.apache.org/jira/browse/HIVE-21537
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: sub-query
> Attachments: HIVE-21537.1.patch
>
>
> Currently the Hive planner introduces this extra join branch and later executes 
> a rule to remove it if it can.
> The subquery removal rule itself could check whether the subquery will produce 
> at most one row (using RelMetadataQuery's getMaxRowCount) and avoid introducing 
> the branch in the first place.
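> A minimal sketch of that check, assuming Calcite's RelMetadataQuery API 
> (subqueryRel stands for the RelNode of the scalar subquery; illustration only, 
> not the actual rule code):
> {code:java}
> import org.apache.calcite.rel.RelNode;
> import org.apache.calcite.rel.metadata.RelMetadataQuery;
>
> class MaxRowCountSketch {
>   // True when the planner can prove the subquery emits at most one row, in which
>   // case the extra join branch (the sq_count_check) is unnecessary.
>   static boolean producesAtMostOneRow(RelNode subqueryRel) {
>     RelMetadataQuery mq = subqueryRel.getCluster().getMetadataQuery();
>     Double maxRowCount = mq.getMaxRowCount(subqueryRel);
>     return maxRowCount != null && maxRowCount <= 1.0D;
>   }
> }
> {code}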



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21537) Scalar query rewrite could be improved to not generate an extra join if subquery is guaranteed to produce atmost one row

2019-03-28 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21537:
---
Attachment: HIVE-21537.1.patch

> Scalar query rewrite could be improved to not generate an extra join if 
> subquery is guaranteed to produce atmost one row
> 
>
> Key: HIVE-21537
> URL: https://issues.apache.org/jira/browse/HIVE-21537
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: sub-query
> Attachments: HIVE-21537.1.patch
>
>
> Currently the Hive planner introduces this extra join branch and later executes 
> a rule to remove it if it can.
> The subquery removal rule itself could check whether the subquery will produce 
> at most one row (using RelMetadataQuery's getMaxRowCount) and avoid introducing 
> the branch in the first place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21316) Comparision of varchar column and string literal should happen in varchar

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804392#comment-16804392
 ] 

Hive QA commented on HIVE-21316:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964039/HIVE-21316.07.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 15841 tests 
executed
*Failed tests:*
{noformat}
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=230)
TestObjectStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestPartitionProjectionEvaluator - did not produce a TEST-*.xml file (likely 
timed out) (batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16738/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16738/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16738/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964039 - PreCommit-HIVE-Build

> Comparision of varchar column and string literal should happen in varchar
> -
>
> Key: HIVE-21316
> URL: https://issues.apache.org/jira/browse/HIVE-21316
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21316.01.patch, HIVE-21316.02.patch, 
> HIVE-21316.03.patch, HIVE-21316.04.patch, HIVE-21316.05.patch, 
> HIVE-21316.06.patch, HIVE-21316.06.patch, HIVE-21316.07.patch
>
>
> this is most probably the root cause behind HIVE-21310 as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21231) HiveJoinAddNotNullRule support for range predicates

2019-03-28 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804381#comment-16804381
 ] 

Vineet Garg commented on HIVE-21231:


There is a query in subquery_scalar (and query44) which has an extra join now 
(due to {{sq_count_check}} not being removed). I am looking into this 
(HIVE-21537)

> HiveJoinAddNotNullRule support for range predicates
> ---
>
> Key: HIVE-21231
> URL: https://issues.apache.org/jira/browse/HIVE-21231
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: HIVE-21231.01.patch, HIVE-21231.02.patch, 
> HIVE-21231.03.patch, HIVE-21231.04.patch, HIVE-21231.05.patch, 
> HIVE-21231.06.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   INNER JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 < t1.col0 AND t0.col1 > t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null for any of the inputs. 
> Currently we do not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21342) Analyze compute stats for column leave behind staging dir on hdfs

2019-03-28 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-21342:
--
Status: Open  (was: Patch Available)

> Analyze compute stats for column leave behind staging dir on hdfs
> -
>
> Key: HIVE-21342
> URL: https://issues.apache.org/jira/browse/HIVE-21342
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
> Environment: hive-3.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21342.patch, HIVE-21499.01.patch
>
>
> Staging dir cleanup does not happen for "analyze table .. compute statistics 
> for columns"; this leaves a stale directory on HDFS.
> The problem seems to be that ColumnStatsSemanticAnalyzer does not have HDFS 
> cleanup set for the context.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java#L310
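> A sketch of the suspected fix; setHDFSCleanup(true) is assumed here to be the 
> existing cleanup flag on the analyzer's Context (the description above only says 
> "hdfscleanup" is not set for the context), so treat the exact call as an assumption:
> {code:java}
>     // Inside the analyzer, the context created for the rewritten
>     // "analyze ... compute statistics for columns" query should mark its
>     // scratch directory for cleanup, e.g.:
>     Context ctx = new Context(conf);   // conf: the session configuration
>     ctx.setHDFSCleanup(true);
> {code}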



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21342) Analyze compute stats for column leaves behind staging dir on hdfs

2019-03-28 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-21342:
--
Attachment: HIVE-21499.01.patch
Status: Patch Available  (was: Open)

> Analyze compute stats for column leaves behind staging dir on hdfs
> -
>
> Key: HIVE-21342
> URL: https://issues.apache.org/jira/browse/HIVE-21342
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
> Environment: hive-3.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21342.patch, HIVE-21499.01.patch
>
>
> Staging dir cleanup does not happen for "analyze table .. compute 
> statistics for columns"; this leaves a stale directory behind on HDFS.
> The problem seems to be with ColumnStatsSemanticAnalyzer, which does not have 
> hdfsCleanup set for the context.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java#L310



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21537) Scalar query rewrite could be improved to not generate an extra join if subquery is guaranteed to produce at most one row

2019-03-28 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-21537:
--


> Scalar query rewrite could be improved to not generate an extra join if 
> subquery is guaranteed to produce at most one row
> 
>
> Key: HIVE-21537
> URL: https://issues.apache.org/jira/browse/HIVE-21537
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> Currently the Hive planner introduces this extra join branch and later executes 
> a rule to remove it if it can. 
> The subquery removal rule itself could check whether the subquery will produce 
> at most one row (using RelMetadataQuery's getMaxRowCount) and avoid introducing 
> the branch in the first place.
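
A minimal sketch of the proposed check, using Calcite's RelMetadataQuery.getMaxRowCount; the class and method names are illustrative, not the actual rule code.

{code:java}
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.metadata.RelMetadataQuery;

public final class ScalarSubquerySketch {

  /**
   * Returns true when the metadata provider can prove the subquery emits at
   * most one row; in that case the sq_count_check branch (and the extra join
   * it causes) never needs to be generated.
   */
  static boolean producesAtMostOneRow(RelNode subqueryRel) {
    RelMetadataQuery mq = subqueryRel.getCluster().getMetadataQuery();
    Double maxRowCount = mq.getMaxRowCount(subqueryRel);
    return maxRowCount != null && maxRowCount <= 1.0d;
  }
}
{code}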



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21537) Scalar query rewrite could be improved to not generate an extra join if subquery is guaranteed to produce at most one row

2019-03-28 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21537:
---
Labels: sub-query  (was: )

> Scalar query rewrite could be improved to not generate an extra join if 
> subquery is guaranteed to produce at most one row
> 
>
> Key: HIVE-21537
> URL: https://issues.apache.org/jira/browse/HIVE-21537
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: sub-query
>
> Currently the Hive planner introduces this extra join branch and later executes 
> a rule to remove it if it can. 
> The subquery removal rule itself could check whether the subquery will produce 
> at most one row (using RelMetadataQuery's getMaxRowCount) and avoid introducing 
> the branch in the first place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21316) Comparison of varchar column and string literal should happen in varchar

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804348#comment-16804348
 ] 

Hive QA commented on HIVE-21316:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
26s{color} | {color:blue} ql in master has 2256 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 6 new + 137 unchanged - 0 
fixed = 143 total (was 137) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 9 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
44s{color} | {color:red} ql generated 1 new + 2256 unchanged - 0 fixed = 2257 
total (was 2256) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  
org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter$HiveNlsString
 doesn't override org.apache.calcite.util.NlsString.equals(Object)  At 
RexNodeConverter.java:At RexNodeConverter.java:[line 1] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16738/dev-support/hive-personality.sh
 |
| git revision | master / 72d72d4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16738/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16738/yetus/whitespace-tabs.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16738/yetus/new-findbugs-ql.html
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16738/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16738/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Comparison of varchar column and string literal should happen in varchar
> -
>
> Key: HIVE-21316
> URL: https://issues.apache.org/jira/browse/HIVE-21316
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21316.01.patch, HIVE-21316.02.patch, 
> HIVE-21316.03.patch, HIVE-21316.04.patch, HIVE-21316.05.patch, 
> HIVE-21316.06.patch, HIVE-21316.06.patch, HIVE-21316.07.patch
>
>
> this is most probably the root 

[jira] [Commented] (HIVE-21499) should not remove the function from registry if create command failed with AlreadyExistsException

2019-03-28 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804326#comment-16804326
 ] 

Rajkumar Singh commented on HIVE-21499:
---

[~thejas] incorporated the suggested change and added the unit test, please 
review. thanks

> should not remove the function from registry if create command failed with 
> AlreadyExistsException
> -
>
> Key: HIVE-21499
> URL: https://issues.apache.org/jira/browse/HIVE-21499
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
> Environment: Hive-3.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21499.01.patch, HIVE-21499.patch
>
>
> As a part of HIVE-20953 we remove the function from the registry if its 
> creation fails for any reason, which leads to the following situation:
> 1. create function fails because the function already exists
> 2. on the failure in #1, Hive clears the permanent function from the registry
> 3. the function is then of no use until HiveServer2 is restarted.
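
A self-contained sketch of the intended behaviour; the Registry and MetastoreClient interfaces below are hypothetical stand-ins for Hive's function registry and metastore client, not the actual FunctionTask code.

{code:java}
public final class CreateFunctionSketch {

  /** Hypothetical stand-in for Hive's function registry. */
  interface Registry {
    void register(String name);
    void unregister(String name);
  }

  /** Hypothetical stand-in for the metastore client. */
  interface MetastoreClient {
    void createFunction(String name) throws Exception;
  }

  static class AlreadyExistsException extends Exception { }

  static void createPermanentFunction(Registry registry, MetastoreClient client, String name)
      throws Exception {
    registry.register(name);
    try {
      client.createFunction(name);
    } catch (AlreadyExistsException e) {
      // The function already exists on the server, so the registry entry is
      // valid; keep it instead of unregistering, otherwise the function stays
      // unusable until HiveServer2 restarts.
      throw e;
    } catch (Exception e) {
      // Only roll the registry back for genuine failures.
      registry.unregister(name);
      throw e;
    }
  }
}
{code}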



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21499) should not remove the function from registry if create command failed with AlreadyExistsException

2019-03-28 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-21499:
--
Status: Open  (was: Patch Available)

> should not remove the function from registry if create command failed with 
> AlreadyExistsException
> -
>
> Key: HIVE-21499
> URL: https://issues.apache.org/jira/browse/HIVE-21499
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
> Environment: Hive-3.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21499.01.patch, HIVE-21499.patch
>
>
> As a part of HIVE-20953 we remove the function from the registry if its 
> creation fails for any reason, which leads to the following situation:
> 1. create function fails because the function already exists
> 2. on the failure in #1, Hive clears the permanent function from the registry
> 3. the function is then of no use until HiveServer2 is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21499) should not remove the function from registry if create command failed with AlreadyExistsException

2019-03-28 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-21499:
--
Attachment: HIVE-21499.01.patch
Status: Patch Available  (was: Open)

> should not remove the function from registry if create command failed with 
> AlreadyExistsException
> -
>
> Key: HIVE-21499
> URL: https://issues.apache.org/jira/browse/HIVE-21499
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
> Environment: Hive-3.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21499.01.patch, HIVE-21499.patch
>
>
> As a part of HIVE-20953 we remove the function from the registry if its 
> creation fails for any reason, which leads to the following situation:
> 1. create function fails because the function already exists
> 2. on the failure in #1, Hive clears the permanent function from the registry
> 3. the function is then of no use until HiveServer2 is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21523) Break up DDLTask - extract View related operations

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21523:
--
Attachment: HIVE-21523.03.patch

> Break up DDLTask - extract View related operations
> --
>
> Key: HIVE-21523
> URL: https://issues.apache.org/jira/browse/HIVE-21523
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21523.01.patch, HIVE-21523.02.patch, 
> HIVE-21523.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to cut everything into smaller, more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #3: extract all the view related operations from the old DDLTask, and 
> move them under the new package.
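
A hypothetical sketch of the per-operation layout described above; all names are illustrative, not the actual classes under org.apache.hadoop.hive.ql.exec.ddl.

{code:java}
public final class DdlRefactorSketch {

  /** Marker for the immutable request objects (the DDLDesc subclasses). */
  interface DdlDesc { }

  /** One small, focused class per DDL statement instead of one giant DDLTask. */
  interface DdlOperation<T extends DdlDesc> {
    int execute(T desc) throws Exception;
  }

  /** Example request: immutable, built once by the semantic analyzer. */
  static final class DropViewDesc implements DdlDesc {
    private final String viewName;
    private final boolean ifExists;

    DropViewDesc(String viewName, boolean ifExists) {
      this.viewName = viewName;
      this.ifExists = ifExists;
    }

    String getViewName() { return viewName; }
    boolean isIfExists() { return ifExists; }
  }

  /** Example operation: the generic task only dispatches to classes like this. */
  static final class DropViewOperation implements DdlOperation<DropViewDesc> {
    @Override
    public int execute(DropViewDesc desc) {
      System.out.println("DROP VIEW " + (desc.isIfExists() ? "IF EXISTS " : "")
          + desc.getViewName());
      return 0;
    }
  }
}
{code}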



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21523) Break up DDLTask - extract View related operations

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21523:
--
Status: Patch Available  (was: Open)

> Break up DDLTask - extract View related operations
> --
>
> Key: HIVE-21523
> URL: https://issues.apache.org/jira/browse/HIVE-21523
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21523.01.patch, HIVE-21523.02.patch, 
> HIVE-21523.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to cut everything into smaller, more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #3: extract all the view related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21523) Break up DDLTask - extract View related operations

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21523:
--
Status: Open  (was: Patch Available)

> Break up DDLTask - extract View related operations
> --
>
> Key: HIVE-21523
> URL: https://issues.apache.org/jira/browse/HIVE-21523
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21523.01.patch, HIVE-21523.02.patch, 
> HIVE-21523.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to cut everything into smaller, more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #3: extract all the view related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21231) HiveJoinAddNotNullRule support for range predicates

2019-03-28 Thread Miklos Gergely (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804304#comment-16804304
 ] 

Miklos Gergely commented on HIVE-21231:
---

I've looked at those for a long time too, and I came to the conclusion that the 
order of the table scans / reducers that can be executed in any order has 
changed. That looks odd in the diff view, but otherwise it is the same plan 
written in a different order.

> HiveJoinAddNotNullRule support for range predicates
> ---
>
> Key: HIVE-21231
> URL: https://issues.apache.org/jira/browse/HIVE-21231
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: HIVE-21231.01.patch, HIVE-21231.02.patch, 
> HIVE-21231.03.patch, HIVE-21231.04.patch, HIVE-21231.05.patch, 
> HIVE-21231.06.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   INNER JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 < t1.col0 AND t0.col1 > t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null for any of the inputs. 
> Currently we do not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21523) Break up DDLTask - extract View related operations

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804302#comment-16804302
 ] 

Hive QA commented on HIVE-21523:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964032/HIVE-21523.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 15877 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz_2] 
(batchId=86)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=163)
org.apache.hadoop.hive.llap.metrics.TestReadWriteLockMetrics.testWithContention 
(batchId=330)
org.apache.hadoop.hive.llap.metrics.TestReadWriteLockMetrics.testWithoutContention
 (batchId=330)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16737/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16737/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16737/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964032 - PreCommit-HIVE-Build

> Break up DDLTask - extract View related operations
> --
>
> Key: HIVE-21523
> URL: https://issues.apache.org/jira/browse/HIVE-21523
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21523.01.patch, HIVE-21523.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to cut everything into smaller, more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #3: extract all the view related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21457) Perf optimizations in ORC split-generation

2019-03-28 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21457:
-
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks Gopal for the review!

> Perf optimizations in ORC split-generation
> --
>
> Key: HIVE-21457
> URL: https://issues.apache.org/jira/browse/HIVE-21457
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-21457.1.patch
>
>
> Minor split generation optimizations
>  * Reuse vectorization checks
>  * Reuse isAcid checks
>  * Reuse filesystem objects
>  * Improved logging (log at top-level instead of inside the thread pool)
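
A sketch of the general "compute once, reuse in the loop" pattern the bullets describe; it is illustrative only and uses a stand-in interface instead of the real Hadoop FileSystem and OrcInputFormat classes.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class SplitGenSketch {

  /** Stand-in for org.apache.hadoop.fs.FileSystem. */
  interface FileSystemLike {
    List<String> listFiles(String dir);
  }

  interface FileSystemFactory {
    FileSystemLike forScheme(String scheme);
  }

  static List<String> generateSplits(List<String> dirs, FileSystemFactory factory,
      boolean vectorized,   // checked once up front, not per directory
      boolean isAcid) {     // checked once up front, not per directory
    // Reuse one FileSystem handle per scheme instead of re-resolving it per dir.
    Map<String, FileSystemLike> fsCache = new HashMap<>();
    List<String> splits = new ArrayList<>();
    for (String dir : dirs) {
      FileSystemLike fs = fsCache.computeIfAbsent(schemeOf(dir), factory::forScheme);
      for (String file : fs.listFiles(dir)) {
        splits.add(file + (isAcid ? " [acid]" : "") + (vectorized ? " [vectorized]" : ""));
      }
    }
    return splits;   // logging about the whole batch happens here, at top level
  }

  private static String schemeOf(String dir) {
    int idx = dir.indexOf("://");
    return idx < 0 ? "file" : dir.substring(0, idx);
  }
}
{code}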



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21231) HiveJoinAddNotNullRule support for range predicates

2019-03-28 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804282#comment-16804282
 ] 

Vineet Garg commented on HIVE-21231:


Code changes look good to me, but I notice plan changes in subquery_scalar and 
query44 which look like a regression to me. Looking more into it.

> HiveJoinAddNotNullRule support for range predicates
> ---
>
> Key: HIVE-21231
> URL: https://issues.apache.org/jira/browse/HIVE-21231
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: HIVE-21231.01.patch, HIVE-21231.02.patch, 
> HIVE-21231.03.patch, HIVE-21231.04.patch, HIVE-21231.05.patch, 
> HIVE-21231.06.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   INNER JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 < t1.col0 AND t0.col1 > t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null for any of the inputs. 
> Currently we do not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21457) Perf optimizations in ORC split-generation

2019-03-28 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-21457:
---
Component/s: Transactions

> Perf optimizations in ORC split-generation
> --
>
> Key: HIVE-21457
> URL: https://issues.apache.org/jira/browse/HIVE-21457
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Minor
> Attachments: HIVE-21457.1.patch
>
>
> Minor split generation optimizations
>  * Reuse vectorization checks
>  * Reuse isAcid checks
>  * Reuse filesystem objects
>  * Improved logging (log at top-level instead of inside the thread pool)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21457) Perf optimizations in ORC split-generation

2019-03-28 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804278#comment-16804278
 ] 

Gopal V commented on HIVE-21457:


LGTM - +1 

FYI [~vgumashta]

> Perf optimizations in ORC split-generation
> --
>
> Key: HIVE-21457
> URL: https://issues.apache.org/jira/browse/HIVE-21457
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Minor
> Attachments: HIVE-21457.1.patch
>
>
> Minor split generation optimizations
>  * Reuse vectorization checks
>  * Reuse isAcid checks
>  * Reuse filesystem objects
>  * Improved logging (log at top-level instead of inside the thread pool)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21523) Break up DDLTask - extract View related operations

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804277#comment-16804277
 ] 

Hive QA commented on HIVE-21523:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
27s{color} | {color:blue} ql in master has 2256 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
53s{color} | {color:red} ql: The patch generated 2 new + 1009 unchanged - 10 
fixed = 1011 total (was 1019) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16737/dev-support/hive-personality.sh
 |
| git revision | master / 1e58bd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16737/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16737/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16737/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Break up DDLTask - extract View related operations
> --
>
> Key: HIVE-21523
> URL: https://issues.apache.org/jira/browse/HIVE-21523
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21523.01.patch, HIVE-21523.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to cut everything into smaller, more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage 

[jira] [Commented] (HIVE-21536) Backport HIVE-17764 to branch-2.3

2019-03-28 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804251#comment-16804251
 ] 

Yuming Wang commented on HIVE-21536:


cc [~ashutoshc]

> Backport HIVE-17764 to branch-2.3
> -
>
> Key: HIVE-21536
> URL: https://issues.apache.org/jira/browse/HIVE-21536
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.3.4
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Attachments: HIVE-21536.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21536) Backport HIVE-17764 to branch-2.3

2019-03-28 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HIVE-21536:
---
  Assignee: Yuming Wang
Attachment: HIVE-21536.01.patch
Status: Patch Available  (was: Open)

> Backport HIVE-17764 to branch-2.3
> -
>
> Key: HIVE-21536
> URL: https://issues.apache.org/jira/browse/HIVE-21536
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.3.4
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Attachments: HIVE-21536.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21526) JSONDropDatabaseMessage needs to have the full database object.

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804250#comment-16804250
 ] 

Vihang Karajgaonkar commented on HIVE-21526:


Looks good to me. +1

> JSONDropDatabaseMessage needs to have the full database object.
> ---
>
> Key: HIVE-21526
> URL: https://issues.apache.org/jira/browse/HIVE-21526
> Project: Hive
>  Issue Type: Improvement
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-21526.1.patch
>
>
> The metastore notification event DROP_DATABASE does not provide full-thrift 
> objects as of now.
> We have added CREATION_TIME to databases in HIVE-21077, and metadata like 
> this would be useful in notification processing. One of the use-cases is 
> IMPALA-8338.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21526) JSONDropDatabaseMessage needs to have the full database object.

2019-03-28 Thread Bharathkrishna Guruvayoor Murali (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804191#comment-16804191
 ] 

Bharathkrishna Guruvayoor Murali commented on HIVE-21526:
-

Tests are passing. I guess the checkstyle error reported can be ignored. ASF 
License errors are unrelated.
[~vihangk1] can you take a final look?

> JSONDropDatabaseMessage needs to have the full database object.
> ---
>
> Key: HIVE-21526
> URL: https://issues.apache.org/jira/browse/HIVE-21526
> Project: Hive
>  Issue Type: Improvement
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-21526.1.patch
>
>
> The metastore notification event DROP_DATABASE does not provide full-thrift 
> objects as of now.
> We have added CREATION_TIME to databases in HIVE-21077, and metadata like 
> this would be useful in notification processing. One of the use-cases is 
> IMPALA-8338.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21443) Better usability for SHOW COMPACTIONS

2019-03-28 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21443:
--
Status: Patch Available  (was: Open)

> Better usability for SHOW COMPACTIONS
> -
>
> Key: HIVE-21443
> URL: https://issues.apache.org/jira/browse/HIVE-21443
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Todd Lipcon
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21443.patch
>
>
> Currently on a test cluster the output of 'SHOW COMPACTIONS' has 117k rows. 
> This makes it basically useless to work with.
> For better usability, we should support syntax like 'SHOW COMPACTIONS IN 
> ' or maybe 'SHOW COMPACTIONS ON ' (particular syntax to be 
> chosen for consistency with other operations I suppose).
> Alternatively (or maybe in addition) it seems like it would be nice to expose 
> the same data in a queryable table (eg in information_schema or a system 
> namespace) so that I could do things like: SELECT dbname, state, count(*) 
> from compactions group by 1,2;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21443) Better usability for SHOW COMPACTIONS

2019-03-28 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804180#comment-16804180
 ] 

Peter Vary commented on HIVE-21443:
---

Sample output:
{code:java}
0: jdbc:hive2://localhost:10003> select * from compactions;
[..]
+---++-+--+--+-+--+-+--+--+-+--+---+-+
| compactions.c_id | compactions.c_catalog | compactions.c_database | 
compactions.c_table | compactions.c_partition | compactions.c_type | 
compactions.c_state | compactions.c_hostname | compactions.c_worker_id | 
compactions.c_start | compactions.c_duration | compactions.c_hadoop_job_id | 
compactions.c_run_as | compactions.c_highest_write_id |
+---++-+--+--+-+--+-+--+--+-+--+---+-+
| 1 | default | default | acid | NULL | minor | failed | PeterVary | 
MBP15.local | 1551275865000 | 11000 | NULL | petervary | 5 |
| 10 | default | default | acid | NULL | major | failed | PeterVary | 
MBP15.local | 1551355199000 | 3 | NULL | petervary | 5 |
| 15 | default | default | acid | NULL | major | failed | PeterVary | 
MBP15.local | 1552074769000 | 22000 | NULL | petervary | 5 |
| 16 | default | default | acid | NULL | major | working | PeterVary | 
MBP15.local | 1553716252000 | NULL | NULL | petervary | 5 |
| 17 | default | default | acid2 | NULL | major | working | PeterVary | 
MBP15.local | 1553785387000 | NULL | NULL | petervary | 5 |
| 18 | default | default | acid3 | NULL | major | initiated | NULL | NULL | 
NULL | NULL | NULL | NULL | NULL |
| 19 | default | default | acid_buck | NULL | major | initiated | NULL | NULL | 
NULL | NULL | NULL | NULL | NULL |
+---++-+--+--+-+--+-+--+--+-+--+---+-+
7 rows selected (18.503 seconds)
0: jdbc:hive2://localhost:10003>{code}
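
For reference, a small JDBC sketch of the aggregate query proposed in the description, run over the queryable compactions table shown above; the connection URL, credentials and column names are taken from or assumed based on the sample output and may differ in the final patch.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class CompactionsQuerySketch {
  public static void main(String[] args) throws Exception {
    // Requires the Hive JDBC driver on the classpath.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10003/default", "hive", "");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT c_database, c_state, count(*) AS cnt "
                 + "FROM compactions GROUP BY c_database, c_state")) {
      while (rs.next()) {
        System.out.printf("%s\t%s\t%d%n",
            rs.getString("c_database"), rs.getString("c_state"), rs.getLong("cnt"));
      }
    }
  }
}
{code}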

> Better usability for SHOW COMPACTIONS
> -
>
> Key: HIVE-21443
> URL: https://issues.apache.org/jira/browse/HIVE-21443
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Todd Lipcon
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21443.patch
>
>
> Currently on a test cluster the output of 'SHOW COMPACTIONS' has 117k rows. 
> This makes it basically useless to work with.
> For better usability, we should support syntax like 'SHOW COMPACTIONS IN 
> ' or maybe 'SHOW COMPACTIONS ON ' (particular syntax to be 
> chosen for consistency with other operations I suppose).
> Alternatively (or maybe in addition) it seems like it would be nice to expose 
> the same data in a queryable table (eg in information_schema or a system 
> namespace) so that I could do things like: SELECT dbname, state, count(*) 
> from compactions group by 1,2;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21443) Better usability for SHOW COMPACTIONS

2019-03-28 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21443:
--
Attachment: HIVE-21443.patch

> Better usability for SHOW COMPACTIONS
> -
>
> Key: HIVE-21443
> URL: https://issues.apache.org/jira/browse/HIVE-21443
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Todd Lipcon
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21443.patch
>
>
> Currently on a test cluster the output of 'SHOW COMPACTIONS' has 117k rows. 
> This makes it basically useless to work with.
> For better usability, we should support syntax like 'SHOW COMPACTIONS IN 
> ' or maybe 'SHOW COMPACTIONS ON ' (particular syntax to be 
> chosen for consistency with other operations I suppose).
> Alternatively (or maybe in addition) it seems like it would be nice to expose 
> the same data in a queryable table (eg in information_schema or a system 
> namespace) so that I could do things like: SELECT dbname, state, count(*) 
> from compactions group by 1,2;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804172#comment-16804172
 ] 

Vihang Karajgaonkar commented on HIVE-21484:


The above test failure is a known flaky test failure, as reported in HIVE-21396. 
I have submitted a patch for HIVE-21396 to disable this test until we fix it. 
While that gets reviewed and submitted, I think it's okay to go ahead and commit 
this patch, since there is no guarantee that {{vector_groupby_reduce}} will not 
fail with the v3 patch attached above.

> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21484.01.patch, HIVE-21484.02.patch, 
> HIVE-21484.03.patch
>
>
> Currently I see the {{getVersion}} implementation in the metastore 
> returning a hard-coded "3.0". It would be good to return the real version of 
> the metastore server using {{HiveVersionInfo}} so that clients can take 
> certain actions based on the metastore server version.
> Possible use-cases are:
> 1. A client can make use of new features introduced in a given Metastore 
> version, or else stick to the base functionality.
> 2. This version number can be used to do a version handshake between client 
> and server in the future to improve our cross-version compatibility story.
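
A minimal sketch of the proposed change, assuming the HiveVersionInfo utility from hive-common is available on the metastore server classpath (the package and method name are my assumption, not verified against the patch).

{code:java}
import org.apache.hive.common.util.HiveVersionInfo;

public final class GetVersionSketch {

  /** Instead of returning the hard-coded "3.0". */
  public String getVersion() {
    // Resolved from the build metadata baked into hive-common,
    // e.g. "4.0.0-SNAPSHOT".
    return HiveVersionInfo.getVersion();
  }
}
{code}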



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-11662) Dynamic partitioning cannot be applied to external table which contains part-spec like directory name

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804170#comment-16804170
 ] 

Hive QA commented on HIVE-11662:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753249/HIVE-11662.2.patch.txt

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16736/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16736/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16736/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-03-28 18:17:21.894
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16736/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-03-28 18:17:21.897
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 1e58bd2 HIVE-20580: OrcInputFormat.isOriginal() should not rely 
on hive.acid.key.index (Peter Vary reviewed by Eugene Koifman, Ashutosh Chauhan 
and Vaibhav Gumashta)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 1e58bd2 HIVE-20580: OrcInputFormat.isOriginal() should not rely 
on hive.acid.key.index (Peter Vary reviewed by Eugene Koifman, Ashutosh Chauhan 
and Vaibhav Gumashta)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-03-28 18:17:22.868
+ rm -rf ../yetus_PreCommit-HIVE-Build-16736
+ mkdir ../yetus_PreCommit-HIVE-Build-16736
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16736
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16736/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not 
exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/plan/ConditionalResolverMergeFiles.java:
 does not exist in index
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java:18
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java' 
with conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java:1590
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java' 
with conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/plan/ConditionalResolverMergeFiles.java:21
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/plan/ConditionalResolverMergeFiles.java' 
with conflicts.
Going to apply patch with: git apply -p1
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java:18
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java' 
with conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java:1590
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java' 
with conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/plan/ConditionalResolverMergeFiles.java:21
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/plan/ConditionalResolverMergeFiles.java' 
with conflicts.
U ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
U ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
U 

[jira] [Commented] (HIVE-21396) TestCliDriver#vector_groupby_reduce is flaky - rounding error

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804160#comment-16804160
 ] 

Vihang Karajgaonkar commented on HIVE-21396:


Created a separate JIRA to track re-enabling the test.

> TestCliDriver#vector_groupby_reduce is flaky - rounding error
> -
>
> Key: HIVE-21396
> URL: https://issues.apache.org/jira/browse/HIVE-21396
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Laszlo Bodor
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-21396.01.patch
>
>
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16349/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16351/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> -5080.17 --> -5080.1699
> actual:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.1699 -362.86928571428564 
> 621.35 44.382142857142857143
> {code}
> expected:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.17 -362.8692857142857 
> 621.35 44.382142857142857143
> {code}
> https://github.com/apache/hive/blob/268a6e5af11e0fdc3887d570c1680035fd9426c3/ql/src/test/results/clientpositive/vector_groupby_reduce.q.out
> it's a result of sum (max(ss_net_profit) np)
> {code}
> select
> ss_ticket_number, sum(ss_item_sk), sum(q), avg(q), sum(np), avg(np), 
> sum(decwc), avg(decwc)
> from
> (select
> ss_ticket_number, ss_item_sk, min(ss_quantity) q, max(ss_net_profit) 
> np, max(ss_wholesale_cost_decimal) decwc
> from
> store_sales_n3
> where ss_ticket_number = 1
> group by ss_ticket_number, ss_item_sk) a
> group by ss_ticket_number
> order by ss_ticket_number
> {code}
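
The flakiness is consistent with floating-point aggregation: double addition is not associative, so a different aggregation order (for example vectorized vs. non-vectorized group-by) can produce a slightly different sum, which then rounds differently in the q.out. A small demonstration with arbitrary values (not the actual test data):

{code:java}
public final class FloatingPointOrderDemo {
  public static void main(String[] args) {
    double a = 0.1, b = 0.2, c = 0.3;
    System.out.println((a + b) + c);   // 0.6000000000000001
    System.out.println(a + (b + c));   // 0.6 -- same inputs, different order
  }
}
{code}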



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21396) TestCliDriver#vector_groupby_reduce is flaky - rounding error

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804159#comment-16804159
 ] 

Vihang Karajgaonkar commented on HIVE-21396:


patch disables the test for TestCliDriver. [~jcamachorodriguez] Can you please 
review?

> TestCliDriver#vector_groupby_reduce is flaky - rounding error
> -
>
> Key: HIVE-21396
> URL: https://issues.apache.org/jira/browse/HIVE-21396
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Laszlo Bodor
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-21396.01.patch
>
>
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16349/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16351/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> -5080.17 --> -5080.1699
> actual:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.1699 -362.86928571428564 
> 621.35 44.382142857142857143
> {code}
> expected:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.17 -362.8692857142857 
> 621.35 44.382142857142857143
> {code}
> https://github.com/apache/hive/blob/268a6e5af11e0fdc3887d570c1680035fd9426c3/ql/src/test/results/clientpositive/vector_groupby_reduce.q.out
> it's a result of sum (max(ss_net_profit) np)
> {code}
> select
> ss_ticket_number, sum(ss_item_sk), sum(q), avg(q), sum(np), avg(np), 
> sum(decwc), avg(decwc)
> from
> (select
> ss_ticket_number, ss_item_sk, min(ss_quantity) q, max(ss_net_profit) 
> np, max(ss_wholesale_cost_decimal) decwc
> from
> store_sales_n3
> where ss_ticket_number = 1
> group by ss_ticket_number, ss_item_sk) a
> group by ss_ticket_number
> order by ss_ticket_number
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21396) TestCliDriver#vector_groupby_reduce is flaky - rounding error

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-21396:
---
Attachment: HIVE-21396.01.patch

> TestCliDriver#vector_groupby_reduce is flaky - rounding error
> -
>
> Key: HIVE-21396
> URL: https://issues.apache.org/jira/browse/HIVE-21396
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-21396.01.patch
>
>
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16349/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16351/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> -5080.17 --> -5080.1699
> actual:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.1699 -362.86928571428564 
> 621.35 44.382142857142857143
> {code}
> expected:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.17 -362.8692857142857 
> 621.35 44.382142857142857143
> {code}
> https://github.com/apache/hive/blob/268a6e5af11e0fdc3887d570c1680035fd9426c3/ql/src/test/results/clientpositive/vector_groupby_reduce.q.out
> it's a result of sum (max(ss_net_profit) np)
> {code}
> select
> ss_ticket_number, sum(ss_item_sk), sum(q), avg(q), sum(np), avg(np), 
> sum(decwc), avg(decwc)
> from
> (select
> ss_ticket_number, ss_item_sk, min(ss_quantity) q, max(ss_net_profit) 
> np, max(ss_wholesale_cost_decimal) decwc
> from
> store_sales_n3
> where ss_ticket_number = 1
> group by ss_ticket_number, ss_item_sk) a
> group by ss_ticket_number
> order by ss_ticket_number
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21396) TestCliDriver#vector_groupby_reduce is flaky - rounding error

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-21396:
--

Assignee: Vihang Karajgaonkar

> TestCliDriver#vector_groupby_reduce is flaky - rounding error
> -
>
> Key: HIVE-21396
> URL: https://issues.apache.org/jira/browse/HIVE-21396
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-21396.01.patch
>
>
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16349/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16351/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> -5080.17 --> -5080.1699
> actual:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.1699 -362.86928571428564 
> 621.35 44.382142857142857143
> {code}
> expected:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.17 -362.8692857142857 
> 621.35 44.382142857142857143
> {code}
> https://github.com/apache/hive/blob/268a6e5af11e0fdc3887d570c1680035fd9426c3/ql/src/test/results/clientpositive/vector_groupby_reduce.q.out
> it's a result of sum (max(ss_net_profit) np)
> {code}
> select
> ss_ticket_number, sum(ss_item_sk), sum(q), avg(q), sum(np), avg(np), 
> sum(decwc), avg(decwc)
> from
> (select
> ss_ticket_number, ss_item_sk, min(ss_quantity) q, max(ss_net_profit) 
> np, max(ss_wholesale_cost_decimal) decwc
> from
> store_sales_n3
> where ss_ticket_number = 1
> group by ss_ticket_number, ss_item_sk) a
> group by ss_ticket_number
> order by ss_ticket_number
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21396) TestCliDriver#vector_groupby_reduce is flaky - rounding error

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-21396:
---
Affects Version/s: 4.0.0
   Status: Patch Available  (was: Open)

> TestCliDriver#vector_groupby_reduce is flaky - rounding error
> -
>
> Key: HIVE-21396
> URL: https://issues.apache.org/jira/browse/HIVE-21396
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Laszlo Bodor
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-21396.01.patch
>
>
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16349/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16351/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> -5080.17 --> -5080.1699
> actual:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.1699 -362.86928571428564 
> 621.35 44.382142857142857143
> {code}
> expected:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.17 -362.8692857142857 
> 621.35 44.382142857142857143
> {code}
> https://github.com/apache/hive/blob/268a6e5af11e0fdc3887d570c1680035fd9426c3/ql/src/test/results/clientpositive/vector_groupby_reduce.q.out
> It is the result of sum(np), where np = max(ss_net_profit):
> {code}
> select
> ss_ticket_number, sum(ss_item_sk), sum(q), avg(q), sum(np), avg(np), 
> sum(decwc), avg(decwc)
> from
> (select
> ss_ticket_number, ss_item_sk, min(ss_quantity) q, max(ss_net_profit) 
> np, max(ss_wholesale_cost_decimal) decwc
> from
> store_sales_n3
> where ss_ticket_number = 1
> group by ss_ticket_number, ss_item_sk) a
> group by ss_ticket_number
> order by ss_ticket_number
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21396) TestCliDriver#vector_groupby_reduce is flaky - rounding error

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804152#comment-16804152
 ] 

Vihang Karajgaonkar commented on HIVE-21396:


Hit this failure in HIVE-21484. I think we should disable this test until it is 
fixed.

> TestCliDriver#vector_groupby_reduce is flaky - rounding error
> -
>
> Key: HIVE-21396
> URL: https://issues.apache.org/jira/browse/HIVE-21396
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Priority: Major
>
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16349/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> http://104.198.109.242/logs/PreCommit-HIVE-Build-16351/failed/61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more/TEST-61-TestCliDriver-multi_insert_partitioned.q-parquet_types.q-udf_to_unix_timestamp.q-and-27-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml
> -5080.17 --> -5080.1699
> actual:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.1699 -362.86928571428564 
> 621.35 44.382142857142857143
> {code}
> expected:
> {code:java}
> 1 85411 816 58.285714285714285 -5080.17 -362.8692857142857 
> 621.35 44.382142857142857143
> {code}
> https://github.com/apache/hive/blob/268a6e5af11e0fdc3887d570c1680035fd9426c3/ql/src/test/results/clientpositive/vector_groupby_reduce.q.out
> It is the result of sum(np), where np = max(ss_net_profit):
> {code}
> select
> ss_ticket_number, sum(ss_item_sk), sum(q), avg(q), sum(np), avg(np), 
> sum(decwc), avg(decwc)
> from
> (select
> ss_ticket_number, ss_item_sk, min(ss_quantity) q, max(ss_net_profit) 
> np, max(ss_wholesale_cost_decimal) decwc
> from
> store_sales_n3
> where ss_ticket_number = 1
> group by ss_ticket_number, ss_item_sk) a
> group by ss_ticket_number
> order by ss_ticket_number
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804147#comment-16804147
 ] 

Vihang Karajgaonkar commented on HIVE-21484:


The test works for me locally and is highly unlikely to be related. Reattaching.

> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21484.01.patch, HIVE-21484.02.patch, 
> HIVE-21484.03.patch
>
>
> Currently the {{getVersion}} implementation in the metastore returns a 
> hard-coded "3.0". It would be good to return the real version of the 
> metastore server using {{HiveVersionInfo}} so that clients can take 
> certain actions based on the metastore server version.
> Possible use-cases are:
> 1. Client A can make use of new features introduced in a given Metastore 
> version, or else stick to the base functionality.
> 2. This version number can be used to do a version handshake between client 
> and server in the future to improve our cross-version compatibility story.
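
A rough sketch of the kind of change this implies (illustrative only, not the attached patch; it assumes {{HiveVersionInfo.getVersion()}} from hive-common is available to the handler, and the handler class/method names below are hypothetical):

{code:java}
// Illustrative sketch, not the actual patch: report the recorded build version
// instead of a hard-coded string.
import org.apache.hive.common.util.HiveVersionInfo;

public class VersionReporter {
  // Hypothetical handler method; the real Thrift handler signature may differ.
  public String getVersion() {
    return HiveVersionInfo.getVersion(); // e.g. "4.0.0-SNAPSHOT"
  }

  public static void main(String[] args) {
    System.out.println(new VersionReporter().getVersion());
  }
}
{code}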



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-28 Thread Vihang Karajgaonkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-21484:
---
Attachment: HIVE-21484.03.patch

> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21484.01.patch, HIVE-21484.02.patch, 
> HIVE-21484.03.patch
>
>
> Currently the {{getVersion}} implementation in the metastore returns a 
> hard-coded "3.0". It would be good to return the real version of the 
> metastore server using {{HiveVersionInfo}} so that clients can take 
> certain actions based on the metastore server version.
> Possible use-cases are:
> 1. Client A can make use of new features introduced in a given Metastore 
> version, or else stick to the base functionality.
> 2. This version number can be used to do a version handshake between client 
> and server in the future to improve our cross-version compatibility story.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21230) LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side (HiveJoinAddNotNullRule bails out for outer joins)

2019-03-28 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21230:
---
Status: Patch Available  (was: Open)

> LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side 
> (HiveJoinAddNotNullRule bails out for outer joins)
> 
>
> Key: HIVE-21230
> URL: https://issues.apache.org/jira/browse/HIVE-21230
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
>  Labels: newbie
> Attachments: HIVE-21230.1.patch, HIVE-21230.2.patch, 
> HIVE-21230.3.patch, HIVE-21230.4.patch, HIVE-21230.5.patch
>
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   LEFT JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 = t1.col0 AND t0.col1 = t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null in the right input and 
> introduce the corresponding filter predicate. Currently, the rule just bails 
> out if it is not an inner join.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveJoinAddNotNullRule.java#L79
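
To make the intended transformation concrete, here is a sketch of the semantically equivalent query once the inferred predicate is pushed to the right input (an illustrative rewrite, not Hive's actual plan output). For a LEFT JOIN, right-side rows with NULL join keys can never match, so filtering them out is safe:

{code:sql}
-- Sketch of the rewrite the rule could produce (illustrative only):
SELECT t0.col0, t0.col1
FROM
  (
    SELECT col0, col1 FROM tab
  ) AS t0
  LEFT JOIN
  (
    SELECT col0, col1 FROM tab
    WHERE col0 IS NOT NULL AND col1 IS NOT NULL
  ) AS t1
ON t0.col0 = t1.col0 AND t0.col1 = t1.col1
{code}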



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21230) LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side (HiveJoinAddNotNullRule bails out for outer joins)

2019-03-28 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21230:
---
Status: Open  (was: Patch Available)

> LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side 
> (HiveJoinAddNotNullRule bails out for outer joins)
> 
>
> Key: HIVE-21230
> URL: https://issues.apache.org/jira/browse/HIVE-21230
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
>  Labels: newbie
> Attachments: HIVE-21230.1.patch, HIVE-21230.2.patch, 
> HIVE-21230.3.patch, HIVE-21230.4.patch, HIVE-21230.5.patch
>
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   LEFT JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 = t1.col0 AND t0.col1 = t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null in the right input and 
> introduce the corresponding filter predicate. Currently, the rule just bails 
> out if it is not an inner join.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveJoinAddNotNullRule.java#L79



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21230) LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side (HiveJoinAddNotNullRule bails out for outer joins)

2019-03-28 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21230:
---
Attachment: HIVE-21230.5.patch

> LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side 
> (HiveJoinAddNotNullRule bails out for outer joins)
> 
>
> Key: HIVE-21230
> URL: https://issues.apache.org/jira/browse/HIVE-21230
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
>  Labels: newbie
> Attachments: HIVE-21230.1.patch, HIVE-21230.2.patch, 
> HIVE-21230.3.patch, HIVE-21230.4.patch, HIVE-21230.5.patch
>
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   LEFT JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 = t1.col0 AND t0.col1 = t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null in the right input and 
> introduce the corresponding filter predicate. Currently, the rule just bails 
> out if it is not an inner join.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveJoinAddNotNullRule.java#L79



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21230) LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side (HiveJoinAddNotNullRule bails out for outer joins)

2019-03-28 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804103#comment-16804103
 ] 

Jesus Camacho Rodriguez commented on HIVE-21230:


+1 (pending tests)

> LEFT OUTER JOIN does not generate transitive IS NOT NULL filter on right side 
> (HiveJoinAddNotNullRule bails out for outer joins)
> 
>
> Key: HIVE-21230
> URL: https://issues.apache.org/jira/browse/HIVE-21230
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
>  Labels: newbie
> Attachments: HIVE-21230.1.patch, HIVE-21230.2.patch, 
> HIVE-21230.3.patch, HIVE-21230.4.patch
>
>
> For instance, given the following query:
> {code:sql}
> SELECT t0.col0, t0.col1
> FROM
>   (
> SELECT col0, col1 FROM tab
>   ) AS t0
>   LEFT JOIN
>   (
> SELECT col0, col1 FROM tab
>   ) AS t1
> ON t0.col0 = t1.col0 AND t0.col1 = t1.col1
> {code}
> we could still infer that col0 and col1 cannot be null in the right input and 
> introduce the corresponding filter predicate. Currently, the rule just bails 
> out if it is not an inner join.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveJoinAddNotNullRule.java#L79



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21532) RuntimeException due to AccessControlException during creating hive-staging-dir

2019-03-28 Thread Oleksandr Polishchuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Polishchuk updated HIVE-21532:

Attachment: (was: 
Opportunity_to_do_next_query_(insert_overwrite_local_directory_'_tmp_test_dir'_row_format_.patch)

> RuntimeException due to AccessControlException during creating 
> hive-staging-dir
> ---
>
> Key: HIVE-21532
> URL: https://issues.apache.org/jira/browse/HIVE-21532
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleksandr Polishchuk
>Priority: Minor
> Attachments: HIVE-21532.1.patch, HIVE-21532.1.patch
>
>
> The bug was found in Hive 2.3.
> Steps leading to the exception:
> 1) Create a user without root permissions on your node.
> 2) The {{hive-site.xml}} file has to contain the following properties:
> {code:xml}
> <property>
>   <name>hive.security.authorization.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hive.security.authorization.manager</name>
>   <value>org.apache.hadoop.hive.ql.security.authorization.plugin.fallback.FallbackHiveAuthorizerFactory</value>
> </property>
> {code}
> 3) Open the Hive CLI and run the following query:
> {code:java}
>  insert overwrite local directory '/tmp/test_dir' row format delimited fields 
> terminated by ',' select * from temp.test;
> {code}
> The previous query fails with the following exception:
> {code:java}
> FAILED: RuntimeException Cannot create staging directory 
> 'hdfs:///tmp/test_dir/.hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1':
>  User testuser(user id 3456)  has been denied access to create 
> .hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1
> {code}
> The investigation shows that if the properties above are deleted from 
> {{hive-site.xml}}, or if {{queryTmpdir}} is passed instead of {{dest_path}} to 
> {{org.apache.hadoop.hive.ql.Context#getTempDirForPath()}} (as was done in 
> Hive 2.1), everything works fine. The method is currently called from 
> {{org.apache.hadoop.hive.ql.parse.SemanticAnalyzer}} as {{String statsTmpLoc 
> = ctx.getTempDirForPath(dest_path).toString();}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21532) RuntimeException due to AccessControlException during creating hive-staging-dir

2019-03-28 Thread Oleksandr Polishchuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Polishchuk updated HIVE-21532:

Attachment: HIVE-21532.1.patch
Status: Patch Available  (was: Open)

> RuntimeException due to AccessControlException during creating 
> hive-staging-dir
> ---
>
> Key: HIVE-21532
> URL: https://issues.apache.org/jira/browse/HIVE-21532
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleksandr Polishchuk
>Priority: Minor
> Attachments: HIVE-21532.1.patch, HIVE-21532.1.patch
>
>
> The bug was found in Hive 2.3.
> Steps leading to the exception:
> 1) Create a user without root permissions on your node.
> 2) The {{hive-site.xml}} file has to contain the following properties:
> {code:xml}
> <property>
>   <name>hive.security.authorization.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hive.security.authorization.manager</name>
>   <value>org.apache.hadoop.hive.ql.security.authorization.plugin.fallback.FallbackHiveAuthorizerFactory</value>
> </property>
> {code}
> 3) Open the Hive CLI and run the following query:
> {code:java}
>  insert overwrite local directory '/tmp/test_dir' row format delimited fields 
> terminated by ',' select * from temp.test;
> {code}
> The previous query fails with the following exception:
> {code:java}
> FAILED: RuntimeException Cannot create staging directory 
> 'hdfs:///tmp/test_dir/.hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1':
>  User testuser(user id 3456)  has been denied access to create 
> .hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1
> {code}
> The investigation shows that if the properties above are deleted from 
> {{hive-site.xml}}, or if {{queryTmpdir}} is passed instead of {{dest_path}} to 
> {{org.apache.hadoop.hive.ql.Context#getTempDirForPath()}} (as was done in 
> Hive 2.1), everything works fine. The method is currently called from 
> {{org.apache.hadoop.hive.ql.parse.SemanticAnalyzer}} as {{String statsTmpLoc 
> = ctx.getTempDirForPath(dest_path).toString();}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21532) RuntimeException due to AccessControlException during creating hive-staging-dir

2019-03-28 Thread Oleksandr Polishchuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Polishchuk updated HIVE-21532:

Attachment: HIVE-21532.1.patch

> RuntimeException due to AccessControlException during creating 
> hive-staging-dir
> ---
>
> Key: HIVE-21532
> URL: https://issues.apache.org/jira/browse/HIVE-21532
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleksandr Polishchuk
>Priority: Minor
> Attachments: HIVE-21532.1.patch, HIVE-21532.1.patch
>
>
> The bug was found in Hive 2.3.
> Steps leading to the exception:
> 1) Create a user without root permissions on your node.
> 2) The {{hive-site.xml}} file has to contain the following properties:
> {code:xml}
> <property>
>   <name>hive.security.authorization.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hive.security.authorization.manager</name>
>   <value>org.apache.hadoop.hive.ql.security.authorization.plugin.fallback.FallbackHiveAuthorizerFactory</value>
> </property>
> {code}
> 3) Open the Hive CLI and run the following query:
> {code:java}
>  insert overwrite local directory '/tmp/test_dir' row format delimited fields 
> terminated by ',' select * from temp.test;
> {code}
> The previous query fails with the following exception:
> {code:java}
> FAILED: RuntimeException Cannot create staging directory 
> 'hdfs:///tmp/test_dir/.hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1':
>  User testuser(user id 3456)  has been denied access to create 
> .hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1
> {code}
> The investigation shows that if the properties above are deleted from 
> {{hive-site.xml}}, or if {{queryTmpdir}} is passed instead of {{dest_path}} to 
> {{org.apache.hadoop.hive.ql.Context#getTempDirForPath()}} (as was done in 
> Hive 2.1), everything works fine. The method is currently called from 
> {{org.apache.hadoop.hive.ql.parse.SemanticAnalyzer}} as {{String statsTmpLoc 
> = ctx.getTempDirForPath(dest_path).toString();}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21520) Query "Submit plan" time reported is incorrect

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804059#comment-16804059
 ] 

Hive QA commented on HIVE-21520:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964005/HIVE-21520.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15877 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.llap.metrics.TestReadWriteLockMetrics.testWithContention 
(batchId=330)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16735/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16735/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16735/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964005 - PreCommit-HIVE-Build

> Query "Submit plan" time reported is incorrect
> --
>
> Key: HIVE-21520
> URL: https://issues.apache.org/jira/browse/HIVE-21520
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Priority: Trivial
> Attachments: HIVE-21520.1.patch
>
>
> Hive master branch + LLAP
> {noformat}
> Query Execution Summary
> --
> OPERATION    DURATION
> --
> Compile Query   0.00s
> Prepare Plan    0.00s
> Get Query Coordinator (AM)  0.00s
> Submit Plan 1553658149.89s
> Start DAG   0.53s
> Run DAG 0.43s
> --
> {noformat}
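
For what it's worth, 1553658149 seconds is roughly the Unix epoch timestamp for 2019-03-27, which suggests the "Submit Plan" duration is being computed against an unset (zero) start time rather than the actual submit start. A minimal sketch of that failure pattern (illustrative only, not the Hive monitor code):

{code:java}
// Illustrative sketch, not Hive code: if the start timestamp is never recorded,
// "end - start" degenerates into the absolute epoch time in seconds, matching
// the ~1.55e9 s figure reported above.
public class SubmitTimer {
  public static void main(String[] args) {
    long startMillis = 0L; // bug: start time never set, defaults to 0
    long endMillis = System.currentTimeMillis();
    System.out.printf("Submit Plan %.2fs%n", (endMillis - startMillis) / 1000.0);
  }
}
{code}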



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21524) Impala Engine

2019-03-28 Thread David Mollitor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804038#comment-16804038
 ] 

David Mollitor commented on HIVE-21524:
---

Another thought... Impala, as I understand it, cannot insert into Avro tables.  
Allowing a user to switch engines would let them insert with Hive and then 
switch over to Impala to query.
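
A hypothetical sketch of that workflow (the 'impala' engine value is a proposal, not an existing setting, and the table names are made up):

{code:sql}
-- Hypothetical workflow sketch; 'impala' is the proposed engine, not yet real.
set hive.execution.engine=tez;
INSERT INTO avro_events SELECT * FROM staging_events;  -- write path via Hive

set hive.execution.engine=impala;                      -- proposed switch
SELECT count(*) FROM avro_events;                      -- read path via Impala
{code}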

> Impala Engine
> -
>
> Key: HIVE-21524
> URL: https://issues.apache.org/jira/browse/HIVE-21524
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 4.0.0
>Reporter: David Mollitor
>Priority: Major
>
> Now that Impala has "dedicated coordinator" capability, it could be 
> interesting to pair HiveServer2 instances with Impala dedicated coordinators 
> on the same localhost.  A client could request an 'impala' execution engine 
> and subsequent queries would be routed to the local coordinator.
> {code:sql}
> set hive.execution.engine=impala;
> {code}
> This would allow clients seamless access to both capabilities without needing 
> different connections or drivers, Hive would also be a central location for 
> auditing and authorization.
> https://www.cloudera.com/documentation/enterprise/latest/topics/impala_dedicated_coordinator.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21520) Query "Submit plan" time reported is incorrect

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804032#comment-16804032
 ] 

Hive QA commented on HIVE-21520:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
19s{color} | {color:blue} ql in master has 2256 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16735/dev-support/hive-personality.sh
 |
| git revision | master / 1e58bd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16735/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16735/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Query "Submit plan" time reported is incorrect
> --
>
> Key: HIVE-21520
> URL: https://issues.apache.org/jira/browse/HIVE-21520
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Priority: Trivial
> Attachments: HIVE-21520.1.patch
>
>
> Hive master branch + LLAP
> {noformat}
> Query Execution Summary
> --
> OPERATION    DURATION
> --
> Compile Query   0.00s
> Prepare Plan    0.00s
> Get Query Coordinator (AM)  0.00s
> Submit Plan 1553658149.89s
> Start DAG   0.53s
> Run DAG 0.43s
> --
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21316) Comparision of varchar column and string literal should happen in varchar

2019-03-28 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21316:

Attachment: HIVE-21316.07.patch

> Comparision of varchar column and string literal should happen in varchar
> -
>
> Key: HIVE-21316
> URL: https://issues.apache.org/jira/browse/HIVE-21316
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21316.01.patch, HIVE-21316.02.patch, 
> HIVE-21316.03.patch, HIVE-21316.04.patch, HIVE-21316.05.patch, 
> HIVE-21316.06.patch, HIVE-21316.06.patch, HIVE-21316.07.patch
>
>
> this is most probably the root cause behind HIVE-21310 as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21443) Better usability for SHOW COMPACTIONS

2019-03-28 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-21443:
-

Assignee: Peter Vary

> Better usability for SHOW COMPACTIONS
> -
>
> Key: HIVE-21443
> URL: https://issues.apache.org/jira/browse/HIVE-21443
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Todd Lipcon
>Assignee: Peter Vary
>Priority: Major
>
> Currently on a test cluster the output of 'SHOW COMPACTIONS' has 117k rows. 
> This makes it basically useless to work with.
> For better usability, we should support syntax like 'SHOW COMPACTIONS IN 
> ' or maybe 'SHOW COMPACTIONS ON ' (particular syntax to be 
> chosen for consistency with other operations I suppose).
> Alternatively (or maybe in addition) it seems like it would be nice to expose 
> the same data in a queryable table (eg in information_schema or a system 
> namespace) so that I could do things like: SELECT dbname, state, count(*) 
> from compactions group by 1,2;
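
A sketch of the proposed usage (none of this syntax exists yet; the database name and the system view name below are placeholders):

{code:sql}
-- Proposed, not existing: scope the listing to one database.
SHOW COMPACTIONS IN my_db;

-- Proposed, not existing: expose the same data as a queryable system view.
SELECT dbname, state, count(*)
FROM sys.compactions
GROUP BY dbname, state;
{code}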



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21316) Comparision of varchar column and string literal should happen in varchar

2019-03-28 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21316:

Attachment: HIVE-21316.06.patch

> Comparision of varchar column and string literal should happen in varchar
> -
>
> Key: HIVE-21316
> URL: https://issues.apache.org/jira/browse/HIVE-21316
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21316.01.patch, HIVE-21316.02.patch, 
> HIVE-21316.03.patch, HIVE-21316.04.patch, HIVE-21316.05.patch, 
> HIVE-21316.06.patch, HIVE-21316.06.patch
>
>
> this is most probably the root cause behind HIVE-21310 as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21001) Upgrade to calcite-1.19

2019-03-28 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21001:

Attachment: HIVE-21001.48.patch

> Upgrade to calcite-1.19
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch, HIVE-21001.18.patch, 
> HIVE-21001.18.patch, HIVE-21001.19.patch, HIVE-21001.20.patch, 
> HIVE-21001.21.patch, HIVE-21001.22.patch, HIVE-21001.22.patch, 
> HIVE-21001.22.patch, HIVE-21001.23.patch, HIVE-21001.24.patch, 
> HIVE-21001.26.patch, HIVE-21001.26.patch, HIVE-21001.26.patch, 
> HIVE-21001.26.patch, HIVE-21001.26.patch, HIVE-21001.27.patch, 
> HIVE-21001.28.patch, HIVE-21001.29.patch, HIVE-21001.29.patch, 
> HIVE-21001.30.patch, HIVE-21001.31.patch, HIVE-21001.32.patch, 
> HIVE-21001.34.patch, HIVE-21001.35.patch, HIVE-21001.36.patch, 
> HIVE-21001.37.patch, HIVE-21001.38.patch, HIVE-21001.39.patch, 
> HIVE-21001.40.patch, HIVE-21001.41.patch, HIVE-21001.42.patch, 
> HIVE-21001.43.patch, HIVE-21001.44.patch, HIVE-21001.45.patch, 
> HIVE-21001.45.patch, HIVE-21001.46.patch, HIVE-21001.47.patch, 
> HIVE-21001.48.patch, HIVE-21001.48.patch, HIVE-21001.48.patch, 
> HIVE-21001.48.patch
>
>
> XLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21523) Break up DDLTask - extract View related operations

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21523:
--
Status: Open  (was: Patch Available)

> Break up DDLTask - extract View related operations
> --
>
> Key: HIVE-21523
> URL: https://issues.apache.org/jira/browse/HIVE-21523
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21523.01.patch, HIVE-21523.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these so that everything is split into smaller, more 
> focused classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the 
> code base, the new ones in the new package are called DDLTask2 and DDLWork2, 
> thus avoiding the use of fully qualified class names where both the old and 
> the new classes are in use.
> Step #3: extract all the view related operations from the old DDLTask, and 
> move them under the new package.
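
As a rough illustration of the target shape (class names below are invented for the sketch and do not match the actual patch): each DDL statement gets a small immutable desc plus a dedicated operation class, so the dispatching task stays agnostic of the individual operations.

{code:java}
// Illustrative sketch only; names are hypothetical, not the real classes.
public final class DropViewDesc {            // immutable request (DDLDesc-style)
  private final String viewName;

  public DropViewDesc(String viewName) {
    this.viewName = viewName;
  }

  public String getViewName() {
    return viewName;
  }
}

class DropViewOperation {                    // one class per DDL operation
  private final DropViewDesc desc;

  DropViewOperation(DropViewDesc desc) {
    this.desc = desc;
  }

  int execute() {
    // A real operation would call the metadata layer to drop desc.getViewName();
    // returning 0 follows the existing "0 means success" task convention.
    return 0;
  }
}
{code}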



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21523) Break up DDLTask - extract View related operations

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21523:
--
Status: Patch Available  (was: Open)

> Break up DDLTask - extract View related operations
> --
>
> Key: HIVE-21523
> URL: https://issues.apache.org/jira/browse/HIVE-21523
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21523.01.patch, HIVE-21523.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these so that everything is split into smaller, more 
> focused classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the 
> code base, the new ones in the new package are called DDLTask2 and DDLWork2, 
> thus avoiding the use of fully qualified class names where both the old and 
> the new classes are in use.
> Step #3: extract all the view related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21523) Break up DDLTask - extract View related operations

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21523:
--
Attachment: HIVE-21523.02.patch

> Break up DDLTask - extract View related operations
> --
>
> Key: HIVE-21523
> URL: https://issues.apache.org/jira/browse/HIVE-21523
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21523.01.patch, HIVE-21523.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these so that everything is split into smaller, more 
> focused classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the 
> code base, the new ones in the new package are called DDLTask2 and DDLWork2, 
> thus avoiding the use of fully qualified class names where both the old and 
> the new classes are in use.
> Step #3: extract all the view related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21516) Fix spark downloading for q tests

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803916#comment-16803916
 ] 

Hive QA commented on HIVE-21516:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964002/HIVE-21516.06.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15877 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16734/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16734/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16734/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964002 - PreCommit-HIVE-Build

> Fix spark downloading for q tests
> -
>
> Key: HIVE-21516
> URL: https://issues.apache.org/jira/browse/HIVE-21516
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21516.01.patch, HIVE-21516.02.patch, 
> HIVE-21516.03.patch, HIVE-21516.04.patch, HIVE-21516.05.patch, 
> HIVE-21516.06.patch
>
>
> Currently itests/pom.xml declares a command to generate the download script 
> for spark, so it is re-generated every time any maven command is executed 
> for any sub-project of itests. As a side effect, it leaves download.sh 
> files everywhere. The download.sh file is almost entirely static; there is no 
> need to recreate it every time, as it only requires $spark.version as a parameter.
> Also, it only works properly under Linux, as it relies on the md5sum 
> program, which is not present on OS X. This means that if the spark tarball is 
> partially downloaded on OS X, it would never be re-downloaded. This 
> should be fixed by making it work with md5 on OS X as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21524) Impala Engine

2019-03-28 Thread David Mollitor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803907#comment-16803907
 ] 

David Mollitor commented on HIVE-21524:
---

It would allow some level of unification. Authorization and auditing could 
all occur in one place, users would only have to worry about one 
connection string, and, as I understand it, Impala does not consider Hive locks 
when performing its actions. Tunneling the queries through HS2 first 
could apply the appropriate locks before passing the query to Impala.

> Impala Engine
> -
>
> Key: HIVE-21524
> URL: https://issues.apache.org/jira/browse/HIVE-21524
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 4.0.0
>Reporter: David Mollitor
>Priority: Major
>
> Now that Impala has "dedicated coordinator" capability, it could be 
> interesting to pair HiveServer2 instances with Impala dedicated coordinators 
> on the same localhost.  A client could request an 'impala' execution engine 
> and subsequent queries would be routed to the local coordinator.
> {code:sql}
> set hive.execution.engine=impala;
> {code}
> This would allow clients seamless access to both capabilities without needing 
> different connections or drivers, Hive would also be a central location for 
> auditing and authorization.
> https://www.cloudera.com/documentation/enterprise/latest/topics/impala_dedicated_coordinator.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21532) RuntimeException due to AccessControlException during creating hive-staging-dir

2019-03-28 Thread Oleksandr Polishchuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Polishchuk updated HIVE-21532:

Attachment: 
Opportunity_to_do_next_query_(insert_overwrite_local_directory_'_tmp_test_dir'_row_format_.patch

> RuntimeException due to AccessControlException during creating 
> hive-staging-dir
> ---
>
> Key: HIVE-21532
> URL: https://issues.apache.org/jira/browse/HIVE-21532
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleksandr Polishchuk
>Priority: Minor
> Attachments: 
> Opportunity_to_do_next_query_(insert_overwrite_local_directory_'_tmp_test_dir'_row_format_.patch
>
>
> The bug was found in Hive 2.3.
> Steps leading to the exception:
> 1) Create a user without root permissions on your node.
> 2) The {{hive-site.xml}} file has to contain the following properties:
> {code:xml}
> <property>
>   <name>hive.security.authorization.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hive.security.authorization.manager</name>
>   <value>org.apache.hadoop.hive.ql.security.authorization.plugin.fallback.FallbackHiveAuthorizerFactory</value>
> </property>
> {code}
> 3) Open the Hive CLI and run the following query:
> {code:java}
>  insert overwrite local directory '/tmp/test_dir' row format delimited fields 
> terminated by ',' select * from temp.test;
> {code}
> The previous query fails with the following exception:
> {code:java}
> FAILED: RuntimeException Cannot create staging directory 
> 'hdfs:///tmp/test_dir/.hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1':
>  User testuser(user id 3456)  has been denied access to create 
> .hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1
> {code}
> The investigation shows that if the properties above are deleted from 
> {{hive-site.xml}}, or if {{queryTmpdir}} is passed instead of {{dest_path}} to 
> {{org.apache.hadoop.hive.ql.Context#getTempDirForPath()}} (as was done in 
> Hive 2.1), everything works fine. The method is currently called from 
> {{org.apache.hadoop.hive.ql.parse.SemanticAnalyzer}} as {{String statsTmpLoc 
> = ctx.getTempDirForPath(dest_path).toString();}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21532) RuntimeException due to AccessControlException during creating hive-staging-dir

2019-03-28 Thread Oleksandr Polishchuk (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803892#comment-16803892
 ] 

Oleksandr Polishchuk commented on HIVE-21532:
-

*FIXED*

In Hive-2.3. See attached patch.

*ROOT CAUSE*

{{dest_path}} was passed instead of {{queryTmpdir}} (which was passed in Hive 2.1), 
so the restrictions related to {{FallbackHiveAuthorizerFactory}} apply.

*SOLUTION*

Pass {{queryTmpdir}} instead of {{dest_path}} in 
{{org.apache.hadoop.hive.ql.parse.SemanticAnalyzer}}:

{code:java}
String statsTmpLoc = ctx.getTempDirForPath(queryTmpdir).toString();
{code}

The properties {{hive.security.authorization.enabled}} and 
{{hive.security.authorization.manager}} were deleted from {{hive-site.xml}}.


*EFFECTS*
 - The temporary directory is created successfully.
 - Access is allowed for a user without root permissions.

> RuntimeException due to AccessControlException during creating 
> hive-staging-dir
> ---
>
> Key: HIVE-21532
> URL: https://issues.apache.org/jira/browse/HIVE-21532
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleksandr Polishchuk
>Priority: Minor
>
> The bug was found in Hive 2.3.
> Steps leading to the exception:
> 1) Create a user without root permissions on your node.
> 2) The {{hive-site.xml}} file has to contain the following properties:
> {code:xml}
> <property>
>   <name>hive.security.authorization.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hive.security.authorization.manager</name>
>   <value>org.apache.hadoop.hive.ql.security.authorization.plugin.fallback.FallbackHiveAuthorizerFactory</value>
> </property>
> {code}
> 3) Open the Hive CLI and run the following query:
> {code:java}
>  insert overwrite local directory '/tmp/test_dir' row format delimited fields 
> terminated by ',' select * from temp.test;
> {code}
> The previous query fails with the following exception:
> {code:java}
> FAILED: RuntimeException Cannot create staging directory 
> 'hdfs:///tmp/test_dir/.hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1':
>  User testuser(user id 3456)  has been denied access to create 
> .hive-staging_hive_2019-03-28_11-51-05_319_5882446299335967521-1
> {code}
> The investigation shows that if the properties above are deleted from 
> {{hive-site.xml}}, or if {{queryTmpdir}} is passed instead of {{dest_path}} to 
> {{org.apache.hadoop.hive.ql.Context#getTempDirForPath()}} (as was done in 
> Hive 2.1), everything works fine. The method is currently called from 
> {{org.apache.hadoop.hive.ql.parse.SemanticAnalyzer}} as {{String statsTmpLoc 
> = ctx.getTempDirForPath(dest_path).toString();}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21525) [cosmetic] reformat code in NanoTimeUtils.java

2019-03-28 Thread Karen Coppage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803889#comment-16803889
 ] 

Karen Coppage commented on HIVE-21525:
--

Thanks, Jesus!

> [cosmetic] reformat code in NanoTimeUtils.java
> --
>
> Key: HIVE-21525
> URL: https://issues.apache.org/jira/browse/HIVE-21525
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Trivial
> Fix For: 4.0.0
>
> Attachments: HIVE-21525.patch
>
>
> indentation is off by 1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21516) Fix spark downloading for q tests

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803880#comment-16803880
 ] 

Hive QA commented on HIVE-21516:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16734/dev-support/hive-personality.sh
 |
| git revision | master / 1e58bd2 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16734/yetus/patch-asflicense-problems.txt
 |
| modules | C: itests itests/hive-unit itests/qtest-spark U: itests |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16734/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Fix spark downloading for q tests
> -
>
> Key: HIVE-21516
> URL: https://issues.apache.org/jira/browse/HIVE-21516
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21516.01.patch, HIVE-21516.02.patch, 
> HIVE-21516.03.patch, HIVE-21516.04.patch, HIVE-21516.05.patch, 
> HIVE-21516.06.patch
>
>
> Currently itests/pom.xml declares a command to generate the download script 
> for spark, so it is re-generated every time any maven command is executed 
> for any sub-project of itests. As a side effect, it leaves download.sh 
> files everywhere. The download.sh file is almost entirely static; there is no 
> need to recreate it every time, as it only requires $spark.version as a parameter.
> Also, it only works properly under Linux, as it relies on the md5sum 
> program, which is not present on OS X. This means that if the spark tarball is 
> partially downloaded on OS X, it would never be re-downloaded. This 
> should be fixed by making it work with md5 on OS X as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21485) Hive desc operation takes more than 100 seconds after upgrading from Hive 1.2.1 to 2.3.4

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803865#comment-16803865
 ] 

Hive QA commented on HIVE-21485:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963994/HIVE-21485.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15877 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16733/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16733/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16733/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963994 - PreCommit-HIVE-Build

> Hive desc operation takes more than 100 seconds after upgrading from Hive 
> 1.2.1 to 2.3.4
> 
>
> Key: HIVE-21485
> URL: https://issues.apache.org/jira/browse/HIVE-21485
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Hive
>Affects Versions: 2.3.4
>Reporter: Qingxin Wu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21485.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Hive desc [formatted|extended] operations cost more than 100 seconds after 
> upgrading from Hive 1.2.1 to 2.3.4. This is mainly caused by showing stats 
> for partitioned tables, which was introduced by HIVE-16098, when the 
> partitioned table has a large number of partitions. In our case, the number 
> of partitions is 187221.
> {code:java}
> hive> desc bus.kafka_data;
> OK
> id                      string
> ...
> d                       map
> stat_date               string
> log_id                  string
> # Partition Information
> # col_name              data_type               comment
> stat_date               string
> log_id                  string
> Time taken: 115.342 seconds, Fetched: 42 row(s)
> {code}
> The same operation executed in Hive 1.2.1 cost only 2 seconds.
> {code:java}
> hive> desc bus.kafka_data;
> OK
> id                      string
> ...
> d                       map
> stat_date               string
> log_id                  string
> # Partition Information
> # col_name              data_type               comment
> stat_date               string
> log_id                  string
> Time taken: 2.037 seconds, Fetched: 42 row(s)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21485) Hive desc operation takes more than 100 seconds after upgrading from Hive 1.2.1 to 2.3.4

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803850#comment-16803850
 ] 

Hive QA commented on HIVE-21485:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
36s{color} | {color:blue} ql in master has 2256 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 1 new + 1 unchanged - 0 fixed 
= 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16733/dev-support/hive-personality.sh
 |
| git revision | master / 1e58bd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16733/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16733/yetus/patch-asflicense-problems.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16733/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive desc operation takes more than 100 seconds after upgrading from Hive 
> 1.2.1 to 2.3.4
> 
>
> Key: HIVE-21485
> URL: https://issues.apache.org/jira/browse/HIVE-21485
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Hive
>Affects Versions: 2.3.4
>Reporter: Qingxin Wu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21485.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The Hive desc [formatted|extended] operation costs more than 100 seconds after 
> upgrading from Hive 1.2.1 to 2.3.4. This is mainly caused by showing stats 
> for partitioned tables, which was introduced by HIVE-16098, when the 
> partitioned tables have a large number of partitions. In our case, the number 
> of partitions is 187221.
> {code:java}
> hive> desc bus.kafka_data;
> OK
> id

[jira] [Commented] (HIVE-11662) Dynamic partitioning cannot be applied to external table which contains part-spec like directory name

2019-03-28 Thread Shaik Idris Ali (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803841#comment-16803841
 ] 

Shaik Idris Ali commented on HIVE-11662:


Hi,

Any update on this Jira? This looks like a valid issue, especially when we use 
data generated by Presto/Athena, which seamlessly allows having an equals sign 
(=) in the table path.

> Dynamic partitioning cannot be applied to external table which contains 
> part-spec like directory name
> -
>
> Key: HIVE-11662
> URL: https://issues.apache.org/jira/browse/HIVE-11662
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-11662.1.patch.txt, HIVE-11662.2.patch.txt
>
>
> Some users want to use a part-spec-like directory name in their partitioned 
> table locations, something like:
> {noformat}
> /something/warehouse/some_key=some_value
> {noformat}
> DP calculates additional partitions from the full path, and throws an exception 
> something like:
> {noformat}
> Failed with exception Partition spec {some_key=some_value, 
> part_key=part_value} contains non-partition columns
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> {noformat}
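
For illustration only (this is not Hive's MoveTask code): if the partition spec is derived by scanning every path component for key=value pairs, the table location's own some_key=some_value directory contributes an extra, bogus entry, which is exactly what the error above reports.

{code:java}
// Illustrative sketch, not Hive's implementation: deriving a partition spec by
// scanning the full path picks up the table location's "some_key=some_value"
// directory as if it were a partition column.
import java.util.LinkedHashMap;
import java.util.Map;

public class PartSpecFromPathSketch {
  static Map<String, String> partSpecFromPath(String fullPath) {
    Map<String, String> spec = new LinkedHashMap<>();
    for (String component : fullPath.split("/")) {
      int eq = component.indexOf('=');
      if (eq > 0) {
        spec.put(component.substring(0, eq), component.substring(eq + 1));
      }
    }
    return spec;
  }

  public static void main(String[] args) {
    // Prints {some_key=some_value, part_key=part_value}; only part_key is a real
    // partition column, so validation fails with "contains non-partition columns".
    System.out.println(
        partSpecFromPath("/something/warehouse/some_key=some_value/part_key=part_value"));
  }
}
{code}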



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.19

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803814#comment-16803814
 ] 

Hive QA commented on HIVE-21001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
37s{color} | {color:blue} ql in master has 2256 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} accumulo-handler in master has 21 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 7 new + 303 unchanged - 45 
fixed = 310 total (was 348) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
12s{color} | {color:red} root: The patch generated 7 new + 303 unchanged - 45 
fixed = 310 total (was 348) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
36s{color} | {color:red} ql generated 1 new + 2256 unchanged - 0 fixed = 2257 
total (was 2256) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 13m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
45s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Switch statement found in 
org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTBuilder.literal(RexLiteral)
 where default case is missing  At ASTBuilder.java:where default case is 
missing  At ASTBuilder.java:[lines 279-290] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16732/dev-support/hive-personality.sh
 |
| git revision | master / 1e58bd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16732/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16732/yetus/diff-checkstyle-root.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16732/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16732/yetus/new-findbugs-ql.html
 |
| asflicense | 

[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.19

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803806#comment-16803806
 ] 

Hive QA commented on HIVE-21001:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963992/HIVE-21001.48.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 15877 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz_2] 
(batchId=86)
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16732/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16732/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16732/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963992 - PreCommit-HIVE-Build

> Upgrade to calcite-1.19
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch, HIVE-21001.18.patch, 
> HIVE-21001.18.patch, HIVE-21001.19.patch, HIVE-21001.20.patch, 
> HIVE-21001.21.patch, HIVE-21001.22.patch, HIVE-21001.22.patch, 
> HIVE-21001.22.patch, HIVE-21001.23.patch, HIVE-21001.24.patch, 
> HIVE-21001.26.patch, HIVE-21001.26.patch, HIVE-21001.26.patch, 
> HIVE-21001.26.patch, HIVE-21001.26.patch, HIVE-21001.27.patch, 
> HIVE-21001.28.patch, HIVE-21001.29.patch, HIVE-21001.29.patch, 
> HIVE-21001.30.patch, HIVE-21001.31.patch, HIVE-21001.32.patch, 
> HIVE-21001.34.patch, HIVE-21001.35.patch, HIVE-21001.36.patch, 
> HIVE-21001.37.patch, HIVE-21001.38.patch, HIVE-21001.39.patch, 
> HIVE-21001.40.patch, HIVE-21001.41.patch, HIVE-21001.42.patch, 
> HIVE-21001.43.patch, HIVE-21001.44.patch, HIVE-21001.45.patch, 
> HIVE-21001.45.patch, HIVE-21001.46.patch, HIVE-21001.47.patch, 
> HIVE-21001.48.patch, HIVE-21001.48.patch, HIVE-21001.48.patch
>
>
> XLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21520) Query "Submit plan" time reported is incorrect

2019-03-28 Thread Rajesh Balamohan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-21520:

Status: Patch Available  (was: Open)

> Query "Submit plan" time reported is incorrect
> --
>
> Key: HIVE-21520
> URL: https://issues.apache.org/jira/browse/HIVE-21520
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Priority: Trivial
> Attachments: HIVE-21520.1.patch
>
>
> Hive master branch + LLAP
> {noformat}
> Query Execution Summary
> ----------------------------------------------------------
> OPERATION                                         DURATION
> ----------------------------------------------------------
> Compile Query                                        0.00s
> Prepare Plan                                         0.00s
> Get Query Coordinator (AM)                           0.00s
> Submit Plan                                 1553658149.89s
> Start DAG                                            0.53s
> Run DAG                                              0.43s
> ----------------------------------------------------------
> {noformat}
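
The reported value is telling: 1553658149 seconds is a Unix epoch timestamp from late March 2019, which suggests the duration is computed against a start time that was never recorded. A minimal sketch of that failure mode, with hypothetical variable names (not the actual summary-printing code):

{code:java}
// Hypothetical sketch, not Hive's code: if the phase's start timestamp is never
// set, it stays 0, and the "duration" degenerates into the current epoch time.
public class SubmitPlanDurationSketch {
  public static void main(String[] args) {
    long submitPlanStartMillis = 0L;                       // never recorded for this phase
    long submitPlanEndMillis = System.currentTimeMillis(); // ~1553658149890 in March 2019

    double reportedSeconds = (submitPlanEndMillis - submitPlanStartMillis) / 1000.0;
    System.out.printf("Submit Plan %20.2fs%n", reportedSeconds); // ~1553658149.89s
  }
}
{code}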



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21520) Query "Submit plan" time reported is incorrect

2019-03-28 Thread Rajesh Balamohan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803775#comment-16803775
 ] 

Rajesh Balamohan commented on HIVE-21520:
-

.1 patch. 

{noformat}

Query Execution Summary
----------------------------------------------------------
OPERATION                                         DURATION
----------------------------------------------------------
Compile Query                                        4.21s
Prepare Plan                                         0.23s
Get Query Coordinator (AM)                           0.01s
Submit Plan                                          0.39s
Start DAG                                            1.03s
Run DAG                                             12.66s
----------------------------------------------------------
{noformat}

> Query "Submit plan" time reported is incorrect
> --
>
> Key: HIVE-21520
> URL: https://issues.apache.org/jira/browse/HIVE-21520
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Priority: Trivial
> Attachments: HIVE-21520.1.patch
>
>
> Hive master branch + LLAP
> {noformat}
> Query Execution Summary
> ----------------------------------------------------------
> OPERATION                                         DURATION
> ----------------------------------------------------------
> Compile Query                                        0.00s
> Prepare Plan                                         0.00s
> Get Query Coordinator (AM)                           0.00s
> Submit Plan                                 1553658149.89s
> Start DAG                                            0.53s
> Run DAG                                              0.43s
> ----------------------------------------------------------
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21520) Query "Submit plan" time reported is incorrect

2019-03-28 Thread Rajesh Balamohan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-21520:

Attachment: HIVE-21520.1.patch

> Query "Submit plan" time reported is incorrect
> --
>
> Key: HIVE-21520
> URL: https://issues.apache.org/jira/browse/HIVE-21520
> Project: Hive
>  Issue Type: Bug
>Reporter: Rajesh Balamohan
>Priority: Trivial
> Attachments: HIVE-21520.1.patch
>
>
> Hive master branch + LLAP
> {noformat}
> Query Execution Summary
> ----------------------------------------------------------
> OPERATION                                         DURATION
> ----------------------------------------------------------
> Compile Query                                        0.00s
> Prepare Plan                                         0.00s
> Get Query Coordinator (AM)                           0.00s
> Submit Plan                                 1553658149.89s
> Start DAG                                            0.53s
> Run DAG                                              0.43s
> ----------------------------------------------------------
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21516) Fix spark downloading for q tests

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21516:
--
Status: Patch Available  (was: Open)

> Fix spark downloading for q tests
> -
>
> Key: HIVE-21516
> URL: https://issues.apache.org/jira/browse/HIVE-21516
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21516.01.patch, HIVE-21516.02.patch, 
> HIVE-21516.03.patch, HIVE-21516.04.patch, HIVE-21516.05.patch, 
> HIVE-21516.06.patch
>
>
> Currently itests/pom.xml declares a command to generate the download script 
> for spark, so it is re-generated every time any maven command is executed 
> for any sub-project of itests. As a side effect, it leaves download.sh 
> files everywhere. The download.sh file is almost totally static; there is no 
> need to recreate it every time, it just requires $spark.version as a parameter.
> Also, it only works properly under Linux, as it relies on the md5sum 
> program, which is not present on OS X. This means that if the spark tarball is 
> partially downloaded on OS X, it will never be re-downloaded. This 
> should be fixed by making it work using md5 on OS X as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21516) Fix spark downloading for q tests

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21516:
--
Status: Open  (was: Patch Available)

> Fix spark downloading for q tests
> -
>
> Key: HIVE-21516
> URL: https://issues.apache.org/jira/browse/HIVE-21516
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21516.01.patch, HIVE-21516.02.patch, 
> HIVE-21516.03.patch, HIVE-21516.04.patch, HIVE-21516.05.patch, 
> HIVE-21516.06.patch
>
>
> Currently itests/pom.xml declares a command to generate the download script 
> for spark, so it is re-generated every time any maven command is executed 
> for any sub-project of itests. As a side effect, it leaves download.sh 
> files everywhere. The download.sh file is almost totally static; there is no 
> need to recreate it every time, it just requires $spark.version as a parameter.
> Also, it only works properly under Linux, as it relies on the md5sum 
> program, which is not present on OS X. This means that if the spark tarball is 
> partially downloaded on OS X, it will never be re-downloaded. This 
> should be fixed by making it work using md5 on OS X as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21516) Fix spark downloading for q tests

2019-03-28 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21516:
--
Attachment: HIVE-21516.06.patch

> Fix spark downloading for q tests
> -
>
> Key: HIVE-21516
> URL: https://issues.apache.org/jira/browse/HIVE-21516
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21516.01.patch, HIVE-21516.02.patch, 
> HIVE-21516.03.patch, HIVE-21516.04.patch, HIVE-21516.05.patch, 
> HIVE-21516.06.patch
>
>
> Currently itests/pom.xml declares a command to generate the download script 
> for spark, so it is re-generated every time any maven command is executed 
> for any sub-project of itests. As a side effect, it leaves download.sh 
> files everywhere. The download.sh file is almost totally static; there is no 
> need to recreate it every time, it just requires $spark.version as a parameter.
> Also, it only works properly under Linux, as it relies on the md5sum 
> program, which is not present on OS X. This means that if the spark tarball is 
> partially downloaded on OS X, it will never be re-downloaded. This 
> should be fixed by making it work using md5 on OS X as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21492) VectorizedParquetRecordReader can't to read parquet file generated using thrift/custom tool

2019-03-28 Thread Ganesha Shreedhara (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803742#comment-16803742
 ] 

Ganesha Shreedhara commented on HIVE-21492:
---

Please review the patch. 

> VectorizedParquetRecordReader can't to read parquet file generated using 
> thrift/custom tool
> ---
>
> Key: HIVE-21492
> URL: https://issues.apache.org/jira/browse/HIVE-21492
> Project: Hive
>  Issue Type: Bug
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-21492.patch
>
>
> Taking an example of a parquet table having an array of integers as below. 
> {code:java}
> CREATE EXTERNAL TABLE (`list_of_ints` array<int>)
> STORED AS PARQUET 
> LOCATION '{location}';
> {code}
> Parquet file generated using hive will have schema for Type as below:
> {code:java}
> group list_of_ints (LIST) { repeated group bag { optional int32 array;\n};\n} 
> {code}
> Parquet file generated using thrift or any custom tool (using 
> org.apache.parquet.io.api.RecordConsumer)
> may have schema for Type as below:
> {code:java}
> required group list_of_ints (LIST) { repeated int32 list_of_tuple} {code}
> VectorizedParquetRecordReader handles only parquet files generated using hive. 
> It throws the following exception when a parquet file generated using thrift is 
> read, because of the changes done as part of HIVE-18553.
> {code:java}
> Caused by: java.lang.ClassCastException: repeated int32 list_of_ints_tuple is 
> not a group
>  at org.apache.parquet.schema.Type.asGroupType(Type.java:207)
>  at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.getElementType(VectorizedParquetRecordReader.java:479)
>  at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:532)
>  at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365){code}
>  
>  I have done a small change to handle the case where the child type of the 
> group type can be a PrimitiveType.
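
A minimal sketch of that idea against the Parquet schema API (illustrative only; not the attached patch): check whether the repeated child of the LIST group is already a primitive element before calling asGroupType().

{code:java}
// Illustrative sketch, not the attached patch: resolve the element type of a
// Parquet LIST whether the repeated child is a group (Hive writers) or a
// primitive (thrift/custom writers such as "repeated int32 list_of_ints_tuple").
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.Type;

public class ListElementTypeSketch {
  static Type getElementType(GroupType listType) {
    Type repeated = listType.getType(0);
    if (repeated.isPrimitive()) {
      return repeated;                          // thrift/custom layout: element itself
    }
    return repeated.asGroupType().getType(0);   // hive layout: repeated group bag { ... }
  }
}
{code}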



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21516) Fix spark downloading for q tests

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803735#comment-16803735
 ] 

Hive QA commented on HIVE-21516:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963990/HIVE-21516.05.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 15876 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16731/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16731/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16731/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963990 - PreCommit-HIVE-Build

> Fix spark downloading for q tests
> -
>
> Key: HIVE-21516
> URL: https://issues.apache.org/jira/browse/HIVE-21516
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21516.01.patch, HIVE-21516.02.patch, 
> HIVE-21516.03.patch, HIVE-21516.04.patch, HIVE-21516.05.patch
>
>
> Currently itests/pom.xml declares a command to generate the download script 
> for spark, so it is re-generated every time any maven command is executed 
> for any sub-project of itests. As a side effect, it leaves download.sh 
> files everywhere. The download.sh file is almost totally static; there is no 
> need to recreate it every time, it just requires $spark.version as a parameter.
> Also, it only works properly under Linux, as it relies on the md5sum 
> program, which is not present on OS X. This means that if the spark tarball is 
> partially downloaded on OS X, it will never be re-downloaded. This 
> should be fixed by making it work using md5 on OS X as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19756) Insert request with UNION ALL and lateral view explode

2019-03-28 Thread xinzhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803701#comment-16803701
 ] 

xinzhang commented on HIVE-19756:
-

Please set the config:

set hive.optimize.index.filter=false;

> Insert request with UNION ALL and lateral view explode
> --
>
> Key: HIVE-19756
> URL: https://issues.apache.org/jira/browse/HIVE-19756
> Project: Hive
>  Issue Type: Bug
> Environment: HDP 2.6.4
>Reporter: Frédéric ESCANDELL
>Priority: Major
>
> Hi,
> While executing this code snippet, no data is inserted into the final table t3.
> Replacing UNION ALL with UNION, or removing the "lateral view explode", makes 
> the code work properly.
>  
> {code:sql}
> DROP table t1;
> DROP table t2;
> DROP table t3;
> CREATE TABLE t1(cle string,valeur array<struct<v:string>>)
> ROW FORMAT SERDE  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS 
>   INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
>   OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
> INSERT INTO table t1 select * from (select "a",array(named_struct('v','x'), 
> named_struct('v','y'))) tmp;
>  CREATE TABLE t2(cle string,valeur array<struct<v:string>>)
> ROW FORMAT SERDE  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS 
>   INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
>   OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
> INSERT INTO table t2 select * from (select "b",array(named_struct('v','z'), 
> named_struct('v','w'))) tmp;
> DROP view v1;
> DROP table t3;
> CREATE VIEW v1 (cle,valeur) 
> AS
> select base.cle,val.v from (select cle,valeur from t1) as base
> lateral view explode(base.valeur) a as val
> union all
> select base1.cle,val.v from (select cle,valeur from t2) as base1
> lateral view explode(base1.valeur) a as val;
>  CREATE TABLE t3(cle string,valeur string)
> ROW FORMAT SERDE  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS 
>   INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
>   OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
> insert into t3 
> select * from v1;
> {code}
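
One way to apply the workaround suggested above is to set the property in the same session that runs the final insert. A sketch assuming a reachable HiveServer2 and the Hive JDBC driver on the classpath; the URL, user, and password are placeholders:

{code:java}
// Sketch only: apply the suggested workaround in the same JDBC session that
// performs the insert, then verify that rows actually landed in t3.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UnionAllWorkaroundSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      stmt.execute("set hive.optimize.index.filter=false"); // workaround from the comment
      stmt.execute("insert into t3 select * from v1");
      try (ResultSet rs = stmt.executeQuery("select count(*) from t3")) {
        rs.next();
        System.out.println("t3 rows: " + rs.getLong(1));
      }
    }
  }
}
{code}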



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21485) Hive desc operation takes more than 100 seconds after upgrading from Hive 1.2.1 to 2.3.4

2019-03-28 Thread Qingxin Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingxin Wu updated HIVE-21485:
--
Status: Patch Available  (was: Open)

> Hive desc operation takes more than 100 seconds after upgrading from Hive 
> 1.2.1 to 2.3.4
> 
>
> Key: HIVE-21485
> URL: https://issues.apache.org/jira/browse/HIVE-21485
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Hive
>Affects Versions: 2.3.4
>Reporter: Qingxin Wu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21485.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The Hive desc [formatted|extended] operation costs more than 100 seconds after 
> upgrading from Hive 1.2.1 to 2.3.4. This is mainly caused by showing stats 
> for partitioned tables, which was introduced by HIVE-16098, when the 
> partitioned tables have a large number of partitions. In our case, the number 
> of partitions is 187221.
> {code:java}
> hive> desc bus.kafka_data;
> OK
> id                      string
> ...
> d                       map
> stat_date               string
> log_id                  string
> # Partition Information
> # col_name              data_type               comment
> stat_date               string
> log_id                  string
> Time taken: 115.342 seconds, Fetched: 42 row(s)
> {code}
> The same operation executed in Hive 1.2.1 costs only 2 seconds.
> {code:java}
> hive> desc bus.kafka_data;
> OK
> id                      string
> ...
> d                       map
> stat_date               string
> log_id                  string
> # Partition Information
> # col_name              data_type               comment
> stat_date               string
> log_id                  string
> Time taken: 2.037 seconds, Fetched: 42 row(s)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21485) Hive desc operation takes more than 100 seconds after upgrading from Hive 1.2.1 to 2.3.4

2019-03-28 Thread Qingxin Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingxin Wu updated HIVE-21485:
--
Status: Open  (was: Patch Available)

> Hive desc operation takes more than 100 seconds after upgrading from Hive 
> 1.2.1 to 2.3.4
> 
>
> Key: HIVE-21485
> URL: https://issues.apache.org/jira/browse/HIVE-21485
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Hive
>Affects Versions: 2.3.4
>Reporter: Qingxin Wu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21485.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The Hive desc [formatted|extended] operation costs more than 100 seconds after 
> upgrading from Hive 1.2.1 to 2.3.4. This is mainly caused by showing stats 
> for partitioned tables, which was introduced by HIVE-16098, when the 
> partitioned tables have a large number of partitions. In our case, the number 
> of partitions is 187221.
> {code:java}
> hive> desc bus.kafka_data;
> OK
> id                      string
> ...
> d                       map
> stat_date               string
> log_id                  string
> # Partition Information
> # col_name              data_type               comment
> stat_date               string
> log_id                  string
> Time taken: 115.342 seconds, Fetched: 42 row(s)
> {code}
> The same operation executed in Hive 1.2.1 costs only 2 seconds.
> {code:java}
> hive> desc bus.kafka_data;
> OK
> id                      string
> ...
> d                       map
> stat_date               string
> log_id                  string
> # Partition Information
> # col_name              data_type               comment
> stat_date               string
> log_id                  string
> Time taken: 2.037 seconds, Fetched: 42 row(s)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21516) Fix spark downloading for q tests

2019-03-28 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803691#comment-16803691
 ] 

Hive QA commented on HIVE-21516:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16731/dev-support/hive-personality.sh
 |
| git revision | master / ddb0f4f |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16731/yetus/patch-asflicense-problems.txt
 |
| modules | C: itests itests/hive-unit itests/qtest-spark U: itests |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16731/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix spark downloading for q tests
> -
>
> Key: HIVE-21516
> URL: https://issues.apache.org/jira/browse/HIVE-21516
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21516.01.patch, HIVE-21516.02.patch, 
> HIVE-21516.03.patch, HIVE-21516.04.patch, HIVE-21516.05.patch
>
>
> Currently itests/pom.xml declares a command to generate the download script 
> for spark, so it is re-generated every time any maven command is executed 
> for any sub-project of itests. As a side effect, it leaves download.sh 
> files everywhere. The download.sh file is almost totally static; there is no 
> need to recreate it every time, it just requires $spark.version as a parameter.
> Also, it only works properly under Linux, as it relies on the md5sum 
> program, which is not present on OS X. This means that if the spark tarball is 
> partially downloaded on OS X, it will never be re-downloaded. This 
> should be fixed by making it work using md5 on OS X as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21290) Restore historical way of handling timestamps in Parquet while keeping the new semantics at the same time

2019-03-28 Thread Karen Coppage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803690#comment-16803690
 ] 

Karen Coppage commented on HIVE-21290:
--

Thanks, Jesus! branch-3 and 3.1 files are attached.

> Restore historical way of handling timestamps in Parquet while keeping the 
> new semantics at the same time
> -
>
> Key: HIVE-21290
> URL: https://issues.apache.org/jira/browse/HIVE-21290
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Ivanfi
>Assignee: Karen Coppage
>Priority: Major
> Fix For: 4.0.0, 3.2.0, 3.1.2
>
> Attachments: HIVE-21290.1.patch, HIVE-21290.2.patch, 
> HIVE-21290.2.patch, HIVE-21290.3.patch, HIVE-21290.4.patch, 
> HIVE-21290.4.patch, HIVE-21290.5.patch, HIVE-21290.branch-3.1.patch, 
> HIVE-21290.branch-3.patch
>
>
> This sub-task is for implementing the Parquet-specific parts of the following 
> plan:
> h1. Problem
> Historically, the semantics of the TIMESTAMP type in Hive depended on the 
> file format. Timestamps in Avro, Parquet and RCFiles with a binary SerDe had 
> _Instant_ semantics, while timestamps in ORC, textfiles and RCFiles with a 
> text SerDe had _LocalDateTime_ semantics.
> The Hive community wanted to get rid of this inconsistency and have 
> _LocalDateTime_ semantics in Avro, Parquet and RCFiles with a binary SerDe as 
> well. *Hive 3.1 turned off normalization to UTC* to achieve this. While this 
> leads to the desired new semantics, it also leads to incorrect results when 
> new Hive versions read timestamps written by old Hive versions or when old 
> Hive versions or any other component not aware of this change (including 
> legacy Impala and Spark versions) read timestamps written by new Hive 
> versions.
> h1. Solution
> To work around this issue, Hive *should restore the practice of normalizing 
> to UTC* when writing timestamps to Avro, Parquet and RCFiles with a binary 
> SerDe. In itself, this would restore the historical _Instant_ semantics, 
> which is undesirable. In order to achieve the desired _LocalDateTime_ 
> semantics in spite of normalizing to UTC, newer Hive versions should record 
> the session-local local time zone in the file metadata fields serving 
> arbitrary key-value storage purposes.
> When reading back files with this time zone metadata, newer Hive versions (or 
> any other new component aware of this extra metadata) can achieve 
> _LocalDateTime_ semantics by *converting from UTC to the saved time zone 
> (instead of to the local time zone)*. Legacy components that are unaware of 
> the new metadata can read the files without any problem and the timestamps 
> will show the historical Instant behaviour to them.
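
A minimal java.time sketch of the described read path (the zone value stands in for whatever is stored in the file's key-value metadata; this is not Hive's Parquet reader): values are stored normalized to UTC, and converting them to the writer's saved zone rather than the reader's local zone reproduces LocalDateTime semantics.

{code:java}
// Sketch of the read-side conversion described above; not Hive's implementation.
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class ParquetTimestampReadSketch {
  public static void main(String[] args) {
    // Assumed to be read from the file's key-value metadata (key name is a placeholder).
    ZoneId writerZone = ZoneId.of("America/Los_Angeles");
    // UTC-normalized timestamp as stored in the file by the writer.
    Instant storedUtc = Instant.parse("2019-03-28T19:30:00Z");

    // LocalDateTime semantics: interpret in the writer's zone, not the reader's.
    LocalDateTime asWritten = storedUtc.atZone(writerZone).toLocalDateTime();
    System.out.println(asWritten); // 2019-03-28T12:30, regardless of the reader's zone

    // Legacy readers unaware of the metadata fall back to the local zone and see
    // the historical Instant behaviour instead.
  }
}
{code}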



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

