[jira] [Commented] (HIVE-18841) Support authorization of UDF usage in hive

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423492#comment-16423492
 ] 

Hive QA commented on HIVE-18841:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 12 
unchanged - 1 fixed = 14 total (was 13) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 2 new + 554 unchanged - 2 
fixed = 556 total (was 556) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9967/dev-support/hive-personality.sh
 |
| git revision | master / ad9852c |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9967/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9967/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9967/yetus/patch-asflicense-problems.txt
 |
| modules | C: itests/hive-unit ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9967/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support authorization of UDF usage in hive
> --
>
> Key: HIVE-18841
> URL: https://issues.apache.org/jira/browse/HIVE-18841
> Project: Hive
>  Issue Type: New Feature
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Critical
> Attachments: HIVE-18841.1.patch, HIVE-18841.1.patch
>
>
> It should be possible to create authorization policies on UDF usage, 
> i.e., it should be possible to control who can use certain UDFs in their queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18999) Filter operator does not work for List

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423477#comment-16423477
 ] 

Hive QA commented on HIVE-18999:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917225/HIVE-18999.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 192 failed/errored test(s), 13297 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-18999) Filter operator does not work for List

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423436#comment-16423436
 ] 

Hive QA commented on HIVE-18999:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
52s{color} | {color:red} ql: The patch generated 4 new + 43 unchanged - 0 fixed 
= 47 total (was 43) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9966/dev-support/hive-personality.sh
 |
| git revision | master / ad9852c |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9966/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9966/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9966/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Filter operator does not work for List
> --
>
> Key: HIVE-18999
> URL: https://issues.apache.org/jira/browse/HIVE-18999
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18999.01.patch, HIVE-18999.02.patch
>
>
> {code:sql}
> create table table1(col0 int, col1 bigint, col2 string, col3 bigint, col4 
> bigint);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2015, 11);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2013, 11);
> -- INCORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct(2014,11));
> -- CORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct('2014','11'));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18747) Cleaner for TXN_TO_WRITE_ID table entries using MIN_HISTORY_LEVEL.

2018-04-02 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18747:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Cleaner for TXN_TO_WRITE_ID table entries using MIN_HISTORY_LEVEL.
> --
>
> Key: HIVE-18747
> URL: https://issues.apache.org/jira/browse/HIVE-18747
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Minor
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18747.01.patch, HIVE-18747.02.patch, 
> HIVE-18747.03.patch, HIVE-18747.04.patch, HIVE-18747.05.patch, 
> HIVE-18747.06.patch
>
>
> The per-table write ID implementation (HIVE-18192) maintains a map between 
> txn ID and table write ID in the TXN_TO_WRITE_ID meta table. 
> The entries in this table are used to generate a ValidWriteIdList for the 
> given ValidTxnList to ensure snapshot isolation. 
> When a table or database is dropped, these entries are cleaned up. But it is 
> necessary to clean them up for active tables too, for better performance.
> The TXN_TO_WRITE_ID table keeps a mapping of Transaction ID to Write ID.  The 
> state of each Write ID (open, committed, aborted) is determined by the state 
> of the parent transaction.  In order to be able to get a WriteIdList that is 
> accurate wrt the ValidTxnList that is locked in at the start of the 
> transaction, we have to retain the txnid<->writeid mapping even after the 
> transaction ends. 
> This is because a reader at Snapshot Isolation that started when transaction 
> X was open should continue to ignore the data written by X even after X 
> commits.
> So we need a mechanism to know when it is safe to remove TXN_TO_WRITE_ID 
> entries. There are 2 parts to it. When txn X is opened, it records Y=select 
> min(txn_id) from TXNS where txn_state=’o’ in the 
> MIN_HISTORY(txnid,opentxnid) table, i.e. it adds (X, Y) to MIN_HISTORY.  On 
> commit (and abort) of X, it removes its own entry from MIN_HISTORY. In the 
> absence of aborted transactions, MIN_HISTORY gives us the smallest open 
> txnid across all active reader snapshots.  Let Z=select min(opentxnid) from 
> MIN_HISTORY. We can delete entries from TXN_TO_WRITE_ID once 
> TXN_TO_WRITE_ID.T2W_TXNID < Z, since every active reader sees txns < Z as 
> committed.
> If S is an aborted txn, we retain the metadata about it in TXNS as long as 
> any data written by S may be visible to some reader in the system, so that 
> the reader knows to skip this data.  The rules for when that is are complex, 
> but wrt TXN_TO_WRITE_ID, if A=select min(TXN_ID) from TXNS where 
> TXN_STATE=’a’, then it’s safe to delete from TXN_TO_WRITE_ID when 
> TXN_TO_WRITE_ID.T2W_TXNID < min(Z,A).  
> If no open or aborted txns exist in the system, then we need to enable 
> cleanup using the latest allocated value of the NEXT_TXN_ID table. The 
> delete condition would be TXN_TO_WRITE_ID.T2W_TXNID < 
> min(Z,A,NEXT_TXN_ID.ntxn_next).  
> Also, it is proposed to trigger cleanup on TXN_TO_WRITE_ID from the initiator 
> immediately after cleaning up aborted txn metadata from the TXNS table.
>  
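> For illustration, a minimal SQL sketch of the delete condition described 
> above, using the table and column names from this description; the actual 
> patch may express this differently:
> {code:sql}
> -- Z = min open txnid across active reader snapshots (from MIN_HISTORY)
> -- A = min aborted txnid still tracked in TXNS
> -- NEXT_TXN_ID.ntxn_next = next txnid to be allocated (fallback when no
> --                         open or aborted txns exist)
> DELETE FROM TXN_TO_WRITE_ID
> WHERE T2W_TXNID < (
>   SELECT LEAST(
>            COALESCE((SELECT MIN(opentxnid) FROM MIN_HISTORY), ntxn_next),
>            COALESCE((SELECT MIN(TXN_ID) FROM TXNS WHERE TXN_STATE = 'a'),
>                     ntxn_next),
>            ntxn_next)
>   FROM NEXT_TXN_ID
> );
> {code}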



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18747) Cleaner for TXN_TO_WRITE_ID table entries using MIN_HISTORY_LEVEL.

2018-04-02 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423422#comment-16423422
 ] 

Sankar Hariappan commented on HIVE-18747:
-

Thanks for the review [~ekoifman]!

Patch committed to master.

> Cleaner for TXN_TO_WRITE_ID table entries using MIN_HISTORY_LEVEL.
> --
>
> Key: HIVE-18747
> URL: https://issues.apache.org/jira/browse/HIVE-18747
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Minor
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18747.01.patch, HIVE-18747.02.patch, 
> HIVE-18747.03.patch, HIVE-18747.04.patch, HIVE-18747.05.patch, 
> HIVE-18747.06.patch
>
>
> The per-table write ID implementation (HIVE-18192) maintains a map between 
> txn ID and table write ID in the TXN_TO_WRITE_ID meta table. 
> The entries in this table are used to generate a ValidWriteIdList for the 
> given ValidTxnList to ensure snapshot isolation. 
> When a table or database is dropped, these entries are cleaned up. But it is 
> necessary to clean them up for active tables too, for better performance.
> The TXN_TO_WRITE_ID table keeps a mapping of Transaction ID to Write ID.  The 
> state of each Write ID (open, committed, aborted) is determined by the state 
> of the parent transaction.  In order to be able to get a WriteIdList that is 
> accurate wrt the ValidTxnList that is locked in at the start of the 
> transaction, we have to retain the txnid<->writeid mapping even after the 
> transaction ends. 
> This is because a reader at Snapshot Isolation that started when transaction 
> X was open should continue to ignore the data written by X even after X 
> commits.
> So we need a mechanism to know when it is safe to remove TXN_TO_WRITE_ID 
> entries. There are 2 parts to it. When txn X is opened, it records Y=select 
> min(txn_id) from TXNS where txn_state=’o’ in the 
> MIN_HISTORY(txnid,opentxnid) table, i.e. it adds (X, Y) to MIN_HISTORY.  On 
> commit (and abort) of X, it removes its own entry from MIN_HISTORY. In the 
> absence of aborted transactions, MIN_HISTORY gives us the smallest open 
> txnid across all active reader snapshots.  Let Z=select min(opentxnid) from 
> MIN_HISTORY. We can delete entries from TXN_TO_WRITE_ID once 
> TXN_TO_WRITE_ID.T2W_TXNID < Z, since every active reader sees txns < Z as 
> committed.
> If S is an aborted txn, we retain the metadata about it in TXNS as long as 
> any data written by S may be visible to some reader in the system, so that 
> the reader knows to skip this data.  The rules for when that is are complex, 
> but wrt TXN_TO_WRITE_ID, if A=select min(TXN_ID) from TXNS where 
> TXN_STATE=’a’, then it’s safe to delete from TXN_TO_WRITE_ID when 
> TXN_TO_WRITE_ID.T2W_TXNID < min(Z,A).  
> If no open or aborted txns exist in the system, then we need to enable 
> cleanup using the latest allocated value of the NEXT_TXN_ID table. The 
> delete condition would be TXN_TO_WRITE_ID.T2W_TXNID < 
> min(Z,A,NEXT_TXN_ID.ntxn_next).  
> Also, it is proposed to trigger cleanup on TXN_TO_WRITE_ID from the initiator 
> immediately after cleaning up aborted txn metadata from the TXNS table.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18976) Add ability to setup Druid Kafka Ingestion from Hive

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423417#comment-16423417
 ] 

Hive QA commented on HIVE-18976:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917226/HIVE-18976.03.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 149 failed/errored test(s), 13568 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=252)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=252)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Updated] (HIVE-19089) Create/Replicate Allocate write-id event

2018-04-02 Thread mahesh kumar behera (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-19089:
---
Status: Patch Available  (was: Open)

> Create/Replicate Allocate write-id event
> 
>
> Key: HIVE-19089
> URL: https://issues.apache.org/jira/browse/HIVE-19089
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-19089.01.patch, HIVE-19089.02.patch
>
>
> *EVENT_ALLOCATE_WRITE_ID*
> *Source Warehouse:*
>  * Create a new event type EVENT_ALLOCATE_WRITE_ID with the related message 
> format etc.
>  * Capture this event when an ACID operation allocates a table write ID from 
> the sequence table.
>  * Repl dump should read this event from EventNotificationTable and dump the 
> message.
> *Target Warehouse:*
>  * Repl load should read the event from the dump and get the message.
>  * Validate whether the source txn ID from the event is present in the 
> source-target txn ID map. If it is not, treat the event as a no-op.
>  * If valid, then allocate the table write ID from the sequence table.
> *Extend the listener notify-event API with two new parameters, dbconn and 
> sqlgenerator, to add the events to the notification_log table within the same 
> transaction.* 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19089) Create/Replicate Allocate write-id event

2018-04-02 Thread mahesh kumar behera (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-19089:
---
Attachment: HIVE-19089.02.patch

> Create/Replicate Allocate write-id event
> 
>
> Key: HIVE-19089
> URL: https://issues.apache.org/jira/browse/HIVE-19089
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, replication
> Fix For: 3.0.0
>
> Attachments: HIVE-19089.01.patch, HIVE-19089.02.patch
>
>
> *EVENT_ALLOCATE_WRITE_ID*
> *Source Warehouse:*
>  * Create a new event type EVENT_ALLOCATE_WRITE_ID with the related message 
> format etc.
>  * Capture this event when an ACID operation allocates a table write ID from 
> the sequence table.
>  * Repl dump should read this event from EventNotificationTable and dump the 
> message.
> *Target Warehouse:*
>  * Repl load should read the event from the dump and get the message.
>  * Validate whether the source txn ID from the event is present in the 
> source-target txn ID map. If it is not, treat the event as a no-op.
>  * If valid, then allocate the table write ID from the sequence table.
> *Extend the listener notify-event API with two new parameters, dbconn and 
> sqlgenerator, to add the events to the notification_log table within the same 
> transaction.* 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18976) Add ability to setup Druid Kafka Ingestion from Hive

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423404#comment-16423404
 ] 

Hive QA commented on HIVE-18976:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
59s{color} | {color:red} root: The patch generated 113 new + 676 unchanged - 45 
fixed = 789 total (was 721) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} druid-handler: The patch generated 103 new + 123 
unchanged - 43 fixed = 226 total (was 166) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} itests/qtest: The patch generated 3 new + 0 unchanged 
- 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} itests/qtest-druid: The patch generated 3 new + 4 
unchanged - 1 fixed = 7 total (was 5) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} itests/util: The patch generated 4 new + 119 unchanged 
- 1 fixed = 123 total (was 120) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9965/dev-support/hive-personality.sh
 |
| git revision | master / 6751225 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9965/yetus/diff-checkstyle-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9965/yetus/diff-checkstyle-druid-handler.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9965/yetus/diff-checkstyle-itests_qtest.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9965/yetus/diff-checkstyle-itests_qtest-druid.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9965/yetus/diff-checkstyle-itests_util.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9965/yetus/whitespace-eol.txt 
|
| asflicense | 

[jira] [Updated] (HIVE-19092) Some improvements in bin shell scripts

2018-04-02 Thread Saijin Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang updated HIVE-19092:

Status: Patch Available  (was: Open)

> Some improvements in bin shell scripts
> --
>
> Key: HIVE-19092
> URL: https://issues.apache.org/jira/browse/HIVE-19092
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Minor
> Attachments: HIVE-19092.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19092) Some improvements in bin shell scripts

2018-04-02 Thread Saijin Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang updated HIVE-19092:

Attachment: HIVE-19092.1.patch

> Some improvements in bin shell scripts
> --
>
> Key: HIVE-19092
> URL: https://issues.apache.org/jira/browse/HIVE-19092
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Minor
> Attachments: HIVE-19092.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19092) Some improvements in bin shell scripts

2018-04-02 Thread Saijin Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang reassigned HIVE-19092:
---


> Some improvements in bin shell scripts
> --
>
> Key: HIVE-19092
> URL: https://issues.apache.org/jira/browse/HIVE-19092
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423357#comment-16423357
 ] 

Hive QA commented on HIVE-18910:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917229/HIVE-18910.17.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 305 failed/errored test(s), 13693 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=96)


[jira] [Updated] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2018-04-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19064:
---
Attachment: (was: HIVE-19064.02.patch)

> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch
>
>
> As per the SQL standard, delimited identifiers are enclosed within double 
> quotation marks; Hive currently uses `` (backticks). The default will 
> continue to be backticks, but we will support identifiers within double 
> quotation marks via a configuration parameter.
> This issue will also extend support for arbitrary char sequences, e.g., 
> containing {{~ ! @ # $ % ^ & * () , < >}}, in database and table names. 
> Currently, special characters are only supported for column names.
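> For illustration, a sketch of how the two quoting styles would look once such 
> a mode is enabled; the configuration property name and value below are 
> placeholders, not necessarily what the patch introduces:
> {code:sql}
> -- Today: backtick-delimited identifiers, special characters only in columns.
> CREATE TABLE t1 (`col~1` INT);
> SELECT `col~1` FROM t1;
>
> -- Proposed mode: SQL-standard double-quoted identifiers, also usable for
> -- database and table names (property name/value are illustrative only).
> SET hive.support.quoted.identifiers=standard;
> CREATE TABLE "my table!" ("col~1" INT);
> SELECT "col~1" FROM "my table!";
> {code}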



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2018-04-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19064:
---
Attachment: HIVE-19064.02.patch

> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch
>
>
> As per the SQL standard, delimited identifiers are enclosed within double 
> quotation marks; Hive currently uses `` (backticks). The default will 
> continue to be backticks, but we will support identifiers within double 
> quotation marks via a configuration parameter.
> This issue will also extend support for arbitrary char sequences, e.g., 
> containing {{~ ! @ # $ % ^ & * () , < >}}, in database and table names. 
> Currently, special characters are only supported for column names.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2018-04-02 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423347#comment-16423347
 ] 

Jesus Camacho Rodriguez commented on HIVE-19064:


[~ashutoshc], could you take a look?
https://reviews.apache.org/r/66397/

Thanks

> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch
>
>
> As per the SQL standard, delimited identifiers are enclosed within double 
> quotation marks; Hive currently uses `` (backticks). The default will 
> continue to be backticks, but we will support identifiers within double 
> quotation marks via a configuration parameter.
> This issue will also extend support for arbitrary char sequences, e.g., 
> containing {{~ ! @ # $ % ^ & * () , < >}}, in database and table names. 
> Currently, special characters are only supported for column names.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19033) Provide an option to purge LLAP IO cache

2018-04-02 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423346#comment-16423346
 ] 

Prasanth Jayachandran commented on HIVE-19033:
--

Fixes test failure.

> Provide an option to purge LLAP IO cache
> 
>
> Key: HIVE-19033
> URL: https://issues.apache.org/jira/browse/HIVE-19033
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19033.1.patch, HIVE-19033.2.patch, 
> HIVE-19033.3.patch, HIVE-19033.4.patch, HIVE-19033.5.patch, 
> HIVE-19033.6.patch, HIVE-19033.7.patch
>
>
> Provide an API endpoint that will trigger purging of the LLAP IO cache, and a 
> CLI tool to invoke the endpoint on all LLAP daemons. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19033) Provide an option to purge LLAP IO cache

2018-04-02 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-19033:
-
Attachment: HIVE-19033.7.patch

> Provide an option to purge LLAP IO cache
> 
>
> Key: HIVE-19033
> URL: https://issues.apache.org/jira/browse/HIVE-19033
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19033.1.patch, HIVE-19033.2.patch, 
> HIVE-19033.3.patch, HIVE-19033.4.patch, HIVE-19033.5.patch, 
> HIVE-19033.6.patch, HIVE-19033.7.patch
>
>
> Provide an API endpoint that will trigger purging of the LLAP IO cache, and a 
> CLI tool to invoke the endpoint on all LLAP daemons. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-02 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18910:
--
Attachment: HIVE-18910.19.patch

> Migrate to Murmur hash for shuffle and bucketing
> 
>
> Key: HIVE-18910
> URL: https://issues.apache.org/jira/browse/HIVE-18910
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, 
> HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.13.patch, 
> HIVE-18910.14.patch, HIVE-18910.15.patch, HIVE-18910.16.patch, 
> HIVE-18910.17.patch, HIVE-18910.18.patch, HIVE-18910.19.patch, 
> HIVE-18910.2.patch, HIVE-18910.3.patch, HIVE-18910.4.patch, 
> HIVE-18910.5.patch, HIVE-18910.6.patch, HIVE-18910.7.patch, 
> HIVE-18910.8.patch, HIVE-18910.9.patch
>
>
> Hive uses the Java hash, which does not give as good a distribution or 
> efficiency as Murmur hash when bucketing a table.
> Migrate to Murmur hash, but keep backward compatibility for existing users so 
> that they don't have to reload their existing tables.
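> For illustration, a minimal sketch of why the hash function matters for 
> bucketing; the DDL is ordinary Hive syntax and the table is hypothetical, and 
> the backward-compatibility switch itself is not shown here:
> {code:sql}
> -- Each row is routed to a bucket file as hash(user_id) % 8, so the choice
> -- of hash function decides the file layout. A better-distributing hash
> -- (Murmur) spreads keys more evenly; existing tables keep their original
> -- hash so they do not need to be reloaded.
> CREATE TABLE page_views (user_id BIGINT, url STRING)
> CLUSTERED BY (user_id) INTO 8 BUCKETS
> STORED AS ORC;
>
> INSERT INTO page_views VALUES (101, 'a.html'), (102, 'b.html');
> {code}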



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-02 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18910:
--
Attachment: HIVE-18910.18.patch

> Migrate to Murmur hash for shuffle and bucketing
> 
>
> Key: HIVE-18910
> URL: https://issues.apache.org/jira/browse/HIVE-18910
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, 
> HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.13.patch, 
> HIVE-18910.14.patch, HIVE-18910.15.patch, HIVE-18910.16.patch, 
> HIVE-18910.17.patch, HIVE-18910.18.patch, HIVE-18910.2.patch, 
> HIVE-18910.3.patch, HIVE-18910.4.patch, HIVE-18910.5.patch, 
> HIVE-18910.6.patch, HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch
>
>
> Hive uses the Java hash, which does not give as good a distribution or 
> efficiency as Murmur hash when bucketing a table.
> Migrate to Murmur hash, but keep backward compatibility for existing users so 
> that they don't have to reload their existing tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423326#comment-16423326
 ] 

Hive QA commented on HIVE-18910:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
41s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
59s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} storage-api: The patch generated 3 new + 97 unchanged 
- 3 fixed = 100 total (was 100) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
21s{color} | {color:red} serde: The patch generated 150 new + 214 unchanged - 3 
fixed = 364 total (was 217) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/streaming: The patch generated 1 new + 33 
unchanged - 0 fixed = 34 total (was 33) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
59s{color} | {color:red} ql: The patch generated 26 new + 1267 unchanged - 3 
fixed = 1293 total (was 1270) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 124 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 50 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9964/dev-support/hive-personality.sh
 |
| git revision | master / b849a16 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9964/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9964/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9964/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9964/yetus/diff-checkstyle-serde.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9964/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9964/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9964/yetus/whitespace-tabs.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9964/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api serde hbase-handler hcatalog/streaming 
itests/hive-blobstore ql standalone-metastore U: . |
| Console output | 

[jira] [Commented] (HIVE-19091) [Hive 3.0.0 Release] Rat check failure fixes

2018-04-02 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423324#comment-16423324
 ] 

Vineet Garg commented on HIVE-19091:


[~alangates] [~ashutoshc] Can you take a look?

> [Hive 3.0.0 Release] Rat check failure fixes
> 
>
> Key: HIVE-19091
> URL: https://issues.apache.org/jira/browse/HIVE-19091
> Project: Hive
>  Issue Type: Task
>  Components: Standalone Metastore
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19091.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19091) [Hive 3.0.0 Release] Rat check failure fixes

2018-04-02 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19091:
---
Status: Patch Available  (was: Open)

> [Hive 3.0.0 Release] Rat check failure fixes
> 
>
> Key: HIVE-19091
> URL: https://issues.apache.org/jira/browse/HIVE-19091
> Project: Hive
>  Issue Type: Task
>  Components: Standalone Metastore
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19091.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19091) [Hive 3.0.0 Release] Rat check failure fixes

2018-04-02 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19091:
---
Attachment: HIVE-19091.1.patch

> [Hive 3.0.0 Release] Rat check failure fixes
> 
>
> Key: HIVE-19091
> URL: https://issues.apache.org/jira/browse/HIVE-19091
> Project: Hive
>  Issue Type: Task
>  Components: Standalone Metastore
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19091.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19091) [Hive 3.0.0 Release] Rat check failure fixes

2018-04-02 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-19091:
--


> [Hive 3.0.0 Release] Rat check failure fixes
> 
>
> Key: HIVE-19091
> URL: https://issues.apache.org/jira/browse/HIVE-19091
> Project: Hive
>  Issue Type: Task
>  Components: Standalone Metastore
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19090) Running concatenate on ORC tables either increases or decreases the number of files depending on the order of files being picked

2018-04-02 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-19090:


Assignee: Prasanth Jayachandran

> Running concatenate on ORC tables either increases or decreases the number of 
> files depending on the order of files being picked
> ---
>
> Key: HIVE-19090
> URL: https://issues.apache.org/jira/browse/HIVE-19090
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Reporter: Naresh P R
>Assignee: Prasanth Jayachandran
>Priority: Major
>
> I ran concatenate 2 times without changing any config.
> For the 1st run, 14 files merged into 8 files.
> For the 2nd run, 8 files expanded into 10 files.
> From the logs I could see that the input files are the same, whereas the 
> output splits from CombineHiveInputFormat vary depending upon which file is 
> picked first.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19072) incorrect token handling for LLAP plugin endpoint

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19072:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> incorrect token handling for LLAP plugin endpoint
> -
>
> Key: HIVE-19072
> URL: https://issues.apache.org/jira/browse/HIVE-19072
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19072.01.patch, HIVE-19072.patch
>
>
> {noformat}
> java.lang.IllegalArgumentException: Null user
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2207) 
> ~[guava-19.0.jar:?]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3953) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4790) 
> ~[guava-19.0.jar:?]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.getProxy(AsyncPbRpcProxy.java:425)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.access$000(LlapPluginEndpointClientImpl.java:45)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:116)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:93)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>  [guava-19.0.jar:?]
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>  [guava-19.0.jar:?]
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>  [guava-19.0.jar:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_161]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_161]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-02 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423301#comment-16423301
 ] 

Jason Dere commented on HIVE-19014:
---

Ok, in that case then +1 from me, just clean up the trailing whitespace.

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19058) add object owner to HivePrivilegeObject

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423300#comment-16423300
 ] 

Hive QA commented on HIVE-19058:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917224/HIVE-19058.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9963/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9963/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9963/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-02 23:55:02.676
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-9963/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-02 23:55:02.679
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   2d770d8..b849a16  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 2d770d8 HIVE-19073: StatsOptimizer may mangle constant columns 
(Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at b849a16 HIVE-19071 : WM: backup resource plans cannot be used 
without quoted idenitifiers (Sergey Shelukhin, reviewed by Prasanth 
Jayachandran)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-02 23:55:06.445
+ rm -rf ../yetus_PreCommit-HIVE-Build-9963
+ mkdir ../yetus_PreCommit-HIVE-Build-9963
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-9963
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9963/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc8276770404744856378.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc8276770404744856378.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 39 classes.
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveLexer.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveLexer.g
Output file 

[jira] [Commented] (HIVE-19072) incorrect token handling for LLAP plugin endpoint

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423297#comment-16423297
 ] 

Hive QA commented on HIVE-19072:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917221/HIVE-19072.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 191 failed/errored test(s), 13296 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-02 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423295#comment-16423295
 ] 

Thejas M Nair commented on HIVE-19014:
--

bq. any comment on the SessionState.getUserFromAuthenticator() vs 
SessionState.getUserName() 
That needs some cleanup. As of now, Hive code uses both in different places. 
The main difference is that SessionState.getUserFromAuthenticator() 
lets you configure something like SessionStateConfigUserAuthenticator as the 
authenticator, which can be used to switch usernames within a .q file.
When the cleanup is done, we should be able to standardize on one of the two 
API calls.
Usage in this patch is fine.
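
For reference, a minimal sketch of the two lookups being compared (illustrative 
only, not part of the patch; assumes a session started via SessionState.start):

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.session.SessionState;

public class UserLookupSketch {
  public static void main(String[] args) {
    SessionState ss = SessionState.start(new HiveConf());
    // Resolved through the configured authenticator, so a pluggable one
    // (e.g. SessionStateConfigUserAuthenticator) can switch the user.
    String fromAuthenticator = SessionState.getUserFromAuthenticator();
    // Plain user name recorded on the session itself.
    String fromSession = ss.getUserName();
    System.out.println(fromAuthenticator + " vs " + fromSession);
  }
}
{code}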


> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2018-04-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19064:
---
Attachment: HIVE-19064.02.patch

> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch
>
>
> As per the SQL standard, identifiers should be delimitable with double 
> quotation marks. Hive currently uses `` (backticks). The default will 
> remain backticks, but we will support identifiers within double quotation 
> marks via a configuration parameter.
> This issue will also extend support for arbitrary char sequences, e.g., 
> containing {{~ ! @ # $ % ^ & * () , < >}}, in database and table names. 
> Currently, special characters are only supported for column names.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18839) Implement incremental rebuild for materialized views (only insert operations in source tables)

2018-04-02 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423289#comment-16423289
 ] 

Jesus Camacho Rodriguez commented on HIVE-18839:


[~ashutoshc], could you take a look at this one?
https://reviews.apache.org/r/66369/

Thanks

> Implement incremental rebuild for materialized views (only insert operations 
> in source tables)
> --
>
> Key: HIVE-18839
> URL: https://issues.apache.org/jira/browse/HIVE-18839
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: TODOC3.0
> Attachments: HIVE-18839.patch
>
>
> The implementation will follow the current code path for full rebuild. 
> When the MV query plan is retrieved, if the MV contents are outdated because 
> there were insert operations in the source tables, we will introduce a filter 
> with a condition based on the stored ValidWriteIdList. For instance, 
> {{WRITE_ID < high_txn_id AND WRITE_ID NOT IN (x, y, ...)}}. Then the 
> rewriting will do the rest of the work by creating a partial rewriting, where 
> the contents of the MV are read as well as the new contents from the source 
> tables.
> This mechanism will work not only for ALTER MV... REBUILD, but also for user 
> queries, which will be able to benefit from using outdated MVs to compute part 
> of the needed results.
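
To illustrate the shape of that condition, a small sketch that renders the filter 
from a high watermark and the invalid write ids (illustrative only; the actual 
rewrite works on the query plan, and the WRITE_ID name is taken from the 
description above):

{code:java}
import java.util.StringJoiner;

public class MvIncrementalFilterSketch {
  /** Renders e.g. "WRITE_ID < 17 AND WRITE_ID NOT IN (11, 14)". */
  static String incrementalFilter(long highWatermark, long[] invalidWriteIds) {
    StringBuilder cond = new StringBuilder("WRITE_ID < ").append(highWatermark);
    if (invalidWriteIds.length > 0) {
      StringJoiner notIn = new StringJoiner(", ", " AND WRITE_ID NOT IN (", ")");
      for (long id : invalidWriteIds) {
        notIn.add(Long.toString(id));
      }
      cond.append(notIn);
    }
    return cond.toString();
  }

  public static void main(String[] args) {
    System.out.println(incrementalFilter(17L, new long[] {11L, 14L}));
  }
}
{code}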



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root

2018-04-02 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19054:
--
Attachment: HIVE-19054.2.patch

> Function replication shall use "hive.repl.replica.functions.root.dir" as root
> -
>
> Key: HIVE-19054
> URL: https://issues.apache.org/jira/browse/HIVE-19054
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19054.1.patch, HIVE-19054.2.patch
>
>
> It wrongly uses fs.defaultFS as the root and ignores the 
> "hive.repl.replica.functions.root.dir" definition, thus preventing replication 
> to a cloud destination.
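
A minimal sketch of the intended resolution (not the actual patch): qualify the 
destination against the configured root's FileSystem instead of fs.defaultFS.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplFunctionRootSketch {
  /** Resolves where a replicated function binary should be copied to. */
  static Path qualifiedDestination(Configuration conf, String jarName) throws Exception {
    // The root comes from the config key named in this issue, not from fs.defaultFS.
    Path root = new Path(conf.get("hive.repl.replica.functions.root.dir"));
    FileSystem fs = root.getFileSystem(conf); // may be s3a://, wasb://, etc.
    return fs.makeQualified(new Path(root, jarName));
  }
}
{code}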



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19083) Make partition clause optional for INSERT

2018-04-02 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423283#comment-16423283
 ] 

Ashutosh Chauhan commented on HIVE-19083:
-

+1 pending tests.

> Make partition clause optional for INSERT
> -
>
> Key: HIVE-19083
> URL: https://issues.apache.org/jira/browse/HIVE-19083
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19083.1.patch, HIVE-19083.2.patch, 
> HIVE-19083.3.patch
>
>
> Partition clause should be optional for
>  * INSERT INTO VALUES
>  * INSERT OVERWRITE
>  * INSERT SELECT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-17970:
---

Assignee: Sergey Shelukhin

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data and then write new data 
> into the table.  This logic makes sense for non-acid tables, but for Acid/MM 
> it should work like the INSERT OVERWRITE statement and write new data to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers, or let it run with SemiShared and let readers continue, 
> making the system more concurrent.
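
A sketch of the target-directory choice described above (directory naming 
simplified; the real code would go through AcidUtils and include statement ids):

{code:java}
import org.apache.hadoop.fs.Path;

public class LoadOverwriteTargetSketch {
  /**
   * For Acid/MM tables an overwrite should land in a fresh base_n directory,
   * mirroring INSERT OVERWRITE, instead of wiping the table/partition root.
   */
  static Path overwriteTarget(Path tableOrPartitionRoot, boolean isAcidOrMm, long writeId) {
    if (isAcidOrMm) {
      return new Path(tableOrPartitionRoot, "base_" + writeId); // e.g. base_42
    }
    // Non-transactional tables keep the delete-then-write behaviour.
    return tableOrPartitionRoot;
  }
}
{code}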



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19084) Test case in Hive Query Language fails with a java.lang.AssertionError.

2018-04-02 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423276#comment-16423276
 ] 

Steve Yeom commented on HIVE-19084:
---

If the test relies on a sorted order of the list returned by "listStatus(..)", 
then it is not correct: the listStatus API spec clearly states that the list of 
files returned by "listStatus()" is not guaranteed to be in any sorted order, 
for example natural order. Checking a possible solution...
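
A minimal sketch of one such solution, sorting the statuses before the test 
asserts on positions (illustrative, not the attached patch):

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.Comparator;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SortedListStatusSketch {
  /** listStatus() makes no ordering promise, so sort before indexing into the result. */
  static FileStatus[] listStatusSorted(FileSystem fs, Path dir) throws IOException {
    FileStatus[] statuses = fs.listStatus(dir);
    Arrays.sort(statuses, Comparator.comparing((FileStatus s) -> s.getPath().toString()));
    return statuses;
  }
}
{code}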

> Test case in Hive Query Language fails with a java.lang.AssertionError.
> ---
>
> Key: HIVE-19084
> URL: https://issues.apache.org/jira/browse/HIVE-19084
> Project: Hive
>  Issue Type: Bug
>  Components: Test, Transactions
> Environment: uname -a
> Linux pts00607-vm3 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:46 UTC 
> 2018 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Alisha Prabhu
>Priority: Major
> Attachments: HIVE-19084.1.patch
>
>
> The test case testInsertOverwriteForPartitionedMmTable in 
> TestTxnCommandsForMmTable.java and TestTxnCommandsForOrcMmTable.java fails 
> with a java.lang.AssertionError.
> Maven command used is mvn 
> -Dtest=TestTxnCommandsForMmTable#testInsertOverwriteForPartitionedMmTable test
> The test case fails because the listStatus function of the FileSystem does not 
> guarantee that the list of file/directory statuses is returned in sorted order.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root

2018-04-02 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reopened HIVE-19054:
---

The qualifiedDestinationPath change is still needed.

> Function replication shall use "hive.repl.replica.functions.root.dir" as root
> -
>
> Key: HIVE-19054
> URL: https://issues.apache.org/jira/browse/HIVE-19054
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19054.1.patch
>
>
> It wrongly uses fs.defaultFS as the root and ignores the 
> "hive.repl.replica.functions.root.dir" definition, thus preventing replication 
> to a cloud destination.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-02 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423268#comment-16423268
 ] 

Jason Dere commented on HIVE-19014:
---

The changes look good to me. [~thejas] any comment on the 
SessionState.getUserFromAuthenticator() vs SessionState.getUserName() usage 
questions raised by Sergey?

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17661) DBTxnManager.acquireLocks() - MM tables should use shared lock for Insert

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17661:

Attachment: HIVE-17661.01.patch

> DBTxnManager.acquireLocks() - MM tables should use shared lock for Insert
> -
>
> Key: HIVE-17661
> URL: https://issues.apache.org/jira/browse/HIVE-17661
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17661.01.patch, HIVE-17661.patch
>
>
> {noformat}
> case INSERT:
>   assert t != null;
>   if(AcidUtils.isFullAcidTable(t)) {
> compBuilder.setShared();
>   }
>   else {
> if 
> (conf.getBoolVar(HiveConf.ConfVars.HIVE_TXN_STRICT_LOCKING_MODE)) {
> {noformat}
> _if(AcidUtils.isFullAcidTable(t)) {_ 
> should probably be 
> _if(AcidUtils.isAcidTable(t)) {_
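
A minimal sketch of the one-line direction the description suggests, reusing the 
names from the snippet above (not taken from the attached patches):

{code:java}
import org.apache.hadoop.hive.ql.io.AcidUtils;
import org.apache.hadoop.hive.ql.metadata.Table;

public class InsertLockSketch {
  /** INSERT into any transactional table, including insert-only (MM), takes a shared lock. */
  static boolean useSharedLockForInsert(Table t) {
    return AcidUtils.isAcidTable(t); // was: AcidUtils.isFullAcidTable(t)
  }
}
{code}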



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17661) DBTxnManager.acquireLocks() - MM tables should use shared lock for Insert

2018-04-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423266#comment-16423266
 ] 

Sergey Shelukhin commented on HIVE-17661:
-

Fixing some negative tests. Not sure if they are broken after this change or 
were broken for a while, given the state of QA...

> DBTxnManager.acquireLocks() - MM tables should use shared lock for Insert
> -
>
> Key: HIVE-17661
> URL: https://issues.apache.org/jira/browse/HIVE-17661
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17661.01.patch, HIVE-17661.patch
>
>
> {noformat}
> case INSERT:
>   assert t != null;
>   if(AcidUtils.isFullAcidTable(t)) {
> compBuilder.setShared();
>   }
>   else {
> if 
> (conf.getBoolVar(HiveConf.ConfVars.HIVE_TXN_STRICT_LOCKING_MODE)) {
> {noformat}
> _if(AcidUtils.isFullAcidTable(t)) {_ 
> should probably be 
> _if(AcidUtils.isAcidTable(t)) {_



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18883) Add findbugs to yetus pre-commit checks

2018-04-02 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423243#comment-16423243
 ] 

Sahil Takiar commented on HIVE-18883:
-

[~pvary] any comments on this?

> Add findbugs to yetus pre-commit checks
> ---
>
> Key: HIVE-18883
> URL: https://issues.apache.org/jira/browse/HIVE-18883
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18883.1.patch, HIVE-18883.2.patch
>
>
> We should enable FindBugs for our YETUS pre-commit checks, this will help 
> overall code quality and should decrease the overall number of bugs in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423203#comment-16423203
 ] 

Sergey Shelukhin edited comment on HIVE-19014 at 4/2/18 10:46 PM:
--

Changes to account for YARN-8091.
Tested both patches in a cluster w/custom jars


was (Author: sershe):
Changes to account for YARN-8091

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18525) Add explain plan to Hive on Spark Web UI

2018-04-02 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423239#comment-16423239
 ] 

Sahil Takiar commented on HIVE-18525:
-

[~xuefuz], [~aihuaxu] could you take a look?

> Add explain plan to Hive on Spark Web UI
> 
>
> Key: HIVE-18525
> URL: https://issues.apache.org/jira/browse/HIVE-18525
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18525.1.patch, HIVE-18525.2.patch, 
> HIVE-18525.3.patch, HIVE-18525.4.patch, Job-Page-Collapsed.png, 
> Job-Page-Expanded.png, Map-Explain-Plan.png, Reduce-Explain-Plan.png
>
>
> More of an investigation JIRA. The Spark UI has a "long description" of each 
> stage in the Spark DAG. Typically one stage in the Spark DAG corresponds to 
> either a {{MapWork}} or {{ReduceWork}} object. It would be useful if the long 
> description contained the explain plan of the corresponding work object.
> I'm not sure how much additional overhead this would introduce. If not the 
> full explain plan, then maybe a modified one that just lists out the 
> operator tree along with each operator name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18525) Add explain plan to Hive on Spark Web UI

2018-04-02 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423233#comment-16423233
 ] 

Sahil Takiar commented on HIVE-18525:
-

Attached updated patch with a few changes:
* Made the feature configurable, but turned it on by default
* Truncate each stage of the explain plan at 100,000 characters

I made these changes to address any possible issues where the explain plans are 
so big that they overwhelm the Spark Web UI. This could happen if a user 
submits a very, very long query.
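
A sketch of the truncation step (class and method names are illustrative; the cap 
matches the 100,000 characters mentioned above):

{code:java}
public class ExplainPlanTruncationSketch {
  private static final int MAX_DESCRIPTION_LENGTH = 100_000;

  /** Trims a per-stage explain plan so a huge query cannot overwhelm the Spark Web UI. */
  static String truncateForWebUi(String explainPlan) {
    if (explainPlan == null || explainPlan.length() <= MAX_DESCRIPTION_LENGTH) {
      return explainPlan;
    }
    return explainPlan.substring(0, MAX_DESCRIPTION_LENGTH) + " ... (truncated)";
  }
}
{code}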

> Add explain plan to Hive on Spark Web UI
> 
>
> Key: HIVE-18525
> URL: https://issues.apache.org/jira/browse/HIVE-18525
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18525.1.patch, HIVE-18525.2.patch, 
> HIVE-18525.3.patch, HIVE-18525.4.patch, Job-Page-Collapsed.png, 
> Job-Page-Expanded.png, Map-Explain-Plan.png, Reduce-Explain-Plan.png
>
>
> More of an investigation JIRA. The Spark UI has a "long description" of each 
> stage in the Spark DAG. Typically one stage in the Spark DAG corresponds to 
> either a {{MapWork}} or {{ReduceWork}} object. It would be useful if the long 
> description contained the explain plan of the corresponding work object.
> I'm not sure how much additional overhead this would introduce. If not the 
> full explain plan, then maybe a modified one that just lists out the 
> operator tree along with each operator name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19071) WM: backup resource plans cannot be used without quoted identifiers

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19071:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. cc [~dileep529]

> WM: backup resource plans cannot be used without quoted identifiers
> 
>
> Key: HIVE-19071
> URL: https://issues.apache.org/jira/browse/HIVE-19071
> Project: Hive
>  Issue Type: Bug
>Reporter: Dileep Kumar Chiguruvada
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19071.01.patch, HIVE-19071.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19083) Make partition clause optional for INSERT

2018-04-02 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19083:
---
Status: Patch Available  (was: Open)

> Make partition clause optional for INSERT
> -
>
> Key: HIVE-19083
> URL: https://issues.apache.org/jira/browse/HIVE-19083
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19083.1.patch, HIVE-19083.2.patch, 
> HIVE-19083.3.patch
>
>
> Partition clause should be optional for
>  * INSERT INTO VALUES
>  * INSERT OVERWRITE
>  * INSERT SELECT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19072) incorrect token handling for LLAP plugin endpoint

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423232#comment-16423232
 ] 

Hive QA commented on HIVE-19072:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} llap-common: The patch generated 1 new + 193 unchanged 
- 0 fixed = 194 total (was 193) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9962/dev-support/hive-personality.sh
 |
| git revision | master / 2d770d8 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9962/yetus/diff-checkstyle-llap-common.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9962/yetus/patch-asflicense-problems.txt
 |
| modules | C: llap-common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9962/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> incorrect token handling for LLAP plugin endpoint
> -
>
> Key: HIVE-19072
> URL: https://issues.apache.org/jira/browse/HIVE-19072
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19072.01.patch, HIVE-19072.patch
>
>
> {noformat}
> java.lang.IllegalArgumentException: Null user
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2207) 
> ~[guava-19.0.jar:?]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3953) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4790) 
> ~[guava-19.0.jar:?]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.getProxy(AsyncPbRpcProxy.java:425)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.access$000(LlapPluginEndpointClientImpl.java:45)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> 

[jira] [Updated] (HIVE-19083) Make partition clause optional for INSERT

2018-04-02 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19083:
---
Status: Open  (was: Patch Available)

> Make partition clause optional for INSERT
> -
>
> Key: HIVE-19083
> URL: https://issues.apache.org/jira/browse/HIVE-19083
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19083.1.patch, HIVE-19083.2.patch, 
> HIVE-19083.3.patch
>
>
> Partition clause should be optional for
>  * INSERT INTO VALUES
>  * INSERT OVERWRITE
>  * INSERT SELECT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19083) Make partition clause optional for INSERT

2018-04-02 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19083:
---
Attachment: HIVE-19083.3.patch

> Make partition clause optional for INSERT
> -
>
> Key: HIVE-19083
> URL: https://issues.apache.org/jira/browse/HIVE-19083
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19083.1.patch, HIVE-19083.2.patch, 
> HIVE-19083.3.patch
>
>
> Partition clause should be optional for
>  * INSERT INTO VALUES
>  * INSERT OVERWRITE
>  * INSERT SELECT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19083) Make partition clause optional for INSERT

2018-04-02 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423231#comment-16423231
 ] 

Vineet Garg commented on HIVE-19083:


Thanks [~ashutoshc]. I have updated the patch.

> Make partition clause optional for INSERT
> -
>
> Key: HIVE-19083
> URL: https://issues.apache.org/jira/browse/HIVE-19083
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19083.1.patch, HIVE-19083.2.patch
>
>
> Partition clause should be optional for
>  * INSERT INTO VALUES
>  * INSERT OVERWRITE
>  * INSERT SELECT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18525) Add explain plan to Hive on Spark Web UI

2018-04-02 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18525:

Attachment: HIVE-18525.4.patch

> Add explain plan to Hive on Spark Web UI
> 
>
> Key: HIVE-18525
> URL: https://issues.apache.org/jira/browse/HIVE-18525
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18525.1.patch, HIVE-18525.2.patch, 
> HIVE-18525.3.patch, HIVE-18525.4.patch, Job-Page-Collapsed.png, 
> Job-Page-Expanded.png, Map-Explain-Plan.png, Reduce-Explain-Plan.png
>
>
> More of an investigation JIRA. The Spark UI has a "long description" of each 
> stage in the Spark DAG. Typically one stage in the Spark DAG corresponds to 
> either a {{MapWork}} or {{ReduceWork}} object. It would be useful if the long 
> description contained the explain plan of the corresponding work object.
> I'm not sure how much additional overhead this would introduce. If not the 
> full explain plan, then maybe a modified one that just lists out the 
> operator tree along with each operator name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16282) Semijoin: Disable slow-start for the bloom filter aggregate task

2018-04-02 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-16282:
--
Fix Version/s: (was: 2.3.0)
   3.0.0

> Semijoin: Disable slow-start for the bloom filter aggregate task
> 
>
> Key: HIVE-16282
> URL: https://issues.apache.org/jira/browse/HIVE-16282
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Deepak Jaiswal
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-16282.1.patch, HIVE-16282.2.patch, 
> HIVE-16282.3.patch, HIVE-16282.4.patch, HIVE-16282.5.patch, extended plan.rtf
>
>
> The slow-start of the bloom filter vertex is a scheduling problem which 
> causes more pre-emption than is useful.
> When the bloom filters are arranged as follows
> Map 1(10 tasks)->Reducer 2(1 task)->Map 3(100 tasks)
> Map 3 and Map 1 are immediately active since Reducer 2 -> Map 3 is a 
> broadcast edge.
> Once 3 tasks in Map 1 finish, the engine kills one active task from Map 3 to 
> make room for Reducer 2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19071) WM: backup resource plans cannot be used without quoted identifiers

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423207#comment-16423207
 ] 

Hive QA commented on HIVE-19071:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917222/HIVE-19071.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 237 failed/errored test(s), 13683 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423203#comment-16423203
 ] 

Sergey Shelukhin commented on HIVE-19014:
-

Changes to account for YARN-8091

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16282) Semijoin: Disable slow-start for the bloom filter aggregate task

2018-04-02 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423202#comment-16423202
 ] 

Vineet Garg commented on HIVE-16282:


[~djaiswal] [~jdere] If this was committed to master after the branch-2 split, 
can you change the fix version to 3.0.0?

> Semijoin: Disable slow-start for the bloom filter aggregate task
> 
>
> Key: HIVE-16282
> URL: https://issues.apache.org/jira/browse/HIVE-16282
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Deepak Jaiswal
>Priority: Major
> Fix For: 2.3.0
>
> Attachments: HIVE-16282.1.patch, HIVE-16282.2.patch, 
> HIVE-16282.3.patch, HIVE-16282.4.patch, HIVE-16282.5.patch, extended plan.rtf
>
>
> The slow-start of the bloom filter vertex is a scheduling problem which 
> causes more pre-emption than is useful.
> When the bloom filters are arranged as follows
> Map 1(10 tasks)->Reducer 2(1 task)->Map 3(100 tasks)
> Map 3 and Map 1 are immediately active since Reducer 2 -> Map 3 is a 
> broadcast edge.
> Once 3 tasks in Map 1 finish, the engine kills one active task from Map 3 to 
> make room for Reducer 2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: HIVE-19014.04.patch

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18651) Expose additional Spark metrics

2018-04-02 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423187#comment-16423187
 ] 

Vihang Karajgaonkar commented on HIVE-18651:


+1 LGTM. Thanks for making the changes.

> Expose additional Spark metrics
> ---
>
> Key: HIVE-18651
> URL: https://issues.apache.org/jira/browse/HIVE-18651
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18651.1.patch, HIVE-18651.2.patch, 
> HIVE-18651.3.patch
>
>
> There are multiple additional metrics that get collected via Spark 
> (such as executor CPU time spent). We should expose them in HoS.
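
As an illustration of where such a metric comes from (not how the patch wires it 
into HoS), a Spark listener that sums executor CPU time across finished tasks:

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerTaskEnd;

/** Sums executor CPU time (nanoseconds) reported by Spark for finished tasks. */
public class CpuTimeListenerSketch extends SparkListener {
  private final AtomicLong totalCpuNanos = new AtomicLong();

  @Override
  public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
    if (taskEnd.taskMetrics() != null) {
      totalCpuNanos.addAndGet(taskEnd.taskMetrics().executorCpuTime());
    }
  }

  public long getTotalCpuNanos() {
    return totalCpuNanos.get();
  }
}
{code}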



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18651) Expose additional Spark metrics

2018-04-02 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423182#comment-16423182
 ] 

Sahil Takiar commented on HIVE-18651:
-

Thanks for taking a look [~vihangk1]. Addressed your comments, updated the RB 
and attached the updated patch.

> Expose additional Spark metrics
> ---
>
> Key: HIVE-18651
> URL: https://issues.apache.org/jira/browse/HIVE-18651
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18651.1.patch, HIVE-18651.2.patch, 
> HIVE-18651.3.patch
>
>
> There are multiple additional metrics that get collected via Spark 
> (such as executor CPU time spent). We should expose them in HoS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18651) Expose additional Spark metrics

2018-04-02 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18651:

Attachment: HIVE-18651.3.patch

> Expose additional Spark metrics
> ---
>
> Key: HIVE-18651
> URL: https://issues.apache.org/jira/browse/HIVE-18651
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18651.1.patch, HIVE-18651.2.patch, 
> HIVE-18651.3.patch
>
>
> There are multiple additional metrics that get collected via Spark 
> (such as executor CPU time spent). We should expose them in HoS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17824) msck repair table should drop the missing partitions from metastore

2018-04-02 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17824:
---
Attachment: HIVE-17824.1.patch

> msck repair table should drop the missing partitions from metastore
> ---
>
> Key: HIVE-17824
> URL: https://issues.apache.org/jira/browse/HIVE-17824
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17824.1.patch
>
>
> {{msck repair table }} is often used in environments where the new 
> partitions are loaded as directories on HDFS or S3 and users want to create 
> the missing partitions in bulk. However, currently it only supports addition 
> of missing partitions. If there are any partitions which are present in 
> metastore but not on the FileSystem, it should also delete them so that it 
> truly repairs the table metadata.
> We should be careful not to break backwards compatibility so we should either 
> introduce a new config or keyword to add support to delete unnecessary 
> partitions from the metastore. This way users who want the old behavior can 
> easily turn it off. 
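
A minimal sketch of the set difference involved, with the drop gated behind a 
flag so the current add-only behaviour stays the default (names are illustrative, 
not from the patch):

{code:java}
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

public class MsckDropSketch {
  /** Partitions known to the metastore but missing on the FileSystem are drop candidates. */
  static Set<String> partitionsToDrop(Set<String> inMetastore, Set<String> onFileSystem,
      boolean dropEnabled) {
    if (!dropEnabled) {
      return Collections.emptySet();
    }
    Set<String> toDrop = new LinkedHashSet<>(inMetastore);
    toDrop.removeAll(onFileSystem);
    return toDrop;
  }
}
{code}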



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17824) msck repair table should drop the missing partitions from metastore

2018-04-02 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17824:
---
Status: Patch Available  (was: Open)

> msck repair table should drop the missing partitions from metastore
> ---
>
> Key: HIVE-17824
> URL: https://issues.apache.org/jira/browse/HIVE-17824
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17824.1.patch
>
>
> {{msck repair table }} is often used in environments where the new 
> partitions are loaded as directories on HDFS or S3 and users want to create 
> the missing partitions in bulk. However, currently it only supports addition 
> of missing partitions. If there are any partitions which are present in 
> metastore but not on the FileSystem, it should also delete them so that it 
> truly repairs the table metadata.
> We should be careful not to break backwards compatibility so we should either 
> introduce a new config or keyword to add support to delete unnecessary 
> partitions from the metastore. This way users who want the old behavior can 
> easily turn it off. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17647:

Description: 
This method (and other places) have 
{noformat}
  if (txnManager.isTxnOpen()) {
mmWriteId = txnManager.getCurrentTxnId();
  } else {
mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
txnManager.commitTxn();
  }
{noformat}
this should throw if there is no open transaction.  It should never open one.

In general the logic seems suspect.  Looks like the intent is to move all 
existing files into a delta_x_x/ when a plain table is converted to MM table.  
This seems like something that needs to be done from under an Exclusive lock to 
prevent concurrent Insert operations writing data under table/partition root.  
But this is too late to acquire locks which should be done from the 
Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
here would break all-or-nothing lock acquisition semantics currently required 
w/o deadlock detector)

  was:
This method has 
{noformat}
  if (txnManager.isTxnOpen()) {
mmWriteId = txnManager.getCurrentTxnId();
  } else {
mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
txnManager.commitTxn();
  }
{noformat}
this should throw if there is no open transaction.  It should never open one.

In general the logic seems suspect.  Looks like the intent is to move all 
existing files into a delta_x_x/ when a plain table is converted to MM table.  
This seems like something that needs to be done from under an Exclusive lock to 
prevent concurrent Insert operations writing data under table/partition root.  
But this is too late to acquire locks which should be done from the 
Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
here would break all-or-nothing lock acquisition semantics currently required 
w/o deadlock detector)


> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Priority: Major
>  Labels: mm-gap-2
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-17647:
---

Assignee: Sergey Shelukhin

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17647:

Summary: DDLTask.generateAddMmTasks(Table tbl) and other random code should 
not start transactions  (was: DDLTask.generateAddMmTasks(Table tbl) should not 
start transactions)

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Priority: Major
>  Labels: mm-gap-2
>
> This method has 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-16850) Converting table to insert-only acid may open a txn in an inappropriate place

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-16850.
-
Resolution: Duplicate

> Converting table to insert-only acid may open a txn in an inappropriate place
> -
>
> Key: HIVE-16850
> URL: https://issues.apache.org/jira/browse/HIVE-16850
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Wei Zheng
>Assignee: Eugene Koifman
>Priority: Major
>  Labels: mm-gap-2
>
> This would work for unit-testing, but would need to be fixed for production.
> {noformat}
> HiveTxnManager txnManager = SessionState.get().getTxnMgr();
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17645) MM tables patch conflicts with HIVE-17482 (Spark/Acid integration)

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-17645:
---

Assignee: Eugene Koifman

> MM tables patch conflicts with HIVE-17482 (Spark/Acid integration)
> --
>
> Key: HIVE-17645
> URL: https://issues.apache.org/jira/browse/HIVE-17645
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>  Labels: mm-gap-2
>
> MM code introduces 
> {noformat}
> HiveTxnManager txnManager = SessionState.get().getTxnMgr()
> {noformat}
> in a number of places (e.g _DDLTask.generateAddMmTasks(Table tbl)_).  
> HIVE-17482 adds a mode where a TransactionManager not associated with the 
> session should be used.  This will need to be addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) should not start transactions

2018-04-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423164#comment-16423164
 ] 

Sergey Shelukhin commented on HIVE-17647:
-

We will double-check the code to open txns when necessary, and replace the else 
with a throw (in case we miss something).
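
A sketch of that direction, reusing the method names from the snippet in the 
description (not the eventual patch):

{code:java}
import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;

public class MmWriteIdSketch {
  /** Fail fast instead of opening (and immediately committing) a throw-away transaction. */
  static long currentMmWriteId(HiveTxnManager txnManager) {
    if (txnManager.isTxnOpen()) {
      return txnManager.getCurrentTxnId();
    }
    throw new IllegalStateException(
        "No open transaction: it should have been opened by the Driver, not by DDLTask");
  }
}
{code}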

> DDLTask.generateAddMmTasks(Table tbl) should not start transactions
> ---
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Priority: Major
>  Labels: mm-gap-2
>
> This method has 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17855) conversion to MM tables via alter may be broken

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17855:

Description: 
{noformat}
git difftool 77511070dd^ 77511070dd -- */mm_conversions.q
{noformat}
Looks like during ACID "integration" alter was simply quietly changed to 
create+insert, because it's broken.


  was:
{noformat}
git difftool 77511070dd^ 77511070dd -- */mm_conversions.q
{noformat}
Looks like during ACID "integration" alter was simply quietly changed to 
create+insert, because it's broken.
I asked to keep feature parity with every change but I should have rather 
insisted on it and -1d all the patches that didn't... This is just annoying. 


> conversion to MM tables via alter may be broken
> ---
>
> Key: HIVE-17855
> URL: https://issues.apache.org/jira/browse/HIVE-17855
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
>
> {noformat}
> git difftool 77511070dd^ 77511070dd -- */mm_conversions.q
> {noformat}
> Looks like during ACID "integration" alter was simply quietly changed to 
> create+insert, because it's broken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-17859) MM tables - Tez merge may not run

2018-04-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-17859.
-
Resolution: Not A Problem

> MM tables - Tez merge may not run
> -
>
> Key: HIVE-17859
> URL: https://issues.apache.org/jira/browse/HIVE-17859
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
>
> If mm_all test is executed on MiniLlap, all the merge test cases pass but 
> with changed order of rows; however, one can see in stats_mm, etc cases that 
> while the stats are correct, the number of files changes. Seems like merge 
> doesn't work in this case where it worked before. It's not pertinent to stats 
> cases but it should be examined. Perhaps stats output for the # of files 
> should be added to merge cases to make sure merge actually merges.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19071) WM: backup resource plans cannot be used without quoted identifiers

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423145#comment-16423145
 ] 

Hive QA commented on HIVE-19071:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9961/dev-support/hive-personality.sh
 |
| git revision | master / 3660ac2 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9961/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9961/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> WM: backup resource plans cannot be used without quoted identifiers
> 
>
> Key: HIVE-19071
> URL: https://issues.apache.org/jira/browse/HIVE-19071
> Project: Hive
>  Issue Type: Bug
>Reporter: Dileep Kumar Chiguruvada
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19071.01.patch, HIVE-19071.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19073) StatsOptimizer may mangle constant columns

2018-04-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19073:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks [~ashutoshc]!

> StatsOptimizer may mangle constant columns
> --
>
> Key: HIVE-19073
> URL: https://issues.apache.org/jira/browse/HIVE-19073
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 1.2.2
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19073.patch
>
>
> Following query is giving wrong result:
> {code:sql}
> SELECT DATE_SUB(CURRENT_DATE,0) as GROUP_BY_FIELD, count (*)  as src_cnt from 
> mytable WHERE 1=1 group by DATE_SUB(CURRENT_DATE,0);
> +-+--+--+
> | group_by_field  | src_cnt  |
> +-+--+--+
> | 239 | NULL |
> +-+--+--+
> 1 row selected (5.175 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423098#comment-16423098
 ] 

Hive QA commented on HIVE-19054:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12916280/HIVE-19054.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 191 failed/errored test(s), 13297 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-18515) Custom Hive on Spark Tab in Spark Web UI

2018-04-02 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423091#comment-16423091
 ] 

Sahil Takiar commented on HIVE-18515:
-

Example: 
https://github.com/sanjosh/scala/blob/master/spark_extensions/webui/src/main/scala/org/apache/spark/SparkUIExtender.scala
 
(https://www.slideshare.net/SandeepJoshi55/apache-spark-undocumented-extensions-78929290)

> Custom Hive on Spark Tab in Spark Web UI
> 
>
> Key: HIVE-18515
> URL: https://issues.apache.org/jira/browse/HIVE-18515
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Priority: Major
>
> This is more of an investigation JIRA. It would be nice if Hive-on-Spark had 
> its own dedicated tab in the Spark Web UI, similar to what Spark SQL has. It 
> may be doable if we follow the same model that Spark SQL does: creating a 
> custom class that extends {{SparkUITab}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18840) CachedStore: Prioritize loading of recently accessed tables during prewarm

2018-04-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-18840:

Attachment: HIVE-18840.2.patch

> CachedStore: Prioritize loading of recently accessed tables during prewarm
> --
>
> Key: HIVE-18840
> URL: https://issues.apache.org/jira/browse/HIVE-18840
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-18840.1.patch, HIVE-18840.2.patch, 
> HIVE-18840.2.patch
>
>
> On clusters with large metadata, prewarming the cache can take several hours. 
> Now that CachedStore does not block on prewarm anymore (after HIVE-18264), we 
> should prioritize loading of recently accessed tables during prewarm.
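
A minimal sketch of the ordering idea only, assuming a hypothetical table-name /
last-access-time pair; the real CachedStore bookkeeping and prewarm loop differ.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PrewarmOrderSketch {
  // Hypothetical holder for illustration only.
  static class TableAccess {
    final String tableName;
    final long lastAccessTime; // epoch millis of the most recent access
    TableAccess(String tableName, long lastAccessTime) {
      this.tableName = tableName;
      this.lastAccessTime = lastAccessTime;
    }
  }

  public static void main(String[] args) {
    List<TableAccess> tables = new ArrayList<>();
    tables.add(new TableAccess("db1.sales", 1_522_600_000_000L));
    tables.add(new TableAccess("db1.dim_date", 1_522_000_000_000L));
    // Prewarm the most recently accessed tables first.
    tables.sort(Comparator.comparingLong((TableAccess t) -> t.lastAccessTime).reversed());
    tables.forEach(t -> System.out.println("prewarm " + t.tableName));
  }
}
{code}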



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18831) Differentiate errors that are thrown by Spark tasks

2018-04-02 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18831:

Attachment: (was: HIVE-18831.9.patch)

> Differentiate errors that are thrown by Spark tasks
> ---
>
> Key: HIVE-18831
> URL: https://issues.apache.org/jira/browse/HIVE-18831
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18831.1.patch, HIVE-18831.2.patch, 
> HIVE-18831.3.patch, HIVE-18831.4.patch, HIVE-18831.6.patch, 
> HIVE-18831.7.patch, HIVE-18831.8.WIP.patch, HIVE-18831.9.patch
>
>
> We propagate exceptions from Spark task failures to the client well, but we 
> don't differentiate between errors from HS2 / RSC vs. errors thrown by 
> individual tasks.
> The main motivation is that when the client sees a propagated Spark exception, 
> it's difficult to know what part of the execution threw the exception.
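
A minimal sketch of the general idea (wrapping task-side failures in a distinct
exception type so callers can tell them apart from HS2/RSC errors); all class and
method names here are hypothetical and not the ones used in the patch.

{code:java}
public class SparkErrorSketch {
  static class SparkTaskFailedException extends RuntimeException {
    SparkTaskFailedException(String msg, Throwable taskError) {
      super(msg, taskError);
    }
  }

  static void submit(Runnable job) {
    try {
      job.run();
    } catch (RuntimeException e) {
      // Treat anything thrown by the job body as a task-side failure;
      // HS2/RSC-side errors would be rethrown unwrapped.
      throw new SparkTaskFailedException("Spark task failed", e);
    }
  }

  public static void main(String[] args) {
    try {
      submit(() -> { throw new IllegalStateException("divide by zero in a task"); });
    } catch (SparkTaskFailedException e) {
      System.out.println("task-side error: " + e.getCause().getMessage());
    }
  }
}
{code}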



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18831) Differentiate errors that are thrown by Spark tasks

2018-04-02 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18831:

Attachment: HIVE-18831.9.patch

> Differentiate errors that are thrown by Spark tasks
> ---
>
> Key: HIVE-18831
> URL: https://issues.apache.org/jira/browse/HIVE-18831
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18831.1.patch, HIVE-18831.2.patch, 
> HIVE-18831.3.patch, HIVE-18831.4.patch, HIVE-18831.6.patch, 
> HIVE-18831.7.patch, HIVE-18831.8.WIP.patch, HIVE-18831.9.patch
>
>
> We propagate exceptions from Spark task failures to the client well, but we 
> don't differentiate between errors from HS2 / RSC vs. errors thrown by 
> individual tasks.
> The main motivation is that when the client sees a propagated Spark exception, 
> it's difficult to know what part of the execution threw the exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18831) Differentiate errors that are thrown by Spark tasks

2018-04-02 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423058#comment-16423058
 ] 

Sahil Takiar commented on HIVE-18831:
-

Sounds good [~lirui]. Attached an updated patch and updated the RB.

> Differentiate errors that are thrown by Spark tasks
> ---
>
> Key: HIVE-18831
> URL: https://issues.apache.org/jira/browse/HIVE-18831
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18831.1.patch, HIVE-18831.2.patch, 
> HIVE-18831.3.patch, HIVE-18831.4.patch, HIVE-18831.6.patch, 
> HIVE-18831.7.patch, HIVE-18831.8.WIP.patch, HIVE-18831.9.patch
>
>
> We propagate exceptions from Spark task failures to the client well, but we 
> don't differentiate between errors from HS2 / RSC vs. errors thrown by 
> individual tasks.
> The main motivation is that when the client sees a propagated Spark exception, 
> it's difficult to know what part of the execution threw the exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18831) Differentiate errors that are thrown by Spark tasks

2018-04-02 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18831:

Attachment: HIVE-18831.9.patch

> Differentiate errors that are thrown by Spark tasks
> ---
>
> Key: HIVE-18831
> URL: https://issues.apache.org/jira/browse/HIVE-18831
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18831.1.patch, HIVE-18831.2.patch, 
> HIVE-18831.3.patch, HIVE-18831.4.patch, HIVE-18831.6.patch, 
> HIVE-18831.7.patch, HIVE-18831.8.WIP.patch, HIVE-18831.9.patch
>
>
> We propagate exceptions from Spark task failures to the client well, but we 
> don't differentiate between errors from HS2 / RSC vs. errors thrown by 
> individual tasks.
> The main motivation is that when the client sees a propagated Spark exception, 
> it's difficult to know what part of the execution threw the exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19086) Write notifications in bulk only when commitTransaction actually commits

2018-04-02 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-19086:
--

Assignee: Vihang Karajgaonkar

> Write notifications in bulk only when commitTransaction actually commits
> 
>
> Key: HIVE-19086
> URL: https://issues.apache.org/jira/browse/HIVE-19086
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Alexander Kolbasov
>Assignee: Vihang Karajgaonkar
>Priority: Major
>
> This is an optimization targeted at reducing the amount of time the 
> global DB lock is held for notifications.
> The idea is to collect all notifications and only push them when 
> commitTransaction() actually commits.
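
A minimal sketch of the buffering idea, with invented names (this is not the
ObjectStore or notification-listener API): events are collected in memory and
written in one batch only when the outermost transaction commits.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class BufferedNotificationsSketch {
  private final List<String> pending = new ArrayList<>(); // hypothetical event payloads

  // Called by metastore operations while the transaction is open.
  void addNotification(String event) {
    pending.add(event); // buffer only; nothing is written yet
  }

  // Called only when commitTransaction() actually commits.
  void onCommit() {
    // Write all buffered events in a single batch while the DB lock is held,
    // instead of one write per event.
    System.out.println("flushing " + pending.size() + " notifications in bulk");
    pending.clear();
  }

  void onRollback() {
    pending.clear(); // discard events for a transaction that never committed
  }

  public static void main(String[] args) {
    BufferedNotificationsSketch s = new BufferedNotificationsSketch();
    s.addNotification("CREATE_TABLE db1.t1");
    s.addNotification("ALTER_TABLE db1.t1");
    s.onCommit();
  }
}
{code}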



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18755) Modifications to the metastore for catalogs

2018-04-02 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423031#comment-16423031
 ] 

Alan Gates commented on HIVE-18755:
---

Alright, I'll file a ticket and get this fixed.  Thanks for catching this.

> Modifications to the metastore for catalogs
> ---
>
> Key: HIVE-18755
> URL: https://issues.apache.org/jira/browse/HIVE-18755
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18755.2.patch, HIVE-18755.3.patch, 
> HIVE-18755.4.patch, HIVE-18755.final.patch, HIVE-18755.nothrift, 
> HIVE-18755.patch
>
>
> Step 1 of adding catalogs is to add support in the metastore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root

2018-04-02 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19054:
--
   Resolution: Duplicate
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

This is actually already fixed by HIVE-19007. cc [~sankarh].

> Function replication shall use "hive.repl.replica.functions.root.dir" as root
> -
>
> Key: HIVE-19054
> URL: https://issues.apache.org/jira/browse/HIVE-19054
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19054.1.patch
>
>
> It wrongly uses fs.defaultFS as the root and ignores the 
> "hive.repl.replica.functions.root.dir" definition, thus preventing replication 
> to a cloud destination.
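
A minimal sketch of setting the property on a plain Hadoop Configuration; the
s3a path is a placeholder, and the real replication code reads the value from the
Hive configuration.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ReplFunctionRootSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Point function replication at an explicit (e.g. cloud) root instead of
    // letting it fall back to fs.defaultFS.
    conf.set("hive.repl.replica.functions.root.dir", "s3a://my-bucket/hive/repl/functions");
    System.out.println(conf.get("hive.repl.replica.functions.root.dir"));
  }
}
{code}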



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19074) Vectorization: Add llap vectorization_div0.q.out Q output file

2018-04-02 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-19074:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Vectorization: Add llap vectorization_div0.q.out Q output file
> --
>
> Key: HIVE-19074
> URL: https://issues.apache.org/jira/browse/HIVE-19074
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-19074.02.patch, HIVE-19074.03.patch
>
>
> At some point llap/vectorization_div0.q.out got omitted.
> The Q file output is unstable because of missing ORDER BY columns. You must 
> have ORDER BY on all (or at least the critical) columns when there is a LIMIT 
> clause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19074) Vectorization: Add llap vectorization_div0.q.out Q output file

2018-04-02 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-19074:

Fix Version/s: 3.0.0

> Vectorization: Add llap vectorization_div0.q.out Q output file
> --
>
> Key: HIVE-19074
> URL: https://issues.apache.org/jira/browse/HIVE-19074
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-19074.02.patch, HIVE-19074.03.patch
>
>
> At some point llap/vectorization_div0.q.out got omitted.
> The Q file output is unstable because of missing ORDER BY columns. You must 
> have ORDER BY on all (or at least the critical) columns when there is a LIMIT 
> clause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19074) Vectorization: Add llap vectorization_div0.q.out Q output file

2018-04-02 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423001#comment-16423001
 ] 

Matt McCline commented on HIVE-19074:
-

Committed to master.  [~teddy.choi] thank you for your code review.

> Vectorization: Add llap vectorization_div0.q.out Q output file
> --
>
> Key: HIVE-19074
> URL: https://issues.apache.org/jira/browse/HIVE-19074
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-19074.02.patch, HIVE-19074.03.patch
>
>
> At some point llap/vectorization_div0.q.out got omitted.
> The Q file output is unstable because of missing ORDER BY columns. You must 
> have ORDER BY on all (or at least the critical) columns when there is a LIMIT 
> clause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18841) Support authorization of UDF usage in hive

2018-04-02 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422998#comment-16422998
 ] 

Thejas M Nair commented on HIVE-18841:
--

Attaching again for test run.


> Support authorization of UDF usage in hive
> --
>
> Key: HIVE-18841
> URL: https://issues.apache.org/jira/browse/HIVE-18841
> Project: Hive
>  Issue Type: New Feature
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Critical
> Attachments: HIVE-18841.1.patch, HIVE-18841.1.patch
>
>
> It should be possible to create authorization policies on UDF usage; 
> i.e., it should be possible to control who can use certain UDFs in their queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18841) Support authorization of UDF usage in hive

2018-04-02 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-18841:
-
Attachment: HIVE-18841.1.patch

> Support authorization of UDF usage in hive
> --
>
> Key: HIVE-18841
> URL: https://issues.apache.org/jira/browse/HIVE-18841
> Project: Hive
>  Issue Type: New Feature
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Critical
> Attachments: HIVE-18841.1.patch, HIVE-18841.1.patch
>
>
> It should be possible to create authorization policies on UDF usage; 
> i.e., it should be possible to control who can use certain UDFs in their queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root

2018-04-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422997#comment-16422997
 ] 

Hive QA commented on HIVE-19054:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 12s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-9960/patches/PreCommit-HIVE-Build-9960.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9960/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Function replication shall use "hive.repl.replica.functions.root.dir" as root
> -
>
> Key: HIVE-19054
> URL: https://issues.apache.org/jira/browse/HIVE-19054
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19054.1.patch
>
>
> It wrongly uses fs.defaultFS as the root and ignores the 
> "hive.repl.replica.functions.root.dir" definition, thus preventing replication 
> to a cloud destination.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19074) Vectorization: Add llap vectorization_div0.q.out Q output file

2018-04-02 Thread Teddy Choi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422991#comment-16422991
 ] 

Teddy Choi commented on HIVE-19074:
---

+1 LGTM.

Sometimes the result difference bugs me a lot, thanks for resolving it!

> Vectorization: Add llap vectorization_div0.q.out Q output file
> --
>
> Key: HIVE-19074
> URL: https://issues.apache.org/jira/browse/HIVE-19074
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19074.02.patch, HIVE-19074.03.patch
>
>
> At some point llap/vectorization_div0.q.out got omitted.
> The Q file output is unstable because of missing ORDER BY columns. You must 
> have ORDER BY on all (or at least the critical) columns when there is a LIMIT 
> clause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19084) Test case in Hive Query Language fails with a java.lang.AssertionError.

2018-04-02 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422963#comment-16422963
 ] 

Eugene Koifman commented on HIVE-19084:
---

FYI, [~steveyeom2017]

> Test case in Hive Query Language fails with a java.lang.AssertionError.
> ---
>
> Key: HIVE-19084
> URL: https://issues.apache.org/jira/browse/HIVE-19084
> Project: Hive
>  Issue Type: Bug
>  Components: Test, Transactions
> Environment: uname -a
> Linux pts00607-vm3 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:46 UTC 
> 2018 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Alisha Prabhu
>Priority: Major
> Attachments: HIVE-19084.1.patch
>
>
> The test case testInsertOverwriteForPartitionedMmTable in 
> TestTxnCommandsForMmTable.java and TestTxnCommandsForOrcMmTable.java fails 
> with a java.lang.AssertionError.
> Maven command used is mvn 
> -Dtest=TestTxnCommandsForMmTable#testInsertOverwriteForPartitionedMmTable test
> The test case fails because the listStatus function of the FileSystem does not 
> guarantee that the list of file/directory statuses is returned in sorted order.
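
A minimal sketch of the usual fix for this class of test failure: sort the
listStatus results before making positional assertions. The "/tmp" path is just a
placeholder for whatever directory the test inspects.

{code:java}
import java.util.Arrays;
import java.util.Comparator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SortedListStatusSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    // listStatus makes no ordering guarantee, so sort before comparing.
    FileStatus[] statuses = fs.listStatus(new Path("/tmp"));
    Arrays.sort(statuses, Comparator.comparing((FileStatus s) -> s.getPath().toString()));
    for (FileStatus st : statuses) {
      System.out.println(st.getPath());
    }
  }
}
{code}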



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19084) Test case in Hive Query Language fails with a java.lang.AssertionError.

2018-04-02 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19084:
--
Component/s: (was: Hive)
 Transactions
 Test

> Test case in Hive Query Language fails with a java.lang.AssertionError.
> ---
>
> Key: HIVE-19084
> URL: https://issues.apache.org/jira/browse/HIVE-19084
> Project: Hive
>  Issue Type: Bug
>  Components: Test, Transactions
> Environment: uname -a
> Linux pts00607-vm3 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:46 UTC 
> 2018 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Alisha Prabhu
>Priority: Major
> Attachments: HIVE-19084.1.patch
>
>
> The test case testInsertOverwriteForPartitionedMmTable in 
> TestTxnCommandsForMmTable.java and TestTxnCommandsForOrcMmTable.java fails 
> with a java.lang.AssertionError.
> Maven command used is mvn 
> -Dtest=TestTxnCommandsForMmTable#testInsertOverwriteForPartitionedMmTable test
> The test case fails because the listStatus function of the FileSystem does not 
> guarantee that the list of file/directory statuses is returned in sorted order.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-02 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18910:
--
Attachment: HIVE-18910.17.patch

> Migrate to Murmur hash for shuffle and bucketing
> 
>
> Key: HIVE-18910
> URL: https://issues.apache.org/jira/browse/HIVE-18910
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, 
> HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.13.patch, 
> HIVE-18910.14.patch, HIVE-18910.15.patch, HIVE-18910.16.patch, 
> HIVE-18910.17.patch, HIVE-18910.2.patch, HIVE-18910.3.patch, 
> HIVE-18910.4.patch, HIVE-18910.5.patch, HIVE-18910.6.patch, 
> HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch
>
>
> Hive uses the Java hash, which does not distribute values as well as Murmur 
> hash when bucketing a table.
> Migrate to Murmur hash but still keep backward compatibility for existing 
> users so that they don't have to reload their existing tables.
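
A minimal sketch contrasting bucket assignment from Java's String.hashCode() with
a Murmur3 hash. Guava's Murmur3 is used here purely for illustration; Hive ships
its own Murmur3 utility, and the real bucketing code path is more involved.

{code:java}
import java.nio.charset.StandardCharsets;

import com.google.common.hash.Hashing;

public class BucketHashSketch {
  public static void main(String[] args) {
    String key = "customer_42";
    int numBuckets = 32;

    // Legacy behaviour: Java hash code, masked to stay non-negative.
    int javaBucket = (key.hashCode() & Integer.MAX_VALUE) % numBuckets;

    // Murmur3 generally spreads short or skewed keys more evenly.
    int murmur = Hashing.murmur3_32().hashString(key, StandardCharsets.UTF_8).asInt();
    int murmurBucket = (murmur & Integer.MAX_VALUE) % numBuckets;

    System.out.println("java bucket=" + javaBucket + ", murmur bucket=" + murmurBucket);
  }
}
{code}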



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18976) Add ability to setup Druid Kafka Ingestion from Hive

2018-04-02 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated HIVE-18976:

Attachment: HIVE-18976.03.patch

> Add ability to setup Druid Kafka Ingestion from Hive
> 
>
> Key: HIVE-18976
> URL: https://issues.apache.org/jira/browse/HIVE-18976
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-18976.03.patch, HIVE-18976.patch
>
>
> Add the ability to set up Druid Kafka ingestion using a Hive CREATE TABLE statement.
> e.g. the query below can submit a Kafka supervisor spec to the Druid overlord so 
> that Druid can start ingesting events from Kafka. 
> {code:java}
>  
> CREATE TABLE druid_kafka_test(`__time` timestamp, page string, language 
> string, `user` string, added int, deleted int, delta int)
> STORED BY 
> 'org.apache.hadoop.hive.druid.DruidKafkaStreamingStorageHandler'
> TBLPROPERTIES (
> "druid.segment.granularity" = "HOUR",
> "druid.query.granularity" = "MINUTE",
> "kafka.bootstrap.servers" = "localhost:9092",
> "kafka.topic" = "test-topic",
> "druid.kafka.ingest.useEarliestOffset" = "true"
> );
> {code}
> Design - This can be done via a DruidKafkaStreamingStorageHandler that 
> extends the existing DruidStorageHandler and adds the additional functionality 
> for streaming. 
> Testing - Add a DruidKafkaMiniCluster, which will consist of a DruidMiniCluster 
> plus a single-node Kafka broker. The broker can be populated with a test topic 
> that has some predefined data. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18999) Filter operator does not work for List

2018-04-02 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422953#comment-16422953
 ] 

Steve Yeom commented on HIVE-18999:
---

Patch 01 was not p-tested, so the same patch is attached again as patch 02.

> Filter operator does not work for List
> --
>
> Key: HIVE-18999
> URL: https://issues.apache.org/jira/browse/HIVE-18999
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18999.01.patch, HIVE-18999.02.patch
>
>
> {code:sql}
> create table table1(col0 int, col1 bigint, col2 string, col3 bigint, col4 
> bigint);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2015, 11);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2013, 11);
> -- INCORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct(2014,11));
> -- CORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct('2014','11'));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18999) Filter operator does not work for List

2018-04-02 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-18999:
--
Attachment: HIVE-18999.02.patch

> Filter operator does not work for List
> --
>
> Key: HIVE-18999
> URL: https://issues.apache.org/jira/browse/HIVE-18999
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18999.01.patch, HIVE-18999.02.patch
>
>
> {code:sql}
> create table table1(col0 int, col1 bigint, col2 string, col3 bigint, col4 
> bigint);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2015, 11);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2013, 11);
> -- INCORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct(2014,11));
> -- CORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct('2014','11'));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19058) add object owner to HivePrivilegeObject

2018-04-02 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19058:
--
Attachment: HIVE-19058.03.patch

> add object owner to HivePrivilegeObject
> ---
>
> Key: HIVE-19058
> URL: https://issues.apache.org/jira/browse/HIVE-19058
> Project: Hive
>  Issue Type: Bug
>  Components: Security
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19058.01.patch, HIVE-19058.02.patch, 
> HIVE-19058.03.patch
>
>
> this can enable HiveAuthorizer to create policies based on the owner of the 
> object - for example, only let the owner of a table read/write it.
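
A minimal sketch of the kind of check an authorizer could make once the owner is
available on the privilege object; the class and field names below are hypothetical
stand-ins, not the HivePrivilegeObject API.

{code:java}
public class OwnerPolicySketch {
  // Hypothetical stand-in for the privilege object.
  static class PrivObject {
    final String name;
    final String ownerName;
    PrivObject(String name, String ownerName) {
      this.name = name;
      this.ownerName = ownerName;
    }
  }

  // Policy: only the owner of the object may read or write it.
  static void checkOwnerOnly(PrivObject obj, String currentUser) {
    if (!currentUser.equals(obj.ownerName)) {
      throw new SecurityException(currentUser + " is not the owner of " + obj.name);
    }
  }

  public static void main(String[] args) {
    checkOwnerOnly(new PrivObject("db1.t1", "alice"), "alice"); // allowed
    try {
      checkOwnerOnly(new PrivObject("db1.t1", "alice"), "bob"); // rejected
    } catch (SecurityException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}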



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18747) Cleaner for TXN_TO_WRITE_ID table entries using MIN_HISTORY_LEVEL.

2018-04-02 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422949#comment-16422949
 ] 

Eugene Koifman commented on HIVE-18747:
---

+1 patch 6

> Cleaner for TXN_TO_WRITE_ID table entries using MIN_HISTORY_LEVEL.
> --
>
> Key: HIVE-18747
> URL: https://issues.apache.org/jira/browse/HIVE-18747
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Minor
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18747.01.patch, HIVE-18747.02.patch, 
> HIVE-18747.03.patch, HIVE-18747.04.patch, HIVE-18747.05.patch, 
> HIVE-18747.06.patch
>
>
> Per table write ID implementation (HIVE-18192) maintains a map between txn ID 
> and table write ID in TXN_TO_WRITE_ID meta table. 
> The entries in this table are used to generate a ValidWriteIdList for the given 
> ValidTxnList to ensure snapshot isolation. 
> When a table or database is dropped, these entries are cleaned up. But it is 
> also necessary to clean up entries for active tables for better performance.
> TXN_TO_WRITE_ID table keeps a mapping of Transaction ID to Write ID.  The 
> state of each Write ID (open, committed, aborted) is determined by the state 
> of the parent transaction.  In order to be able to get a WriteIdList that is 
> accurate wrt ValidTxnList that is locked in at the start of the transaction, 
> we have to retain txnid<->writeid mapping even after the transaction ends. 
> This is because a reader at Snapshot Isolation that started when transaction 
> X was open, should continue to ignore the data written by X even after X 
> commits.
> So we need a mechanism to know when it is safe to remove TXN_TO_WRITE_ID.  
> There are 2 parts to it. When txn X is opened, it records Y=select 
> min(txn_id) from TXNS where txn_state=’o’ in MIN_HISTORY(txnid,opentxnid) 
> table, i.e. it adds (X, Y) to MIN_HISTORY.  On commit (and abort) of X, it 
> removes its own entry from MIN_HISTORY. In the absence of Aborted 
> transactions, MIN_HISTORY gives us the smallest open txnid across all active 
> reader snapshots.  Let Z=select min(opentxnid) from MIN_HISTORY. We can 
> delete entries from TXN_TO_WRITE_ID once TXN_TO_WRITE_ID.T2W_TXNID < Z since 
> every active reader sees txns < Z as committed.
> If S is aborted txns, we retain the metadata about it in TXNS as long as any 
> data written S may be visible to some reader in the system so that the reader 
> knows to skip this data.  The rules for when that is are complex but wrt to 
> TXN_TO_WRITE_ID, if A=select min(TXN_ID) from TXNS where TXN_STATE=’a’, then 
> it’s safe to delete from TXN_TO_WRITE_ID when TXN_TO_WRITE_ID.T2W_TXNID < 
> min(Z,A).  
> If no open or aborted txns exist in the system, then we need to enable 
> cleanup using latest allocated value of NEXT_TXN_ID table. Delete condition 
> would be TXN_TO_WRITE_ID.T2W_TXNID < min(Z,A,NEXT_TXN_ID.ntxn_next).  
> Also, it is proposed to trigger cleanup on TXN_TO_WRITE_ID from initiator 
> immediately after cleaning up aborted txns metadata from TXNS table.
>  
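
A minimal sketch of the safe-delete bound described above, i.e. min(Z, A,
NEXT_TXN_ID.ntxn_next); the values are placeholders, and the actual cleaner applies
this bound in SQL against the metastore tables.

{code:java}
public class TxnToWriteIdCleanerSketch {
  // Z: min open txn id across active reader snapshots (from MIN_HISTORY).
  // A: min aborted txn id still tracked in TXNS.
  // nextTxnId: NEXT_TXN_ID.ntxn_next, which caps the bound when nothing is open/aborted.
  static long safeDeleteBound(long z, long a, long nextTxnId) {
    return Math.min(nextTxnId, Math.min(z, a));
  }

  public static void main(String[] args) {
    long bound = safeDeleteBound(105L, 98L, 120L); // placeholder values
    // The cleaner would then run, in effect:
    //   DELETE FROM TXN_TO_WRITE_ID WHERE T2W_TXNID < bound
    System.out.println("safe to delete TXN_TO_WRITE_ID rows with T2W_TXNID < " + bound);
  }
}
{code}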



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18963) JDBC: Provide an option to simplify beeline usage by supporting default and named URL for beeline

2018-04-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-18963:

Fix Version/s: 3.0.0

> JDBC: Provide an option to simplify beeline usage by supporting default and 
> named URL for beeline
> -
>
> Key: HIVE-18963
> URL: https://issues.apache.org/jira/browse/HIVE-18963
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18963.1.patch, HIVE-18963.2.patch, 
> HIVE-18963.3.patch
>
>
> Currently, after opening the Beeline CLI, the user needs to supply a connection 
> string to use the HS2 instance and set up the jdbc driver. Since we plan to 
> replace Hive CLI with Beeline in the future (HIVE-10511), it will help 
> usability if the user can simply type {{beeline}} and start the hive 
> session. The jdbc url can be specified in a beeline-site.xml (which can 
> contain other named jdbc urls as well, and they can be accessed by something 
> like: {{beeline -c namedUrl}}). The use of beeline-site.xml can also be 
> potentially expanded later if needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18999) Filter operator does not work for List

2018-04-02 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-18999:
--
Status: Patch Available  (was: Open)

> Filter operator does not work for List
> --
>
> Key: HIVE-18999
> URL: https://issues.apache.org/jira/browse/HIVE-18999
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18999.01.patch
>
>
> {code:sql}
> create table table1(col0 int, col1 bigint, col2 string, col3 bigint, col4 
> bigint);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2015, 11);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2013, 11);
> -- INCORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct(2014,11));
> -- CORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct('2014','11'));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18999) Filter operator does not work for List

2018-04-02 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-18999:
--
Status: Open  (was: Patch Available)

> Filter operator does not work for List
> --
>
> Key: HIVE-18999
> URL: https://issues.apache.org/jira/browse/HIVE-18999
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18999.01.patch
>
>
> {code:sql}
> create table table1(col0 int, col1 bigint, col2 string, col3 bigint, col4 
> bigint);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2015, 11);
> insert into table1 values (1, 1, 'ccl',2014, 11);
> insert into table1 values (1, 1, 'ccl',2013, 11);
> -- INCORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct(2014,11));
> -- CORRECT
> SELECT COUNT(t1.col0) from table1 t1 where struct(col3, col4) in 
> (struct('2014','11'));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

