[jira] [Commented] (HIVE-19089) Create/Replicate Allocate write-id event

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425051#comment-16425051
 ] 

Hive QA commented on HIVE-19089:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917293/HIVE-19089.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 248 failed/errored test(s), 13702 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=96)


[jira] [Commented] (HIVE-19092) Some improvements in bin shell scripts

2018-04-03 Thread Saijin Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425018#comment-16425018
 ] 

Saijin Huang commented on HIVE-19092:
-

Update the patch. Pending test!

> Some improvements in bin shell scripts
> --
>
> Key: HIVE-19092
> URL: https://issues.apache.org/jira/browse/HIVE-19092
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Minor
> Attachments: HIVE-19092.1.patch, HIVE-19092.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19092) Some improvements in bin shell scripts

2018-04-03 Thread Saijin Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang updated HIVE-19092:

Attachment: HIVE-19092.2.patch

> Some improvements in bin shell scripts
> --
>
> Key: HIVE-19092
> URL: https://issues.apache.org/jira/browse/HIVE-19092
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Minor
> Attachments: HIVE-19092.1.patch, HIVE-19092.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19089) Create/Replicate Allocate write-id event

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425012#comment-16425012
 ] 

Hive QA commented on HIVE-19089:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} server-extensions in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hcatalog-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
38s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} hcatalog/server-extensions: The patch generated 4 new 
+ 8 unchanged - 0 fixed = 12 total (was 8) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 2 new + 212 unchanged - 0 
fixed = 214 total (was 212) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
34s{color} | {color:red} standalone-metastore: The patch generated 12 new + 
1577 unchanged - 6 fixed = 1589 total (was 1583) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 26 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 50 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9984/dev-support/hive-personality.sh
 |
| git revision | master / 04f3be0 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9984/yetus/patch-mvninstall-hcatalog_server-extensions.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9984/yetus/patch-mvninstall-itests_hcatalog-unit.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9984/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9984/yetus/diff-checkstyle-hcatalog_server-extensions.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9984/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9984/yetus/diff-checkstyle-standalone-metastore.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9984/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9984/yetus/patch-asflicense-problems.txt
 |
| modules | C: hcatalog/server-extensions itests/hcatalog-unit ql 

[jira] [Commented] (HIVE-19092) Some improvements in bin shell scripts

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424982#comment-16424982
 ] 

Hive QA commented on HIVE-19092:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917291/HIVE-19092.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 315 failed/errored test(s), 13678 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-19092) Some improvements in bin shell scripts

2018-04-03 Thread Saijin Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424934#comment-16424934
 ] 

Saijin Huang commented on HIVE-19092:
-

[~alangates] It is a redundant operation and I will fix it. Thank you for your 
advice.

> Some improvements in bin shell scripts
> --
>
> Key: HIVE-19092
> URL: https://issues.apache.org/jira/browse/HIVE-19092
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Minor
> Attachments: HIVE-19092.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19092) Some improvements in bin shell scripts

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424915#comment-16424915
 ] 

Hive QA commented on HIVE-19092:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
2s{color} | {color:red} The patch generated 50 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9983/dev-support/hive-personality.sh
 |
| git revision | master / 04f3be0 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9983/yetus/patch-asflicense-problems.txt
 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9983/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Some improvements in bin shell scripts
> --
>
> Key: HIVE-19092
> URL: https://issues.apache.org/jira/browse/HIVE-19092
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Minor
> Attachments: HIVE-19092.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424904#comment-16424904
 ] 

Hive QA commented on HIVE-19064:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917287/HIVE-19064.02.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 301 failed/errored test(s), 14101 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=216)
TestReplicationOnHDFSEncryptedZones - did not produce a TEST-*.xml file (likely 
timed out) (batchId=230)
TestReplicationScenarios - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestReplicationScenariosAcidTables - did not produce a TEST-*.xml file (likely 
timed out) (batchId=230)
TestReplicationScenariosAcrossInstances - did not produce a TEST-*.xml file 
(likely timed out) (batchId=230)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[quotedid_basic_standard] 
(batchId=31)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez_empty]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_groupingset_bug]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[update_access_time_non_current_db]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_semijoin_reduction]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_bucketmapjoin]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_mapjoin_14] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[sortmerge_mapjoin_mismatch_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_subquery_chain]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_bucketed_column]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_column_list_bucketing]
 (batchId=95)

[jira] [Commented] (HIVE-17645) MM tables patch conflicts with HIVE-17482 (Spark/Acid integration)

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424902#comment-16424902
 ] 

Sergey Shelukhin commented on HIVE-17645:
-

I see code everywhere using session txn manager, not just in MM tables.

> MM tables patch conflicts with HIVE-17482 (Spark/Acid integration)
> --
>
> Key: HIVE-17645
> URL: https://issues.apache.org/jira/browse/HIVE-17645
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>  Labels: mm-gap-2
>
> MM code introduces 
> {noformat}
> HiveTxnManager txnManager = SessionState.get().getTxnMgr()
> {noformat}
> in a number of places (e.g _DDLTask.generateAddMmTasks(Table tbl)_).  
> HIVE-17482 adds a mode where a TransactionManager not associated with the 
> session should be used.  This will need to be addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424901#comment-16424901
 ] 

Sergey Shelukhin commented on HIVE-17647:
-


I dunno why my patches have so much whitespace lately, probably some eclipse 
setting got reset :) Will fix w/CR feedback

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)
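
For reference, a minimal sketch of the behavior the description asks for (an 
illustration only, not the attached patch): reuse the transaction the Driver has 
already opened and fail fast otherwise, instead of opening and immediately 
committing a throwaway transaction inside DDL code. The exception type and 
message below are placeholders.
{noformat}
// Fragment-level sketch, not the actual patch.
HiveTxnManager txnManager = SessionState.get().getTxnMgr();
if (!txnManager.isTxnOpen()) {
  // DDL code should not open (and instantly commit) its own transaction.
  throw new IllegalStateException("Expected an open transaction from the Driver");
}
long mmWriteId = txnManager.getCurrentTxnId();
{noformat}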



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (HIVE-16850) Converting table to insert-only acid may open a txn in an inappropriate place

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-16850:

Comment: was deleted

(was: I dunno why my patches have so much whitespace lately, probably some 
eclipse setting got reset :) Will fix w/CR feedback)

> Converting table to insert-only acid may open a txn in an inappropriate place
> -
>
> Key: HIVE-16850
> URL: https://issues.apache.org/jira/browse/HIVE-16850
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Wei Zheng
>Assignee: Eugene Koifman
>Priority: Major
>  Labels: mm-gap-2
>
> This would work for unit-testing, but would need to be fixed for production.
> {noformat}
> HiveTxnManager txnManager = SessionState.get().getTxnMgr();
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16850) Converting table to insert-only acid may open a txn in an inappropriate place

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424899#comment-16424899
 ] 

Sergey Shelukhin commented on HIVE-16850:
-

I dunno why my patches have so much whitespace lately, probably some eclipse 
setting got reset :) Will fix w/CR feedback

> Converting table to insert-only acid may open a txn in an inappropriate place
> -
>
> Key: HIVE-16850
> URL: https://issues.apache.org/jira/browse/HIVE-16850
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Wei Zheng
>Assignee: Eugene Koifman
>Priority: Major
>  Labels: mm-gap-2
>
> This would work for unit-testing, but would need to be fixed for production.
> {noformat}
> HiveTxnManager txnManager = SessionState.get().getTxnMgr();
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424898#comment-16424898
 ] 

Sergey Shelukhin commented on HIVE-17647:
-

[~ekoifman] can you review? thnx

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424897#comment-16424897
 ] 

Sergey Shelukhin commented on HIVE-17647:
-

Adding write ID propagation in a proper manner.
There's actually some other non MM table related code that gets write IDs in 
strange places, although without opening a transaction... could be fixed in a 
similar manner.

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17647:

Status: Patch Available  (was: Open)

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17647:

Attachment: HIVE-17647.patch

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would break all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424843#comment-16424843
 ] 

Hive QA commented on HIVE-19064:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} common: The patch generated 0 new + 427 unchanged - 
1 fixed = 427 total (was 428) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
48s{color} | {color:red} ql: The patch generated 7 new + 769 unchanged - 8 
fixed = 776 total (was 777) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} standalone-metastore: The patch generated 0 new + 
562 unchanged - 3 fixed = 562 total (was 565) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 77 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 50 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9982/dev-support/hive-personality.sh
 |
| git revision | master / 04f3be0 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9982/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9982/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9982/yetus/patch-asflicense-problems.txt
 |
| modules | C: common itests ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9982/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch
>
>
> As per SQL standard. Hive currently uses `` (backticks). Default will 
> continue being backticks, but we will support identifiers 
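
For illustration only (a syntax sketch; the exact configuration switch is not 
shown here), the difference between the current backtick delimiters and 
SQL-standard double-quoted identifiers:
{noformat}
-- today: delimited identifiers use backticks
SELECT `select`, `my column` FROM t;

-- proposed mode: SQL-standard double-quoted delimited identifiers
SELECT "select", "my column" FROM t;
{noformat}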

[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-04-03 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18986:

Status: In Progress  (was: Patch Available)

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch, HIVE-18986.2.patch, 
> HIVE-18986.3.patch
>
>
> If the table contains a lot of columns, e.g. 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> a query with lots of colName='c1' && colName='c2' && ... conditions.
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  
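
One way to avoid building a single enormous filter expression (sketched here 
purely as an illustration, not necessarily what the attached patches do) is to 
touch the per-column metadata in fixed-size batches so each generated query stays 
small. The batchSize value and updateColumnsBatch(...) helper below are 
hypothetical names for this sketch.
{noformat}
// Process the column names in bounded batches so DataNucleus never has to
// render one huge combined filter (the deep recursion in SQLText.toSQL above).
int batchSize = 100;
for (int i = 0; i < allColumnNames.size(); i += batchSize) {
  List<String> batch =
      allColumnNames.subList(i, Math.min(i + batchSize, allColumnNames.size()));
  updateColumnsBatch(newTableName, batch); // one bounded metastore query per batch
}
{noformat}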



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-04-03 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18986:

Attachment: (was: HIVE-18986.3.patch)

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch, HIVE-18986.2.patch, 
> HIVE-18986.3.patch
>
>
> If the table contains a lot of columns, e.g. 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> a query with lots of colName='c1' && colName='c2' && ... conditions.
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-04-03 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18986:

Status: Patch Available  (was: In Progress)

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch, HIVE-18986.2.patch, 
> HIVE-18986.3.patch
>
>
> If the table contains a lot of columns, e.g. 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> a query with lots of colName='c1' && colName='c2' && ... conditions.
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-04-03 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18986:

Attachment: HIVE-18986.3.patch

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch, HIVE-18986.2.patch, 
> HIVE-18986.3.patch
>
>
> If the table contains a lot of columns, e.g. 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> a query with lots of colName='c1' && colName='c2' && ... conditions.
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424825#comment-16424825
 ] 

Hive QA commented on HIVE-18910:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917423/HIVE-18910.20.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 262 failed/errored test(s), 13302 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-18909) Metrics for results cache

2018-04-03 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424822#comment-16424822
 ] 

Jason Dere commented on HIVE-18909:
---

Changing System.currentTimeMillis to System.nanoTime
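
A minimal sketch of the pattern behind that change: System.nanoTime is a 
monotonic clock intended for measuring elapsed durations, whereas 
System.currentTimeMillis is wall-clock time and can jump. The timed call below is 
a hypothetical placeholder.
{noformat}
long startNs = System.nanoTime();
Object result = lookUpResultsCache(query);  // hypothetical operation being timed
long elapsedMs = (System.nanoTime() - startNs) / 1_000_000L;
{noformat}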

> Metrics for results cache
> -
>
> Key: HIVE-18909
> URL: https://issues.apache.org/jira/browse/HIVE-18909
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>  Labels: Metrics
> Attachments: HIVE-18909.1.patch, HIVE-18909.2.patch, 
> HIVE-18909.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18909) Metrics for results cache

2018-04-03 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18909:
--
Attachment: HIVE-18909.3.patch

> Metrics for results cache
> -
>
> Key: HIVE-18909
> URL: https://issues.apache.org/jira/browse/HIVE-18909
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>  Labels: Metrics
> Attachments: HIVE-18909.1.patch, HIVE-18909.2.patch, 
> HIVE-18909.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424789#comment-16424789
 ] 

Hive QA commented on HIVE-18910:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
57s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} storage-api: The patch generated 3 new + 97 unchanged 
- 3 fixed = 100 total (was 100) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} serde: The patch generated 150 new + 214 unchanged - 3 
fixed = 364 total (was 217) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hcatalog/streaming: The patch generated 1 new + 33 
unchanged - 0 fixed = 34 total (was 33) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
3s{color} | {color:red} ql: The patch generated 26 new + 1267 unchanged - 3 
fixed = 1293 total (was 1270) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 248 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 51 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9981/dev-support/hive-personality.sh
 |
| git revision | master / 04f3be0 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9981/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9981/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9981/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9981/yetus/diff-checkstyle-serde.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9981/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9981/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9981/yetus/whitespace-tabs.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9981/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api serde hbase-handler hcatalog/streaming 
itests/hive-blobstore ql standalone-metastore U: . |
| Console output | 

[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-03 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18910:
--
Attachment: HIVE-18910.21.patch

> Migrate to Murmur hash for shuffle and bucketing
> 
>
> Key: HIVE-18910
> URL: https://issues.apache.org/jira/browse/HIVE-18910
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, 
> HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.13.patch, 
> HIVE-18910.14.patch, HIVE-18910.15.patch, HIVE-18910.16.patch, 
> HIVE-18910.17.patch, HIVE-18910.18.patch, HIVE-18910.19.patch, 
> HIVE-18910.2.patch, HIVE-18910.20.patch, HIVE-18910.21.patch, 
> HIVE-18910.3.patch, HIVE-18910.4.patch, HIVE-18910.5.patch, 
> HIVE-18910.6.patch, HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch
>
>
> Hive uses Java hash, which is not as good as Murmur for distribution and 
> efficiency in bucketing a table.
> Migrate to Murmur hash but still keep backward compatibility for existing 
> users so that they don't have to reload the existing tables.
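
As a rough illustration of the idea (using Guava's Murmur3 here rather than 
Hive's own implementation), bucketing with a well-mixed hash instead of plain 
Java hashCode looks like this:
{code:java}
import com.google.common.hash.Hashing;

public class BucketIdSketch {
  // Map a key to a bucket with Murmur3 (via Guava) instead of String.hashCode(),
  // which mixes bits less thoroughly and distributes buckets less evenly.
  static int bucketFor(String key, int numBuckets) {
    int h = Hashing.murmur3_32().hashUnencodedChars(key).asInt();
    return Math.floorMod(h, numBuckets); // floorMod keeps the bucket non-negative
  }

  public static void main(String[] args) {
    System.out.println(bucketFor("some-key", 32));
  }
}
{code}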



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19091) [Hive 3.0.0 Release] Rat check failure fixes

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424736#comment-16424736
 ] 

Hive QA commented on HIVE-19091:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917282/HIVE-19091.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 244 failed/errored test(s), 13702 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Comment Edited] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424726#comment-16424726
 ] 

Sergey Shelukhin edited comment on HIVE-17970 at 4/3/18 11:13 PM:
--

[~ekoifman] can you take a look? thanks

The test results didn't change; the old approach removed the delta directories 
not matching the current one. I removed that and it broke the test; handling 
overwrite properly with a base directory fixed the test.


was (Author: sershe):
[~ekoifman] can you take a look? thanks

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data then write new data 
> into the table.  This logic makes sense for non-acid tables but for Acid/MM 
> it should work like INSERT OVERWRITE statement and write new data to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.
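
For illustration (directory names and write IDs made up for this sketch), an 
insert-only (MM) table after a LOAD DATA ... OVERWRITE handled this way would 
gain a new base directory instead of having its existing files deleted:
{noformat}
warehouse/t/delta_0000001_0000001   <- pre-existing data, left in place
warehouse/t/delta_0000002_0000002
warehouse/t/base_0000003            <- files written by the OVERWRITE; readers that
                                       see write ID 3 as committed ignore older dirs
{noformat}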



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17970:

Status: Patch Available  (was: Open)

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data then write new data 
> into the table.  This logic makes sense for non-acid tables but for Acid/MM 
> it should work like INSERT OVERWRITE statement and write new data to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424726#comment-16424726
 ] 

Sergey Shelukhin commented on HIVE-17970:
-

[~ekoifman] can you take a look? thanks

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data then write new data 
> into the table.  This logic makes sense for non-acid tables but for Acid/MM 
> it should work like INSERT OVERWRITE statement and write new data to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17970:

Attachment: HIVE-17970.patch

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data then write new data 
> into the table.  This logic makes sense for non-acid tables but for Acid/MM 
> it should work like INSERT OVERWRITE statement and write new data to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18570:
--
Target Version/s: 3.0.0
   Fix Version/s: (was: 3.0.0)

> ACID IOW implemented using base may delete too much data
> 
>
> Key: HIVE-18570
> URL: https://issues.apache.org/jira/browse/HIVE-18570
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Blocker
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> Txn 2 deletes rows committed by txn 1 that according to standard ACID 
> semantics it could have never observed and affected; this sequence of events 
> is only possible under read-uncommitted isolation level (so, 2 deletes rows 
> written by 1 before 1 commits them). 
> This is if we look at IOW as transactional delete+insert. Otherwise we are 
> just saying IOW performs "semi"-transactional delete.
> If 1 ran an update on rows instead of an insert, and 2 still ran an 
> IOW/delete, row lock conflict (or equivalent) should cause one of them to 
> fail.
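For illustration, a small self-contained sketch of why the interleaving above loses txn 1's rows: once a base_n directory exists, readers ignore every delta at or below n. The directory names are simplified stand-ins (real ACID names are zero-padded and carry min/max write ids), and this is a toy model, not the real AcidUtils.getAcidState():

{code:java}
import java.util.ArrayList;
import java.util.List;

public class BaseDeltaVisibilitySketch {

  static List<String> visibleDirs(List<String> dirs) {
    long bestBase = -1;
    for (String d : dirs) {
      if (d.startsWith("base_")) {
        bestBase = Math.max(bestBase, Long.parseLong(d.substring("base_".length())));
      }
    }
    List<String> visible = new ArrayList<>();
    for (String d : dirs) {
      if (d.startsWith("base_") && Long.parseLong(d.substring("base_".length())) == bestBase) {
        visible.add(d);
      } else if (d.startsWith("delta_")) {
        long writeId = Long.parseLong(d.split("_")[1]);
        if (writeId > bestBase) {
          visible.add(d);  // only deltas newer than the chosen base are read
        }
      }
    }
    return visible;
  }

  public static void main(String[] args) {
    // delta_0 = original data, delta_1 = txn 1's insert, base_2 = txn 2's IOW.
    System.out.println(visibleDirs(List.of("delta_0", "delta_1", "base_2")));
    // Prints [base_2]: txn 1's committed rows are never read again, which is
    // the "deletes rows it could never have observed" anomaly described above.
  }
}
{code}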



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18570) ACID IOW implemented using base may delete too much data

2018-04-03 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424696#comment-16424696
 ] 

Eugene Koifman commented on HIVE-18570:
---

Given the current state of things, the only way to prevent this is to make IOW 
take an X lock, which would block all readers as well.  So perhaps there should 
be an "is strict" type of option to enable this behavior.  Longer term we should 
enhance the lock manager (LM) to have a lock that blocks all writes but not 
reads for this (it would be useful elsewhere as well).
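
A minimal self-contained sketch of the two options above; LockChoice, EXCLUSIVE_WRITE and the strictIow flag are illustrative stand-ins, not existing Hive lock types or configuration:

{code:java}
public class IowLockSketch {

  enum LockChoice {
    EXCLUSIVE,       // blocks readers and writers (the "is strict" option)
    EXCLUSIVE_WRITE  // hypothetical: blocks other writers, readers proceed
  }

  static LockChoice lockForInsertOverwrite(boolean strictIow) {
    // Strict mode: take X so an IOW can never silently remove rows that a
    // concurrent insert is still writing (the scenario in the description).
    // Otherwise a write-blocking lock would keep readers unblocked.
    return strictIow ? LockChoice.EXCLUSIVE : LockChoice.EXCLUSIVE_WRITE;
  }

  public static void main(String[] args) {
    System.out.println(lockForInsertOverwrite(true));   // EXCLUSIVE
    System.out.println(lockForInsertOverwrite(false));  // EXCLUSIVE_WRITE
  }
}
{code}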

> ACID IOW implemented using base may delete too much data
> 
>
> Key: HIVE-18570
> URL: https://issues.apache.org/jira/browse/HIVE-18570
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> Txn 2 deletes rows committed by txn 1 that according to standard ACID 
> semantics it could have never observed and affected; this sequence of events 
> is only possible under read-uncommitted isolation level (so, 2 deletes rows 
> written by 1 before 1 commits them). 
> This is if we look at IOW as transactional delete+insert. Otherwise we are 
> just saying IOW performs "semi"-transactional delete.
> If 1 ran an update on rows instead of an insert, and 2 still ran an 
> IOW/delete, row lock conflict (or equivalent) should cause one of them to 
> fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19091) [Hive 3.0.0 Release] Rat check failure fixes

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424672#comment-16424672
 ] 

Hive QA commented on HIVE-19091:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9980/dev-support/hive-personality.sh
 |
| git revision | master / 04f3be0 |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9980/yetus/whitespace-tabs.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9980/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9980/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> [Hive 3.0.0 Release] Rat check failure fixes
> 
>
> Key: HIVE-19091
> URL: https://issues.apache.org/jira/browse/HIVE-19091
> Project: Hive
>  Issue Type: Task
>  Components: Standalone Metastore
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19091.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18525) Add explain plan to Hive on Spark Web UI

2018-04-03 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424665#comment-16424665
 ] 

Aihua Xu commented on HIVE-18525:
-

[~stakiar] This enhancement looks great. Can you take a look at the test failures? 
They seem related to the change.

> Add explain plan to Hive on Spark Web UI
> 
>
> Key: HIVE-18525
> URL: https://issues.apache.org/jira/browse/HIVE-18525
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18525.1.patch, HIVE-18525.2.patch, 
> HIVE-18525.3.patch, HIVE-18525.4.patch, Job-Page-Collapsed.png, 
> Job-Page-Expanded.png, Map-Explain-Plan.png, Reduce-Explain-Plan.png
>
>
> More of an investigation JIRA. The Spark UI has a "long description" of each 
> stage in the Spark DAG. Typically one stage in the Spark DAG corresponds to 
> either a {{MapWork}} or {{ReduceWork}} object. It would be useful if the long 
> description contained the explain plan of the corresponding work object.
> I'm not sure how much additional overhead this would introduce. If not the 
> full explain plan, then maybe a modified one that just lists out all the 
> operator tree along with each operator name.
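A minimal sketch of the idea (illustrative only: getOperatorSummary() is a hypothetical helper, and treating the call site as what feeds the stage description shown in the Spark UI is an assumption):

{code:java}
import org.apache.spark.api.java.JavaSparkContext;

public class SparkStageDescriptionSketch {

  /** Hypothetical stand-in for a compact explain plan of one MapWork/ReduceWork. */
  static String getOperatorSummary() {
    return "TS[0] -> FIL[1] -> SEL[2] -> GBY[3] -> RS[4]";
  }

  /** Tag the next job submitted for this work so the summary shows up in the UI. */
  static void tagWork(JavaSparkContext jsc, String workName) {
    jsc.setCallSite(workName + ": " + getOperatorSummary());
  }

  public static void main(String[] args) {
    // In Hive on Spark the JavaSparkContext comes from the remote/local Spark
    // client; its construction is omitted here.
    // tagWork(jsc, "Map 1");
  }
}
{code}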



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18570:

Description: 
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.

Txn 2 deletes rows committed by txn 1 that according to standard ACID semantics 
it could have never observed and affected; this sequence of events is only 
possible under read-uncommitted isolation level (so, 2 deletes rows written by 
1 before 1 commits them). 
This is if we look at IOW as transactional delete+insert. Otherwise we are just 
saying IOW performs "semi"-transactional delete.

If 1 ran an update on rows instead of an insert, and 2 still ran an IOW/delete, 
row lock conflict (or equivalent) should cause one of them to fail.





  was:
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.

Txn 2 deletes rows committed by txn 1 that according to standard ACID semantics 
it could have never observed and affected; this sequence of events is only 
possible under read-uncommitted isolation level (so, 2 deletes rows written by 
1 before 1 commits them). 
If 1 ran an update on rows instead of an insert, and 2 still ran an IOW/delete, 
row lock conflict (or equivalent) should cause one of them to fail.






> ACID IOW implemented using base may delete too much data
> 
>
> Key: HIVE-18570
> URL: https://issues.apache.org/jira/browse/HIVE-18570
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> Txn 2 deletes rows committed by txn 1 that according to standard ACID 
> semantics it could have never observed and affected; this sequence of events 
> is only possible under read-uncommitted isolation level (so, 2 deletes rows 
> written by 1 before 1 commits them). 
> This is if we look at IOW as transactional delete+insert. Otherwise we are 
> just saying IOW performs "semi"-transactional delete.
> If 1 ran an update on rows instead of an insert, and 2 still ran an 
> IOW/delete, row lock conflict (or equivalent) should cause one of them to 
> fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18570:

Description: 
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.

Txn 2 deletes rows committed by txn 1 that according to standard ACID semantics 
it could have never observed and affected; this sequence of events is only 
possible under read-uncommitted isolation level (so, 2 deletes rows written by 
1 before 1 commits them). 
If 1 ran an update on rows instead of an insert, and 2 still ran an IOW/delete, 
row lock conflict (or equivalent) should cause one of them to fail.





  was:
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.

Txn 2 deletes rows committed by txn 1 that according to standard ACID semantics 
it could have never observed and affected.

If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID 
semantics), it seems to me this sequence of events is only possible under 
read-uncommitted isolation level (so, 2 deletes rows written by 1).
If 1 ran an update on rows instead of an insert, and 2 still ran an IOW/delete, 
row lock conflict (or equivalent) should cause one of them to fail.






> ACID IOW implemented using base may delete too much data
> 
>
> Key: HIVE-18570
> URL: https://issues.apache.org/jira/browse/HIVE-18570
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> Txn 2 deletes rows committed by txn 1 that according to standard ACID 
> semantics it could have never observed and affected; this sequence of events 
> is only possible under read-uncommitted isolation level (so, 2 deletes rows 
> written by 1 before 1 commits them). 
> If 1 ran an update on rows instead of an insert, and 2 still ran an 
> IOW/delete, row lock conflict (or equivalent) should cause one of them to 
> fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18570) ACID IOW implemented using base may delete too much data

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424659#comment-16424659
 ] 

Sergey Shelukhin commented on HIVE-18570:
-

[~hagleitn] [~ashutoshc] [~ekoifman] I think this is another thing that needs 
to be addressed for ACID; although I guess the current ACID behavior is still 
better than that of regular Hive tables.

> ACID IOW implemented using base may delete too much data
> 
>
> Key: HIVE-18570
> URL: https://issues.apache.org/jira/browse/HIVE-18570
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> Txn 2 deletes rows committed by txn 1 that according to standard ACID 
> semantics it could have never observed and affected.
> If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID 
> semantics), it seems to me this sequence of events is only possible under 
> read-uncommitted isolation level (so, 2 deletes rows written by 1).
> If 1 ran an update on rows instead of an insert, and 2 still ran an 
> IOW/delete, row lock conflict (or equivalent) should cause one of them to 
> fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18570:

Description: 
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.

Txn 2 deletes rows committed by txn 1 that according to standard ACID semantics 
it could have never observed and affected.

If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID 
semantics), it seems to me this sequence of events is only possible under 
read-uncommitted isolation level (so, 2 deletes rows written by 1).
If 1 ran an update on rows instead of an insert, and 2 still ran an IOW/delete, 
row lock conflict (or equivalent) should cause one of them to fail.





  was:
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.

Txn 2 deletes rows committed by txn 1 that according to standard ACID semantics 
it could have never observed.

If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID 
semantics), it seems to me this sequence of events is only possible under 
read-uncommitted isolation level (so, 2 deletes rows written by 1).
If 1 ran an update on rows instead of an insert, and 2 still ran an IOW/delete, 
row lock conflict (or equivalent) should cause one of them to fail.






> ACID IOW implemented using base may delete too much data
> 
>
> Key: HIVE-18570
> URL: https://issues.apache.org/jira/browse/HIVE-18570
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> Txn 2 deletes rows committed by txn 1 that according to standard ACID 
> semantics it could have never observed and affected.
> If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID 
> semantics), it seems to me this sequence of events is only possible under 
> read-uncommitted isolation level (so, 2 deletes rows written by 1).
> If 1 ran an update on rows instead of an insert, and 2 still ran an 
> IOW/delete, row lock conflict (or equivalent) should cause one of them to 
> fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18570:

Description: 
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.

Txn 2 deletes rows committed by txn 1 that according to standard ACID semantics 
it could have never observed.

If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID 
semantics), it seems to me this sequence of events is only possible under 
read-uncommitted isolation level (so, 2 deletes rows written by 1).
If 1 ran an update on rows instead of an insert, and 2 still ran an IOW/delete, 
row lock conflict (or equivalent) should cause one of them to fail.





  was:
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.

If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID 
semantics), it seems to me this sequence of events is only possible under 
read-uncommitted isolation level (so, 2 deletes rows written by 1).
Under any other isolation level rows written by 1 must survive, or there must 
be some lock based change in sequence or conflict.
Update: to clarify, if 1 ran an update on rows instead of an insert, and 2 
still ran an IOW/delete, row lock conflict (or equivalent) should cause one of 
them to fail.






> ACID IOW implemented using base may delete too much data
> 
>
> Key: HIVE-18570
> URL: https://issues.apache.org/jira/browse/HIVE-18570
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> Txn 2 deletes rows committed by txn 1 that according to standard ACID 
> semantics it could have never observed.
> If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID 
> semantics), it seems to me this sequence of events is only possible under 
> read-uncommitted isolation level (so, 2 deletes rows written by 1).
> If 1 ran an update on rows instead of an insert, and 2 still ran an 
> IOW/delete, row lock conflict (or equivalent) should cause one of them to 
> fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424642#comment-16424642
 ] 

Hive QA commented on HIVE-18910:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917423/HIVE-18910.20.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 260 failed/errored test(s), 13305 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424572#comment-16424572
 ] 

Hive QA commented on HIVE-18910:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
58s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} storage-api: The patch generated 3 new + 97 unchanged 
- 3 fixed = 100 total (was 100) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
21s{color} | {color:red} serde: The patch generated 150 new + 214 unchanged - 3 
fixed = 364 total (was 217) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/streaming: The patch generated 1 new + 33 
unchanged - 0 fixed = 34 total (was 33) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
4s{color} | {color:red} ql: The patch generated 26 new + 1267 unchanged - 3 
fixed = 1293 total (was 1270) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 248 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 51 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9979/dev-support/hive-personality.sh
 |
| git revision | master / 064eac2 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9979/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9979/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9979/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9979/yetus/diff-checkstyle-serde.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9979/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9979/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9979/yetus/whitespace-tabs.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9979/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api serde hbase-handler hcatalog/streaming 
itests/hive-blobstore ql standalone-metastore U: . |
| Console output | 

[jira] [Assigned] (HIVE-17855) conversion to MM tables via alter may be broken

2018-04-03 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom reassigned HIVE-17855:
-

Assignee: Steve Yeom

> conversion to MM tables via alter may be broken
> ---
>
> Key: HIVE-17855
> URL: https://issues.apache.org/jira/browse/HIVE-17855
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Steve Yeom
>Priority: Major
>  Labels: mm-gap-2
>
> {noformat}
> git difftool 77511070dd^ 77511070dd -- */mm_conversions.q
> {noformat}
> Looks like during the ACID "integration", alter was quietly changed to 
> create+insert, because the alter path is broken.
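
For reference, "conversion via alter" here means flipping an existing plain table to insert-only (MM) with table properties; a hedged JDBC example follows (the connection URL and table name are placeholders, and the exact property names should be checked against the current docs):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MmConversionExample {
  public static void main(String[] args) throws Exception {
    // Placeholders: adjust the JDBC URL and table name for your cluster.
    try (Connection conn =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement()) {
      // The conversion this issue is about: turn a plain managed table into an
      // insert-only (MM) transactional table in place, instead of create+insert.
      stmt.execute("ALTER TABLE plain_tbl SET TBLPROPERTIES ("
          + "'transactional'='true', 'transactional_properties'='insert_only')");
    }
  }
}
{code}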



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17661) DBTxnManager.acquireLocks() - MM tables should use shared lock for Insert

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17661:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> DBTxnManager.acquireLocks() - MM tables should use shared lock for Insert
> -
>
> Key: HIVE-17661
> URL: https://issues.apache.org/jira/browse/HIVE-17661
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Fix For: 3.0.0
>
> Attachments: HIVE-17661.01.patch, HIVE-17661.patch
>
>
> {noformat}
> case INSERT:
>   assert t != null;
>   if(AcidUtils.isFullAcidTable(t)) {
> compBuilder.setShared();
>   }
>   else {
> if 
> (conf.getBoolVar(HiveConf.ConfVars.HIVE_TXN_STRICT_LOCKING_MODE)) {
> {noformat}
> _if(AcidUtils.isFullAcidTable(t)) {_ 
> should probably be 
> _if(AcidUtils.isAcidTable(t)) {_
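
A self-contained sketch of the condition change the description suggests; the booleans stand in for AcidUtils.isFullAcidTable()/isAcidTable() and the strict-locking conf check, and LockType is illustrative rather than the real lock component API:

{code:java}
public class InsertLockSketch {

  enum LockType { SHARED, EXCLUSIVE }

  static LockType lockForInsert(boolean isFullAcid, boolean isInsertOnlyMm,
                                boolean strictLocking) {
    // Suggested behavior: any transactional table (full ACID or MM/insert-only)
    // takes a shared lock for INSERT, since concurrent inserts land in
    // separate delta directories and do not conflict.
    if (isFullAcid || isInsertOnlyMm) {
      return LockType.SHARED;
    }
    // Non-transactional tables keep the strict-locking-mode dependent behavior.
    return strictLocking ? LockType.EXCLUSIVE : LockType.SHARED;
  }

  public static void main(String[] args) {
    System.out.println(lockForInsert(false, true, true));   // SHARED for an MM table
    System.out.println(lockForInsert(false, false, true));  // EXCLUSIVE for a plain table
  }
}
{code}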



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17861) MM tables - multi-IOW is broken

2018-04-03 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom reassigned HIVE-17861:
-

Assignee: Steve Yeom

> MM tables - multi-IOW is broken
> ---
>
> Key: HIVE-17861
> URL: https://issues.apache.org/jira/browse/HIVE-17861
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Steve Yeom
>Priority: Major
>  Labels: mm-gap-2
>
> After HIVE-17856, see if multi-IOW was commented out because of the IOW issues 
> or because it is broken in addition to IOW being broken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424510#comment-16424510
 ] 

Hive QA commented on HIVE-19064:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12917287/HIVE-19064.02.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 204 failed/errored test(s), 13308 tests 
executed
*Failed tests:*
{noformat}
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestExportImport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424506#comment-16424506
 ] 

Alan Gates commented on HIVE-19100:
---

+1.

> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch, HIVE-19100.02.patch, 
> HIVE-19100.03.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: HIVE-19014.05.patch

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.05.patch, 
> HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19044) Duplicate field names within Druid Query Generated by Calcite plan

2018-04-03 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19044:
--
Description: Test case is attached to the Jira Patch  (was: This is the 
Query plan as you can see "$f4" is duplicated.
{code}
PREHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) AS 
temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
PREHOOK: type: QUERY
POSTHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) AS 
temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
POSTHOOK: type: QUERY
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
Fetch Operator
  limit: -1
  Processor Tree:
TableScan
  alias: calcs
  properties:
druid.fieldNames key,$f1,$f2,$f3,$f4
druid.fieldTypes string,double,bigint,double,double
druid.query.json 
{"queryType":"groupBy","dataSource":"druid_tableau.calcs","granularity":"all","dimensions":[{"type":"default","dimension":"key","outputName":"key","outputType":"STRING"}],"limitSpec":{"type":"default"},"aggregations":[{"type":"doubleSum","name":"$f1","fieldName":"num0"},{"type":"filtered","filter":{"type":"not","field":{"type":"selector","dimension":"num0","value":null}},"aggregator":{"type":"count","name":"$f2","fieldName":"num0"}},{"type":"doubleSum","name":"$f3","expression":"(\"num0\"
 * \"num0\")"},{"type":"doubleSum","name":"$f4","expression":"(\"num0\" * 
\"num0\")"}],"postAggregations":[{"type":"expression","name":"$f4","expression":"pow(((\"$f4\"
 - ((\"$f1\" * \"$f1\") / \"$f2\")) / 
\"$f2\"),0.5)"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"]}
druid.query.type groupBy
  Select Operator
expressions: key (type: string), $f1 (type: double), $f2 (type: 
bigint), $f3 (type: double), $f4 (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4
ListSink
{code}
Table DDL 
{code}
create database druid_tableau;
use druid_tableau;
drop table if exists calcs;
create table calcs
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES (
  "druid.segment.granularity" = "MONTH",
  "druid.query.granularity" = "DAY")
AS SELECT
  cast(datetime0 as timestamp with local time zone) `__time`,
  key,
  str0, str1, str2, str3,
  date0, date1, date2, date3,
  time0, time1,
  datetime1,
  zzz,
  cast(bool0 as string) bool0,
  cast(bool1 as string) bool1,
  cast(bool2 as string) bool2,
  cast(bool3 as string) bool3,
  int0, int1, int2, int3,
  num0, num1, num2, num3, num4
from default.calcs_orc;
{code})
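
For illustration, a small self-contained check for the collision visible in the plan above: the name "$f4" is used by both an aggregation and a post-aggregation. The name lists are copied from the druid.query.json shown in the description:

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DruidNameCollisionSketch {
  public static void main(String[] args) {
    // Output names taken from the druid.query.json above.
    List<String> aggregationNames = List.of("$f1", "$f2", "$f3", "$f4");
    List<String> postAggregationNames = List.of("$f4");  // the pow(...) expression

    Set<String> seen = new HashSet<>(aggregationNames);
    for (String name : postAggregationNames) {
      if (!seen.add(name)) {
        System.out.println("duplicate output name: " + name);  // prints $f4
      }
    }
  }
}
{code}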

> Duplicate field names within Druid Query Generated by Calcite plan
> --
>
> Key: HIVE-19044
> URL: https://issues.apache.org/jira/browse/HIVE-19044
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19044.patch
>
>
> Test case is attached to the Jira Patch



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19099) HIVE-18755 forgot to update derby install script in metastore

2018-04-03 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-19099:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

The fix for HIVE-18775 addresses this problem.

> HIVE-18755 forgot to update derby install script in metastore
> -
>
> Key: HIVE-19099
> URL: https://issues.apache.org/jira/browse/HIVE-19099
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HIVE-19099.patch
>
>
> metastore/scripts/upgrade/derby/hive-schema-3.0 was not properly updated 
> with the new and changed tables for catalogs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18775) HIVE-17983 missed deleting metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql

2018-04-03 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-18775:
--
Attachment: HIVE-18775.3.patch

> HIVE-17983 missed deleting 
> metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql
> --
>
> Key: HIVE-18775
> URL: https://issues.apache.org/jira/browse/HIVE-18775
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Alan Gates
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HIVE-18775.1.patch, HIVE-18775.2.patch, 
> HIVE-18775.3.patch
>
>
> HIVE-17983 moved hive metastore schema sql files for all databases but derby 
> to standalone-metastore. As a result there are now two copies of 
> {{hive-schema-3.0.0.derby.sql}}.
> {{metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql}} needs to be 
> removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18775) HIVE-17983 missed deleting metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql

2018-04-03 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424500#comment-16424500
 ] 

Alan Gates commented on HIVE-18775:
---

New version of the patch with the pom changes.

> HIVE-17983 missed deleting 
> metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql
> --
>
> Key: HIVE-18775
> URL: https://issues.apache.org/jira/browse/HIVE-18775
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Alan Gates
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HIVE-18775.1.patch, HIVE-18775.2.patch, 
> HIVE-18775.3.patch
>
>
> HIVE-17983 moved hive metastore schema sql files for all databases but derby 
> to standalone-metastore. As a result there are now two copies of 
> {{hive-schema-3.0.0.derby.sql}}.
> {{metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql}} needs to be 
> removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19100:
--
Attachment: HIVE-19100.03.patch

> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch, HIVE-19100.02.patch, 
> HIVE-19100.03.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18775) HIVE-17983 missed deleting metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql

2018-04-03 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates reassigned HIVE-18775:
-

Assignee: Alan Gates  (was: Vineet Garg)

> HIVE-17983 missed deleting 
> metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql
> --
>
> Key: HIVE-18775
> URL: https://issues.apache.org/jira/browse/HIVE-18775
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Alan Gates
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HIVE-18775.1.patch, HIVE-18775.2.patch
>
>
> HIVE-17983 moved hive metastore schema sql files for all databases but derby 
> to standalone-metastore. As a result there are now two copies of 
> {{hive-schema-3.0.0.derby.sql}}.
> {{metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql}} needs to be 
> removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: (was: HIVE-19014.05.patch)

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424497#comment-16424497
 ] 

Sergey Shelukhin commented on HIVE-19014:
-

Fixing the test init issue

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.05.patch, 
> HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19014) utilize YARN-8028 (queue ACL check) in Hive Tez session pool

2018-04-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19014:

Attachment: HIVE-19014.05.patch

> utilize YARN-8028 (queue ACL check) in Hive Tez session pool
> 
>
> Key: HIVE-19014
> URL: https://issues.apache.org/jira/browse/HIVE-19014
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19014.01.patch, HIVE-19014.02.patch, 
> HIVE-19014.03.patch, HIVE-19014.04.patch, HIVE-19014.05.patch, 
> HIVE-19014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19044) Duplicate field names within Druid Query Generated by Calcite plan

2018-04-03 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424486#comment-16424486
 ] 

slim bouguerra edited comment on HIVE-19044 at 4/3/18 7:35 PM:
---

This patch is a test case that reproduces the issue, plus the expected 
results.
[~ashutoshc]/[~jcamachorodriguez] This can be merged when the Calcite fix is 
merged and present in the Hive runtime libs.


was (Author: bslim):
This patch is a test case that reproduces the issue, this can be merged when 
the calcite fix is merged and present at the Hive runtime libs.

> Duplicate field names within Druid Query Generated by Calcite plan
> --
>
> Key: HIVE-19044
> URL: https://issues.apache.org/jira/browse/HIVE-19044
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19044.patch
>
>
> This is the Query plan as you can see "$f4" is duplicated.
> {code}
> PREHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) AS 
> temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> PREHOOK: type: QUERY
> POSTHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) 
> AS temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> POSTHOOK: type: QUERY
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: calcs
>   properties:
> druid.fieldNames key,$f1,$f2,$f3,$f4
> druid.fieldTypes string,double,bigint,double,double
> druid.query.json 
> {"queryType":"groupBy","dataSource":"druid_tableau.calcs","granularity":"all","dimensions":[{"type":"default","dimension":"key","outputName":"key","outputType":"STRING"}],"limitSpec":{"type":"default"},"aggregations":[{"type":"doubleSum","name":"$f1","fieldName":"num0"},{"type":"filtered","filter":{"type":"not","field":{"type":"selector","dimension":"num0","value":null}},"aggregator":{"type":"count","name":"$f2","fieldName":"num0"}},{"type":"doubleSum","name":"$f3","expression":"(\"num0\"
>  * \"num0\")"},{"type":"doubleSum","name":"$f4","expression":"(\"num0\" * 
> \"num0\")"}],"postAggregations":[{"type":"expression","name":"$f4","expression":"pow(((\"$f4\"
>  - ((\"$f1\" * \"$f1\") / \"$f2\")) / 
> \"$f2\"),0.5)"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"]}
> druid.query.type groupBy
>   Select Operator
> expressions: key (type: string), $f1 (type: double), $f2 (type: 
> bigint), $f3 (type: double), $f4 (type: double)
> outputColumnNames: _col0, _col1, _col2, _col3, _col4
> ListSink
> {code}
> Table DDL 
> {code}
> create database druid_tableau;
> use druid_tableau;
> drop table if exists calcs;
> create table calcs
> STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
> TBLPROPERTIES (
>   "druid.segment.granularity" = "MONTH",
>   "druid.query.granularity" = "DAY")
> AS SELECT
>   cast(datetime0 as timestamp with local time zone) `__time`,
>   key,
>   str0, str1, str2, str3,
>   date0, date1, date2, date3,
>   time0, time1,
>   datetime1,
>   zzz,
>   cast(bool0 as string) bool0,
>   cast(bool1 as string) bool1,
>   cast(bool2 as string) bool2,
>   cast(bool3 as string) bool3,
>   int0, int1, int2, int3,
>   num0, num1, num2, num3, num4
> from default.calcs_orc;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19044) Duplicate field names within Druid Query Generated by Calcite plan

2018-04-03 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424486#comment-16424486
 ] 

slim bouguerra edited comment on HIVE-19044 at 4/3/18 7:34 PM:
---

This patch is a test case that reproduces the issue, this can be merged when 
the calcite fix is merged and present at the Hive runtime libs.


was (Author: bslim):
This patch is a test case that reproduce the issue, this can be merged when the 
calcite fix is merged and present at the Hive runtime libs.

> Duplicate field names within Druid Query Generated by Calcite plan
> --
>
> Key: HIVE-19044
> URL: https://issues.apache.org/jira/browse/HIVE-19044
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19044.patch
>
>
> This is the Query plan as you can see "$f4" is duplicated.
> {code}
> PREHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) AS 
> temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> PREHOOK: type: QUERY
> POSTHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) 
> AS temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> POSTHOOK: type: QUERY
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: calcs
>   properties:
> druid.fieldNames key,$f1,$f2,$f3,$f4
> druid.fieldTypes string,double,bigint,double,double
> druid.query.json 
> {"queryType":"groupBy","dataSource":"druid_tableau.calcs","granularity":"all","dimensions":[{"type":"default","dimension":"key","outputName":"key","outputType":"STRING"}],"limitSpec":{"type":"default"},"aggregations":[{"type":"doubleSum","name":"$f1","fieldName":"num0"},{"type":"filtered","filter":{"type":"not","field":{"type":"selector","dimension":"num0","value":null}},"aggregator":{"type":"count","name":"$f2","fieldName":"num0"}},{"type":"doubleSum","name":"$f3","expression":"(\"num0\"
>  * \"num0\")"},{"type":"doubleSum","name":"$f4","expression":"(\"num0\" * 
> \"num0\")"}],"postAggregations":[{"type":"expression","name":"$f4","expression":"pow(((\"$f4\"
>  - ((\"$f1\" * \"$f1\") / \"$f2\")) / 
> \"$f2\"),0.5)"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"]}
> druid.query.type groupBy
>   Select Operator
> expressions: key (type: string), $f1 (type: double), $f2 (type: 
> bigint), $f3 (type: double), $f4 (type: double)
> outputColumnNames: _col0, _col1, _col2, _col3, _col4
> ListSink
> {code}
> Table DDL 
> {code}
> create database druid_tableau;
> use druid_tableau;
> drop table if exists calcs;
> create table calcs
> STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
> TBLPROPERTIES (
>   "druid.segment.granularity" = "MONTH",
>   "druid.query.granularity" = "DAY")
> AS SELECT
>   cast(datetime0 as timestamp with local time zone) `__time`,
>   key,
>   str0, str1, str2, str3,
>   date0, date1, date2, date3,
>   time0, time1,
>   datetime1,
>   zzz,
>   cast(bool0 as string) bool0,
>   cast(bool1 as string) bool1,
>   cast(bool2 as string) bool2,
>   cast(bool3 as string) bool3,
>   int0, int1, int2, int3,
>   num0, num1, num2, num3, num4
> from default.calcs_orc;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19044) Duplicate field names within Druid Query Generated by Calcite plan

2018-04-03 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424486#comment-16424486
 ] 

slim bouguerra commented on HIVE-19044:
---

This patch is a test case that reproduce the issue, this can be merged when the 
calcite fix is merged and present at the Hive runtime libs.

> Duplicate field names within Druid Query Generated by Calcite plan
> --
>
> Key: HIVE-19044
> URL: https://issues.apache.org/jira/browse/HIVE-19044
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19044.patch
>
>
> This is the Query plan as you can see "$f4" is duplicated.
> {code}
> PREHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) AS 
> temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> PREHOOK: type: QUERY
> POSTHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) 
> AS temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> POSTHOOK: type: QUERY
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: calcs
>   properties:
> druid.fieldNames key,$f1,$f2,$f3,$f4
> druid.fieldTypes string,double,bigint,double,double
> druid.query.json 
> {"queryType":"groupBy","dataSource":"druid_tableau.calcs","granularity":"all","dimensions":[{"type":"default","dimension":"key","outputName":"key","outputType":"STRING"}],"limitSpec":{"type":"default"},"aggregations":[{"type":"doubleSum","name":"$f1","fieldName":"num0"},{"type":"filtered","filter":{"type":"not","field":{"type":"selector","dimension":"num0","value":null}},"aggregator":{"type":"count","name":"$f2","fieldName":"num0"}},{"type":"doubleSum","name":"$f3","expression":"(\"num0\"
>  * \"num0\")"},{"type":"doubleSum","name":"$f4","expression":"(\"num0\" * 
> \"num0\")"}],"postAggregations":[{"type":"expression","name":"$f4","expression":"pow(((\"$f4\"
>  - ((\"$f1\" * \"$f1\") / \"$f2\")) / 
> \"$f2\"),0.5)"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"]}
> druid.query.type groupBy
>   Select Operator
> expressions: key (type: string), $f1 (type: double), $f2 (type: 
> bigint), $f3 (type: double), $f4 (type: double)
> outputColumnNames: _col0, _col1, _col2, _col3, _col4
> ListSink
> {code}
> Table DDL 
> {code}
> create database druid_tableau;
> use druid_tableau;
> drop table if exists calcs;
> create table calcs
> STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
> TBLPROPERTIES (
>   "druid.segment.granularity" = "MONTH",
>   "druid.query.granularity" = "DAY")
> AS SELECT
>   cast(datetime0 as timestamp with local time zone) `__time`,
>   key,
>   str0, str1, str2, str3,
>   date0, date1, date2, date3,
>   time0, time1,
>   datetime1,
>   zzz,
>   cast(bool0 as string) bool0,
>   cast(bool1 as string) bool1,
>   cast(bool2 as string) bool2,
>   cast(bool3 as string) bool3,
>   int0, int1, int2, int3,
>   num0, num1, num2, num3, num4
> from default.calcs_orc;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19044) Duplicate field names within Druid Query Generated by Calcite plan

2018-04-03 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19044:
--
Status: Patch Available  (was: Open)

> Duplicate field names within Druid Query Generated by Calcite plan
> --
>
> Key: HIVE-19044
> URL: https://issues.apache.org/jira/browse/HIVE-19044
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19044.patch
>
>
> This is the query plan; as you can see, "$f4" is duplicated.
> {code}
> PREHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) AS 
> temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> PREHOOK: type: QUERY
> POSTHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) 
> AS temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> POSTHOOK: type: QUERY
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: calcs
>   properties:
> druid.fieldNames key,$f1,$f2,$f3,$f4
> druid.fieldTypes string,double,bigint,double,double
> druid.query.json 
> {"queryType":"groupBy","dataSource":"druid_tableau.calcs","granularity":"all","dimensions":[{"type":"default","dimension":"key","outputName":"key","outputType":"STRING"}],"limitSpec":{"type":"default"},"aggregations":[{"type":"doubleSum","name":"$f1","fieldName":"num0"},{"type":"filtered","filter":{"type":"not","field":{"type":"selector","dimension":"num0","value":null}},"aggregator":{"type":"count","name":"$f2","fieldName":"num0"}},{"type":"doubleSum","name":"$f3","expression":"(\"num0\"
>  * \"num0\")"},{"type":"doubleSum","name":"$f4","expression":"(\"num0\" * 
> \"num0\")"}],"postAggregations":[{"type":"expression","name":"$f4","expression":"pow(((\"$f4\"
>  - ((\"$f1\" * \"$f1\") / \"$f2\")) / 
> \"$f2\"),0.5)"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"]}
> druid.query.type groupBy
>   Select Operator
> expressions: key (type: string), $f1 (type: double), $f2 (type: 
> bigint), $f3 (type: double), $f4 (type: double)
> outputColumnNames: _col0, _col1, _col2, _col3, _col4
> ListSink
> {code}
> Table DDL 
> {code}
> create database druid_tableau;
> use druid_tableau;
> drop table if exists calcs;
> create table calcs
> STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
> TBLPROPERTIES (
>   "druid.segment.granularity" = "MONTH",
>   "druid.query.granularity" = "DAY")
> AS SELECT
>   cast(datetime0 as timestamp with local time zone) `__time`,
>   key,
>   str0, str1, str2, str3,
>   date0, date1, date2, date3,
>   time0, time1,
>   datetime1,
>   zzz,
>   cast(bool0 as string) bool0,
>   cast(bool1 as string) bool1,
>   cast(bool2 as string) bool2,
>   cast(bool3 as string) bool3,
>   int0, int1, int2, int3,
>   num0, num1, num2, num3, num4
> from default.calcs_orc;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19044) Duplicate field names within Druid Query Generated by Calcite plan

2018-04-03 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19044:
--
Attachment: HIVE-19044.patch

> Duplicate field names within Druid Query Generated by Calcite plan
> --
>
> Key: HIVE-19044
> URL: https://issues.apache.org/jira/browse/HIVE-19044
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19044.patch
>
>
> This is the query plan; as you can see, "$f4" is duplicated.
> {code}
> PREHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) AS 
> temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> PREHOOK: type: QUERY
> POSTHOOK: query: EXPLAIN SELECT Calcs.key AS none_key_nk,   SUM(Calcs.num0) 
> AS temp_z_stdevp_num0___1723718801__0_,   COUNT(Calcs.num0) AS 
> temp_z_stdevp_num0___2730138885__0_,   SUM((Calcs.num0 * Calcs.num0)) AS 
> temp_z_stdevp_num0___4071133194__0_,   STDDEV_POP(Calcs.num0) AS stp_num0_ok 
> FROM druid_tableau.calcs Calcs GROUP BY Calcs.key
> POSTHOOK: type: QUERY
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: calcs
>   properties:
> druid.fieldNames key,$f1,$f2,$f3,$f4
> druid.fieldTypes string,double,bigint,double,double
> druid.query.json 
> {"queryType":"groupBy","dataSource":"druid_tableau.calcs","granularity":"all","dimensions":[{"type":"default","dimension":"key","outputName":"key","outputType":"STRING"}],"limitSpec":{"type":"default"},"aggregations":[{"type":"doubleSum","name":"$f1","fieldName":"num0"},{"type":"filtered","filter":{"type":"not","field":{"type":"selector","dimension":"num0","value":null}},"aggregator":{"type":"count","name":"$f2","fieldName":"num0"}},{"type":"doubleSum","name":"$f3","expression":"(\"num0\"
>  * \"num0\")"},{"type":"doubleSum","name":"$f4","expression":"(\"num0\" * 
> \"num0\")"}],"postAggregations":[{"type":"expression","name":"$f4","expression":"pow(((\"$f4\"
>  - ((\"$f1\" * \"$f1\") / \"$f2\")) / 
> \"$f2\"),0.5)"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"]}
> druid.query.type groupBy
>   Select Operator
> expressions: key (type: string), $f1 (type: double), $f2 (type: 
> bigint), $f3 (type: double), $f4 (type: double)
> outputColumnNames: _col0, _col1, _col2, _col3, _col4
> ListSink
> {code}
> Table DDL 
> {code}
> create database druid_tableau;
> use druid_tableau;
> drop table if exists calcs;
> create table calcs
> STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
> TBLPROPERTIES (
>   "druid.segment.granularity" = "MONTH",
>   "druid.query.granularity" = "DAY")
> AS SELECT
>   cast(datetime0 as timestamp with local time zone) `__time`,
>   key,
>   str0, str1, str2, str3,
>   date0, date1, date2, date3,
>   time0, time1,
>   datetime1,
>   zzz,
>   cast(bool0 as string) bool0,
>   cast(bool1 as string) bool1,
>   cast(bool2 as string) bool2,
>   cast(bool3 as string) bool3,
>   int0, int1, int2, int3,
>   num0, num1, num2, num3, num4
> from default.calcs_orc;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424477#comment-16424477
 ] 

Eugene Koifman commented on HIVE-19100:
---

This turned out to be entirely self-inflicted: a consequence of "add partition" without 
any data allocating a writeId, which it does not need to do.
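
A minimal sketch of that sequence (table and partition names are made up; this is not from the patch):
{code}
-- ACID table: adding an empty partition is a metadata-only operation and
-- should not consume a write id.
CREATE TABLE acid_part (a INT) PARTITIONED BY (p STRING)
  STORED AS ORC TBLPROPERTIES ('transactional'='true');

ALTER TABLE acid_part ADD PARTITION (p='1');          -- no data: no write id needed
INSERT INTO acid_part PARTITION (p='1') VALUES (1);   -- first real write

-- If ADD PARTITION allocated a write id, the INSERT above would land in
-- delta_2_2 rather than delta_1_1 (directory names abbreviated), which is
-- the kind of shift the TestStreaming delta-name assertions trip over.
{code}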

> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch, HIVE-19100.02.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16897) repl load does not lead to excessive memory consumption for multiple functions from same binary jar

2018-04-03 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16897:
---
Fix Version/s: (was: 3.0.0)

> repl load does not lead to excessive memory consumption for multiple 
> functions from same binary  jar
> 
>
> Key: HIVE-16897
> URL: https://issues.apache.org/jira/browse/HIVE-16897
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Major
>
> As part of function replication we currently keep a separate copy of the 
> binary jar associated with each function (the same thing happens on the primary 
> warehouse, since each HDFS jar location given during function creation downloads 
> the resource into a separate resource location, leading to the same jar being 
> included in the classpath multiple times).
> This leads to excessive space being used to keep all the jars on the classpath. 
> Solve this by identifying the common binary jar (using a checksum from the 
> primary on the replica) and not creating multiple copies, thus preventing 
> excessive memory usage.
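
For context, a sketch of the scenario with made-up function and jar names: two functions 
created from the same HDFS jar each end up with their own downloaded copy on the replica today.
{code}
-- Both functions reference the same jar on HDFS, but each CREATE FUNCTION
-- currently downloads it into its own resource directory on the replica,
-- so the identical jar ends up on the classpath twice.
CREATE FUNCTION repl_db.f1 AS 'com.example.udf.Upper' USING JAR 'hdfs:///repl/funcs/udfs.jar';
CREATE FUNCTION repl_db.f2 AS 'com.example.udf.Lower' USING JAR 'hdfs:///repl/funcs/udfs.jar';
{code}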



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16897) repl load does not lead to excessive memory consumption for multiple functions from same binary jar

2018-04-03 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424476#comment-16424476
 ] 

Vineet Garg commented on HIVE-16897:


Removing fix version to defer this to next release.

> repl load does not lead to excessive memory consumption for multiple 
> functions from same binary  jar
> 
>
> Key: HIVE-16897
> URL: https://issues.apache.org/jira/browse/HIVE-16897
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Major
>
> As part of function replication we currently keep a separate copy of the 
> binary jar associated with each function (the same thing happens on the primary 
> warehouse, since each HDFS jar location given during function creation downloads 
> the resource into a separate resource location, leading to the same jar being 
> included in the classpath multiple times).
> This leads to excessive space being used to keep all the jars on the classpath. 
> Solve this by identifying the common binary jar (using a checksum from the 
> primary on the replica) and not creating multiple copies, thus preventing 
> excessive memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16842) insert overwrite with select does not remove data when the select query returns empty resultset

2018-04-03 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16842:
---
Fix Version/s: (was: 3.0.0)

> insert overwrite with select does not remove data when the select query 
> returns empty resultset
> ---
>
> Key: HIVE-16842
> URL: https://issues.apache.org/jira/browse/HIVE-16842
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Priority: Major
>
> {code}create table address (city string);
> insert into address values ('b');
> create table empty_insert (city string);
> insert into empty_insert values ('a');
> insert overwrite table empty_insert select city from address where city='g';
> {code}
> empty_insert still contains 'a'; # should be nothing 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16842) insert overwrite with select does not remove data when the select query returns empty resultset

2018-04-03 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424474#comment-16424474
 ] 

Vineet Garg commented on HIVE-16842:


Removing fix version to defer this to next release.

> insert overwrite with select does not remove data when the select query 
> returns empty resultset
> ---
>
> Key: HIVE-16842
> URL: https://issues.apache.org/jira/browse/HIVE-16842
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Priority: Major
>
> {code}create table address (city string);
> insert into address values ('b');
> create table empty_insert (city string);
> insert into empty_insert values ('a');
> insert overwrite table empty_insert select city from address where city='g';
> {code}
> empty_insert still contains 'a'; # should be nothing 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16843) PrimaryToReplicaResourceFunctionTest.createDestinationPath fails with AssertionError

2018-04-03 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16843:
---
Fix Version/s: (was: 3.0.0)

> PrimaryToReplicaResourceFunctionTest.createDestinationPath fails with 
> AssertionError
> 
>
> Key: HIVE-16843
> URL: https://issues.apache.org/jira/browse/HIVE-16843
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
> Environment: # cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=14.04
> DISTRIB_CODENAME=trusty
> DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
> # uname -a
> Linux 9efcdb4d8880 3.19.0-37-generic #42-Ubuntu SMP Fri Nov 20 18:22:05 UTC 
> 2015 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Yussuf Shaikh
>Assignee: Yussuf Shaikh
>Priority: Minor
> Attachments: HIVE-16843.patch
>
>
> Stacktrace:
> java.lang.AssertionError: 
> Expected: is 
> "hdfs://somehost:9000/someBasePath/withADir/replicaDbName/somefunctionname/9223372036854775807/ab.jar"
>  but: was 
> "hdfs://somehost:9000/someBasePath/withADir/replicadbname/somefunctionname/0/ab.jar"
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:865)
>   at org.junit.Assert.assertThat(Assert.java:832)
>   at 
> org.apache.hadoop.hive.ql.parse.repl.load.message.PrimaryToReplicaResourceFunctionTest.createDestinationPath(PrimaryToReplicaResourceFunctionTest.java:82)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16843) PrimaryToReplicaResourceFunctionTest.createDestinationPath fails with AssertionError

2018-04-03 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424472#comment-16424472
 ] 

Vineet Garg commented on HIVE-16843:


Removing fix version to defer this to next release.  

> PrimaryToReplicaResourceFunctionTest.createDestinationPath fails with 
> AssertionError
> 
>
> Key: HIVE-16843
> URL: https://issues.apache.org/jira/browse/HIVE-16843
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
> Environment: # cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=14.04
> DISTRIB_CODENAME=trusty
> DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
> # uname -a
> Linux 9efcdb4d8880 3.19.0-37-generic #42-Ubuntu SMP Fri Nov 20 18:22:05 UTC 
> 2015 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Yussuf Shaikh
>Assignee: Yussuf Shaikh
>Priority: Minor
> Attachments: HIVE-16843.patch
>
>
> Stacktrace:
> java.lang.AssertionError: 
> Expected: is 
> "hdfs://somehost:9000/someBasePath/withADir/replicaDbName/somefunctionname/9223372036854775807/ab.jar"
>  but: was 
> "hdfs://somehost:9000/someBasePath/withADir/replicadbname/somefunctionname/0/ab.jar"
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:865)
>   at org.junit.Assert.assertThat(Assert.java:832)
>   at 
> org.apache.hadoop.hive.ql.parse.repl.load.message.PrimaryToReplicaResourceFunctionTest.createDestinationPath(PrimaryToReplicaResourceFunctionTest.java:82)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16718) Provide a way to pass in user supplied maven build and test arguments to Ptest

2018-04-03 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16718:
---
Fix Version/s: (was: 3.0.0)

> Provide a way to pass in user supplied maven build and test arguments to Ptest
> --
>
> Key: HIVE-16718
> URL: https://issues.apache.org/jira/browse/HIVE-16718
> Project: Hive
>  Issue Type: New Feature
>Reporter: Barna Zsombor Klara
>Assignee: Barna Zsombor Klara
>Priority: Minor
> Attachments: HIVE-16718.01.patch
>
>
> Currently we can only pass in maven build and test arguments from the 
> properties file, so all of them need to be hardcoded.
> We should find a way to pass in arguments from the command line.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16718) Provide a way to pass in user supplied maven build and test arguments to Ptest

2018-04-03 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424470#comment-16424470
 ] 

Vineet Garg commented on HIVE-16718:


Removing fix version to defer this to next release.

> Provide a way to pass in user supplied maven build and test arguments to Ptest
> --
>
> Key: HIVE-16718
> URL: https://issues.apache.org/jira/browse/HIVE-16718
> Project: Hive
>  Issue Type: New Feature
>Reporter: Barna Zsombor Klara
>Assignee: Barna Zsombor Klara
>Priority: Minor
> Attachments: HIVE-16718.01.patch
>
>
> Currently we can only pass in maven build and test arguments from the 
> properties file, so all of them need to be hardcoded.
> We should find a way to pass in arguments from the command line.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18839) Implement incremental rebuild for materialized views (only insert operations in source tables)

2018-04-03 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18839:
---
Attachment: HIVE-18839.01.patch

> Implement incremental rebuild for materialized views (only insert operations 
> in source tables)
> --
>
> Key: HIVE-18839
> URL: https://issues.apache.org/jira/browse/HIVE-18839
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: TODOC3.0
> Attachments: HIVE-18839.01.patch, HIVE-18839.patch
>
>
> Implementation will follow current code path for full rebuild. 
> When the MV query plan is retrieved, if the MV contents are outdated because 
> there were insert operations in the source tables, we will introduce a filter 
> with a condition based on stored value of ValidWriteIdLists. For instance, 
> {{WRITE_ID < high_txn_id AND WRITE_ID NOT IN (x, y, ...)}}. Then the 
> rewriting will do the rest of the work by creating a partial rewriting, where 
> the contents of the MV are read as well as the new contents from the source 
> tables.
> This mechanism will work not only for ALTER MV... REBUILD, but also for user 
> queries, which will be able to benefit from using outdated MVs to compute part 
> of the needed results.
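
To make the filter concrete, a rough sketch of the partial rewriting the description implies; 
mat_view, source_tab, high_txn_id and (x, y) are placeholders, not the actual generated plan:
{code}
SELECT * FROM mat_view                 -- current (outdated) MV contents
UNION ALL
SELECT col1, col2
FROM source_tab                        -- only rows inserted since the last rebuild
WHERE write_id < high_txn_id
  AND write_id NOT IN (x, y);
{code}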



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18839) Implement incremental rebuild for materialized views (only insert operations in source tables)

2018-04-03 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424467#comment-16424467
 ] 

Jesus Camacho Rodriguez commented on HIVE-18839:


Rebased patch.

> Implement incremental rebuild for materialized views (only insert operations 
> in source tables)
> --
>
> Key: HIVE-18839
> URL: https://issues.apache.org/jira/browse/HIVE-18839
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: TODOC3.0
> Attachments: HIVE-18839.01.patch, HIVE-18839.patch
>
>
> Implementation will follow current code path for full rebuild. 
> When the MV query plan is retrieved, if the MV contents are outdated because 
> there were insert operations in the source tables, we will introduce a filter 
> with a condition based on stored value of ValidWriteIdLists. For instance, 
> {{WRITE_ID < high_txn_id AND WRITE_ID NOT IN (x, y, ...)}}. Then the 
> rewriting will do the rest of the work by creating a partial rewriting, where 
> the contents of the MV are read as well as the new contents from the source 
> tables.
> This mechanism will work not only for ALTER MV... REBUILD, but also for user 
> queries, which will be able to benefit from using outdated MVs to compute part 
> of the needed results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16612) PerfLogger is configurable, but not extensible

2018-04-03 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16612:
---
Fix Version/s: (was: 3.0.0)

> PerfLogger is configurable, but not extensible
> --
>
> Key: HIVE-16612
> URL: https://issues.apache.org/jira/browse/HIVE-16612
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning, Query Processor
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
>Priority: Minor
> Attachments: HIVE-16612.01.patch, HIVE-16612.02.patch
>
>
> {code}
>   result = (PerfLogger) 
> ReflectionUtils.newInstance(conf.getClassByName(
> conf.getVar(HiveConf.ConfVars.HIVE_PERF_LOGGER)), conf);
> {code}
> The PerfLogger instance is configurable via {{hive.exec.perf.logger}} 
> (HIVE-11891), but the requirement to extend {{PerfLogger}} cannot be met 
> since HIVE-11149, as the ctor is private. Useful methods in PerfLogger are 
> also private. I tried to extend PerfLogger for my needs and realized that, 
> as is, the configurability is not usable. At the very least PerfLogger 
> should make all private members {{protected}}; better yet, the requirement 
> should be an interface, not a class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16612) PerfLogger is configurable, but not extensible

2018-04-03 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424468#comment-16424468
 ] 

Vineet Garg commented on HIVE-16612:


Removing fix version to defer this to next release.

> PerfLogger is configurable, but not extensible
> --
>
> Key: HIVE-16612
> URL: https://issues.apache.org/jira/browse/HIVE-16612
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning, Query Processor
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
>Priority: Minor
> Attachments: HIVE-16612.01.patch, HIVE-16612.02.patch
>
>
> {code}
>   result = (PerfLogger) 
> ReflectionUtils.newInstance(conf.getClassByName(
> conf.getVar(HiveConf.ConfVars.HIVE_PERF_LOGGER)), conf);
> {code}
> The PerfLogger instance is configurable via {{hive.exec.perf.logger}} 
> (HIVE-11891), but the requirement to extend {{PerfLogger}} cannot be met 
> since HIVE-11149, as the ctor is private. Useful methods in PerfLogger are 
> also private. I tried to extend PerfLogger for my needs and realized that, 
> as is, the configurability is not usable. At the very least PerfLogger 
> should make all private members {{protected}}; better yet, the requirement 
> should be an interface, not a class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-03 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18910:
--
Attachment: HIVE-18910.20.patch

> Migrate to Murmur hash for shuffle and bucketing
> 
>
> Key: HIVE-18910
> URL: https://issues.apache.org/jira/browse/HIVE-18910
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, 
> HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.13.patch, 
> HIVE-18910.14.patch, HIVE-18910.15.patch, HIVE-18910.16.patch, 
> HIVE-18910.17.patch, HIVE-18910.18.patch, HIVE-18910.19.patch, 
> HIVE-18910.2.patch, HIVE-18910.20.patch, HIVE-18910.3.patch, 
> HIVE-18910.4.patch, HIVE-18910.5.patch, HIVE-18910.6.patch, 
> HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch
>
>
> Hive uses the Java hash, which is not as good as Murmur for distribution 
> and efficiency when bucketing a table.
> Migrate to Murmur hash but still keep backward compatibility for existing 
> users so that they don't have to reload their existing tables.
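
For context, a purely illustrative bucketed table: bucket assignment is hash(id) modulo the 
bucket count, so changing the default hash changes which bucket file each row lands in, which 
is why existing data must stay readable without a reload.
{code}
-- Rows are routed to one of 8 bucket files based on hash(id) % 8; tables
-- written with the old Java hash must keep being read with that hash.
CREATE TABLE bucketed_src (id INT, name STRING)
CLUSTERED BY (id) INTO 8 BUCKETS
STORED AS ORC;
{code}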



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19100:
--
Attachment: HIVE-19100.02.patch

> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch, HIVE-19100.02.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19100:
--
Status: Patch Available  (was: Open)

> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch, HIVE-19100.02.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19100:
--
Status: Open  (was: Patch Available)

> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16949) Leak of threads from Get-Input-Paths and Get-Input-Summary thread pool

2018-04-03 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424458#comment-16424458
 ] 

Thejas M Nair commented on HIVE-16949:
--

[~stakiar]
This is missing the fix version; can you please add it?


> Leak of threads from Get-Input-Paths and Get-Input-Summary thread pool
> --
>
> Key: HIVE-16949
> URL: https://issues.apache.org/jira/browse/HIVE-16949
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Birger Brunswiek
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-16949.1.patch
>
>
> The commit 
> [20210de|https://github.com/apache/hive/commit/20210dec94148c9b529132b1545df3dd7be083c3]
>  which was part of HIVE-15546 [introduced a thread 
> pool|https://github.com/apache/hive/blob/824b9c80b443dc4e2b9ad35214a23ac756e75234/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L3109]
>  which is not shutdown upon completion of its threads. This leads to a leak 
> of threads for each query which uses more than 1 partition. They are not 
> removed automatically. When queries spanning multiple partitions are made the 
> number of threads increases and is never reduced. On my machine hiveserver2 
> starts to get slower and slower once 10k threads are reached.
> Thread pools only shut down automatically in special circumstances (see 
> [documentation section 
> _Finalization_|https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadPoolExecutor.html]).
>  This is not currently the case for the Get-Input-Paths thread pool. I would 
> add a _pool.shutdown()_ in a finally block just before returning the result 
> to make sure the threads are really shut down.
> My current workaround is to set {{hive.exec.input.listing.max.threads = 1}}. 
> This prevents the thread pool from being spawned 
> [\[1\]|https://github.com/apache/hive/blob/824b9c80b443dc4e2b9ad35214a23ac756e75234/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L2118]
>  
> [\[2\]|https://github.com/apache/hive/blob/824b9c80b443dc4e2b9ad35214a23ac756e75234/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L3107].
> The same issue probably also applies to the [Get-Input-Summary thread 
> pool|https://github.com/apache/hive/blob/824b9c80b443dc4e2b9ad35214a23ac756e75234/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L2193].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19083) Make partition clause optional for INSERT

2018-04-03 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19083:
---
Attachment: HIVE-19083.4.patch

> Make partition clause optional for INSERT
> -
>
> Key: HIVE-19083
> URL: https://issues.apache.org/jira/browse/HIVE-19083
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19083.1.patch, HIVE-19083.2.patch, 
> HIVE-19083.3.patch, HIVE-19083.4.patch
>
>
> Partition clause should be optional for
>  * INSERT INTO VALUES
>  * INSERT OVERWRITE
>  * INSERT SELECT
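
A sketch of the intended usage with made-up table names; the partition value comes from the 
trailing column of the VALUES/SELECT list, as with dynamic partitioning:
{code}
CREATE TABLE t (a INT) PARTITIONED BY (p STRING);

-- today a partition clause is required:
INSERT INTO t PARTITION (p) VALUES (1, 'x');

-- with this change the clause can be omitted:
INSERT INTO t VALUES (1, 'x');
INSERT OVERWRITE TABLE t SELECT a, p FROM src;
{code}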



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19083) Make partition clause optional for INSERT

2018-04-03 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19083:
---
Status: Patch Available  (was: Open)

> Make partition clause optional for INSERT
> -
>
> Key: HIVE-19083
> URL: https://issues.apache.org/jira/browse/HIVE-19083
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19083.1.patch, HIVE-19083.2.patch, 
> HIVE-19083.3.patch, HIVE-19083.4.patch
>
>
> Partition clause should be optional for
>  * INSERT INTO VALUES
>  * INSERT OVERWRITE
>  * INSERT SELECT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19083) Make partition clause optional for INSERT

2018-04-03 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19083:
---
Status: Open  (was: Patch Available)

> Make partition clause optional for INSERT
> -
>
> Key: HIVE-19083
> URL: https://issues.apache.org/jira/browse/HIVE-19083
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19083.1.patch, HIVE-19083.2.patch, 
> HIVE-19083.3.patch, HIVE-19083.4.patch
>
>
> Partition clause should be optional for
>  * INSERT INTO VALUES
>  * INSERT OVERWRITE
>  * INSERT SELECT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18976) Add ability to setup Druid Kafka Ingestion from Hive

2018-04-03 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424446#comment-16424446
 ] 

Ashutosh Chauhan commented on HIVE-18976:
-

+1

> Add ability to setup Druid Kafka Ingestion from Hive
> 
>
> Key: HIVE-18976
> URL: https://issues.apache.org/jira/browse/HIVE-18976
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-18976.03.patch, HIVE-18976.04.patch, 
> HIVE-18976.patch
>
>
> Add Ability to setup druid kafka Ingestion using Hive CREATE TABLE statement
> e.g. Below query can submit a kafka supervisor spec to the druid overlord and 
> druid can start ingesting events from kafka. 
> {code:java}
>  
> CREATE TABLE druid_kafka_test(`__time` timestamp, page string, language 
> string, `user` string, added int, deleted int, delta int)
> STORED BY 
> 'org.apache.hadoop.hive.druid.DruidKafkaStreamingStorageHandler'
> TBLPROPERTIES (
> "druid.segment.granularity" = "HOUR",
> "druid.query.granularity" = "MINUTE",
> "kafka.bootstrap.servers" = "localhost:9092",
> "kafka.topic" = "test-topic",
> "druid.kafka.ingest.useEarliestOffset" = "true"
> );
> {code}
> Design - This can be done via a DruidKafkaStreamingStorageHandler that 
> extends existing DruidStorageHandler and adds the additional functionality 
> for Streaming. 
> Testing - Add a DruidKafkaMiniCluster which will consist of DruidMiniCluster 
> + Single Node Kafka Broker. The broker can be populated with a test topic 
> that has some predefined data. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424436#comment-16424436
 ] 

Hive QA commented on HIVE-19064:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} common: The patch generated 0 new + 427 unchanged - 
1 fixed = 427 total (was 428) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
52s{color} | {color:red} ql: The patch generated 7 new + 769 unchanged - 8 
fixed = 776 total (was 777) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} standalone-metastore: The patch generated 0 new + 
562 unchanged - 3 fixed = 562 total (was 565) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 77 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 50 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9978/dev-support/hive-personality.sh
 |
| git revision | master / fdc1e1f |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9978/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9978/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9978/yetus/patch-asflicense-problems.txt
 |
| modules | C: common itests ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9978/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch
>
>
> As per the SQL standard. Hive currently uses `` (backticks). The default will 
> continue being backticks, but we will support identifiers 
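
As an illustration of the behaviour being added (the configuration that enables the mode is 
defined by the patch and is not named here):
{code}
-- today: backtick-quoted identifiers
SELECT `select`, `table` FROM t1;

-- with the new mode: SQL-standard double-quoted identifiers
SELECT "select", "table" FROM t1;
{code}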

[jira] [Commented] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424435#comment-16424435
 ] 

Eugene Koifman commented on HIVE-19100:
---

I looked at 2 of the tests: testMultipleTransactionBatchCommits and 
testTransactionBatchAbortAndCommit.
In both, the difference is that the writeIds in the delta names differ from what 
is expected.  This could be due to an additional write to the test table before 
the failing check, or something else consuming a write id - I can't tell what 
caused the change.



> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18814) Support Add Partition For Acid tables

2018-04-03 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424427#comment-16424427
 ] 

Eugene Koifman commented on HIVE-18814:
---

I filed HIVE-19100 to follow up on tests

> Support Add Partition For Acid tables
> -
>
> Key: HIVE-18814
> URL: https://issues.apache.org/jira/browse/HIVE-18814
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18814.01.patch, HIVE-18814.02.patch, 
> HIVE-18814.03.patch, HIVE-18814.04.patch
>
>
> [https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]
> Add Partition command creates a {{Partition}} metadata object and sets the 
> location to the directory containing data files.
> In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
> at read time the data is decorated with row__id but the original transaction 
> is 0.  I suspect in earlier Hive versions this will throw or return no data.
> Since this new partition didn't have data before, assigning txnid:0 isn't 
> going to generate duplicate IDs but it could violate Snapshot Isolation in 
> multi stmt txns.  Suppose txnid:7 runs {{select * from T}}.  Then txnid:8 
> adds a partition to T.  Now if txnid:7 runs the same query again, it will see 
> the data in the new partition.
> This can't be released like this, since a delete on this data (added via Add 
> Partition) will use row_ids with txnid:0, so a later upgrade that sees the 
> un-compacted data may generate row_ids with a different txnid (assuming this 
> is fixed by then).
>  
> One option is follow Load Data approach and create a new delta_x_x/ and 
> move/copy the data there.
>  
> Another is to allocate a new writeid and save it in Partition metadata.  This 
> could then be used to decorate data with ROW__IDs.  This avoids move/copy but 
> retains data "outside" of the table tree which make it more likely that this 
> data will be modified in some way which can really break things if done after 
> and SQL update/delete on this data have happened. 
>  
> It performs no validations on add (except for partition spec) so any file 
> with any format can be added.  It allows add to bucketed tables as well.
> Seems like a very dangerous command.  Maybe a better option is to block it 
> and advise using Load Data.  Alternatively, make this do Add partition 
> metadata op followed by Load Data. 
>  
>  
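
A sketch of the two options discussed above, with example paths and table names:
{code}
-- Current behaviour: the partition simply points at existing files, which are
-- then read with row__ids based on txnid/writeid 0.
ALTER TABLE acid_t ADD PARTITION (p='1') LOCATION '/ext/data/p=1';

-- Suggested safer route: add the partition as metadata only, then Load Data so
-- the files are moved/copied into a proper delta_x_x with a real write id.
ALTER TABLE acid_t ADD PARTITION (p='1');
LOAD DATA INPATH '/ext/data/p=1' INTO TABLE acid_t PARTITION (p='1');
{code}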



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19100:
--
Attachment: HIVE-19100.01.patch

> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19100:
--
Status: Patch Available  (was: Open)

> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19100.01.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19100:
--
Description: 
{noformat}
[ERROR] Failures: 
[ERROR]   
TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
 expected:<11> but was:<12>
[ERROR]   
TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
expected:<1> but was:<2>
[ERROR]   
TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
 expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
 expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
 expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
 expected:<1> but was:<3>
[INFO] 
[ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0

{noformat}


  was:
[ERROR] Failures: 
[ERROR]   
TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
 expected:<11> but was:<12>
[ERROR]   
TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
expected:<1> but was:<2>
[ERROR]   
TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
 expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
 expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
 expected:<1> but was:<3>
[ERROR]   
TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
 expected:<1> but was:<3>
[INFO] 
[ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0



> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19100) investigate TestStreaming failures

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-19100:
-


> investigate TestStreaming failures
> --
>
> Key: HIVE-19100
> URL: https://issues.apache.org/jira/browse/HIVE-19100
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>
> [ERROR] Failures: 
> [ERROR]   
> TestStreaming.testInterleavedTransactionBatchCommits:1218->checkDataWritten2:619
>  expected:<11> but was:<12>
> [ERROR]   
> TestStreaming.testMultipleTransactionBatchCommits:1157->checkDataWritten2:619 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchAbortAndCommit:1138->checkDataWritten:566 
> expected:<1> but was:<2>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Delimited:861->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_DelimitedUGI:865->testTransactionBatchCommit_Delimited:881->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Json:1011->checkDataWritten:566 
> expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_Regex:928->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [ERROR]   
> TestStreaming.testTransactionBatchCommit_RegexUGI:932->testTransactionBatchCommit_Regex:949->checkDataWritten:566
>  expected:<1> but was:<3>
> [INFO] 
> [ERROR] Tests run: 26, Failures: 8, Errors: 0, Skipped: 0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18814) Support Add Partition For Acid tables

2018-04-03 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424411#comment-16424411
 ] 

Alan Gates commented on HIVE-18814:
---

(cd hcatalog/streaming; mvn test) passes for me both on Linux and Mac.  Odd.  
We really need to fix all these tests.

> Support Add Partition For Acid tables
> -
>
> Key: HIVE-18814
> URL: https://issues.apache.org/jira/browse/HIVE-18814
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18814.01.patch, HIVE-18814.02.patch, 
> HIVE-18814.03.patch, HIVE-18814.04.patch
>
>
> [https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]
> Add Partition command creates a {{Partition}} metadata object and sets the 
> location to the directory containing data files.
> In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
> at read time the data is decorated with row__id but the original transaction 
> is 0.  I suspect in earlier Hive versions this will throw or return no data.
> Since this new partition didn't have data before, assigning txnid:0 isn't 
> going to generate duplicate IDs but it could violate Snapshot Isolation in 
> multi stmt txns.  Suppose txnid:7 runs {{select * from T}}.  Then txnid:8 
> adds a partition to T.  Now if txnid:7 runs the same query again, it will see 
> the data in the new partition.
> This can't be released like this since a delete on this data (added via Add 
> partition) will use row_ids with txnid:0, so a later upgrade that sees 
> un-compacted data may generate row_ids with a different txnid (assuming this is 
> fixed by then).
>  
> One option is to follow the Load Data approach and create a new delta_x_x/ and 
> move/copy the data there.
>  
> Another is to allocate a new writeid and save it in Partition metadata.  This 
> could then be used to decorate data with ROW__IDs.  This avoids move/copy but 
> retains data "outside" of the table tree, which makes it more likely that this 
> data will be modified in some way, which can really break things if done after 
> an SQL update/delete on this data has happened. 
>  
> It performs no validations on add (except for partition spec) so any file 
> with any format can be added.  It allows add to bucketed tables as well.
> Seems like a very dangerous command.  Maybe a better option is to block it 
> and advise using Load Data.  Alternatively, make this do Add partition 
> metadata op followed by Load Data. 
>  
>  
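To make the two approaches above concrete, here is a minimal sketch over the Hive JDBC driver (the table name acid_t, the partition spec, the file paths, and the HiveServer2 URL are illustrative assumptions, not taken from the patch):

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddPartitionVsLoadData {
  public static void main(String[] args) throws Exception {
    // Assumes hive-jdbc is on the classpath and HiveServer2 listens on the default port.
    try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement()) {

      // ADD PARTITION is a metadata-only operation: it points the new partition at
      // files that already exist, so on an ACID table those rows come back decorated
      // with ROW__IDs carrying transaction/write id 0, as described above.
      stmt.execute("ALTER TABLE acid_t ADD PARTITION (ds='2018-04-03') "
          + "LOCATION '/warehouse/external/ds=2018-04-03'");

      // LOAD DATA instead moves/copies the files into a new delta_x_x/ directory owned
      // by a freshly allocated write id, which keeps ROW__IDs and snapshot isolation
      // consistent -- the alternative the description recommends.
      stmt.execute("LOAD DATA INPATH '/staging/ds=2018-04-03' "
          + "INTO TABLE acid_t PARTITION (ds='2018-04-03')");
    }
  }
}
{noformat}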



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18814) Support Add Partition For Acid tables

2018-04-03 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18814:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

committed to master

> Support Add Partition For Acid tables
> -
>
> Key: HIVE-18814
> URL: https://issues.apache.org/jira/browse/HIVE-18814
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18814.01.patch, HIVE-18814.02.patch, 
> HIVE-18814.03.patch, HIVE-18814.04.patch
>
>
> [https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]
> Add Partition command creates a {{Partition}} metadata object and sets the 
> location to the directory containing data files.
> In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
> at read time the data is decorated with row__id but the original transaction 
> is 0.  I suspect in earlier Hive versions this will throw or return no data.
> Since this new partition didn't have data before, assigning txnid:0 isn't 
> going to generate duplicate IDs but it could violate Snapshot Isolation in 
> multi stmt txns.  Suppose txnid:7 runs {{select * from T}}.  Then txnid:8 
> adds a partition to T.  Now if txnid:7 runs the same query again, it will see 
> the data in the new partition.
> This can't be released like this since a delete on this data (added via Add 
> partition) will use row_ids with txnid:0, so a later upgrade that sees 
> un-compacted data may generate row_ids with a different txnid (assuming this is 
> fixed by then).
>  
> One option is to follow the Load Data approach and create a new delta_x_x/ and 
> move/copy the data there.
>  
> Another is to allocate a new writeid and save it in Partition metadata.  This 
> could then be used to decorate data with ROW__IDs.  This avoids move/copy but 
> retains data "outside" of the table tree, which makes it more likely that this 
> data will be modified in some way, which can really break things if done after 
> an SQL update/delete on this data has happened. 
>  
> It performs no validations on add (except for partition spec) so any file 
> with any format can be added.  It allows add to bucketed tables as well.
> Seems like a very dangerous command.  Maybe a better option is to block it 
> and advise using Load Data.  Alternatively, make this do Add partition 
> metadata op followed by Load Data. 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18877) HiveSchemaTool.validateSchemaTables() should wrap a SQLException when rethrowing

2018-04-03 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424398#comment-16424398
 ] 

Andrew Sherman commented on HIVE-18877:
---

Thanks [~vihangk1]

> HiveSchemaTool.validateSchemaTables() should wrap a SQLException when 
> rethrowing
> 
>
> Key: HIVE-18877
> URL: https://issues.apache.org/jira/browse/HIVE-18877
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HIVE-18877.1.patch, HIVE-18877.2.patch, 
> HIVE-18877.3.patch, HIVE-18877.4.patch, HIVE-18877.5.patch, HIVE-18877.6.patch
>
>
> If schematool is run with the -verbose flag then it will print a stack trace 
> for an exception that occurs. If a SQLException is caught during 
> HiveSchemaTool.validateSchemaTables() then a HiveMetaException is rethrown 
> containing the text of the SQLException. If we instead throw a 
> HiveMetaException that wraps the SQLException, then the stack trace will help 
> with diagnosis of issues where the SQLException contains only generic error 
> text. 
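A minimal sketch of the before/after difference being described (the exception class below is an illustrative stand-in, not the real org.apache.hadoop.hive.metastore class, and the message text is invented for the example):

{noformat}
import java.sql.SQLException;

// Illustrative stand-in with the usual (String) and (String, Throwable) constructors.
class HiveMetaException extends Exception {
  HiveMetaException(String message) { super(message); }
  HiveMetaException(String message, Throwable cause) { super(message, cause); }
}

public class WrapSqlExceptionExample {
  // Before: only the SQLException's message text survives, so the stack trace printed
  // under -verbose stops here and the JDBC driver's frames are lost.
  static void rethrowWithoutCause(SQLException e) throws HiveMetaException {
    throw new HiveMetaException("Schema table validation failed: " + e.getMessage());
  }

  // After: the SQLException is kept as the cause, so the full chained stack trace is
  // available even when the SQL error text alone is too generic to diagnose.
  static void rethrowWithCause(SQLException e) throws HiveMetaException {
    throw new HiveMetaException("Schema table validation failed", e);
  }
}
{noformat}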



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18877) HiveSchemaTool.validateSchemaTables() should wrap a SQLException when rethrowing

2018-04-03 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-18877:
---
   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Also, changed the dependency scope to test for 
commons-dbcp in the itests/hive-unit/pom.xml. Thanks for your contribution 
[~asherman]

> HiveSchemaTool.validateSchemaTables() should wrap a SQLException when 
> rethrowing
> 
>
> Key: HIVE-18877
> URL: https://issues.apache.org/jira/browse/HIVE-18877
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HIVE-18877.1.patch, HIVE-18877.2.patch, 
> HIVE-18877.3.patch, HIVE-18877.4.patch, HIVE-18877.5.patch, HIVE-18877.6.patch
>
>
> If schematool is run with the -verbose flag then it will print a stack trace 
> for an exception that occurs. If a SQLException is caught during 
> HiveSchemaTool.validateSchemaTables() then a HiveMetaException is rethrown 
> containing the text of the SQLException. If we instead throw a 
> HiveMetaException that wraps the SQLException, then the stack trace will help 
> with diagnosis of issues where the SQLException contains only generic error 
> text. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18839) Implement incremental rebuild for materialized views (only insert operations in source tables)

2018-04-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424383#comment-16424383
 ] 

Hive QA commented on HIVE-18839:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12916889/HIVE-18839.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9977/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9977/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9977/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-03 18:10:05.312
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-9977/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-03 18:10:05.315
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   ec17339..fdc1e1f  master -> origin/master
+ git reset --hard HEAD
HEAD is now at ec17339 HIVE-18955: HoS: Unable to create Channel from class 
NioServerSocketChannel (Rui reviewed by Jesus Camacho Rodriguez and Sahil 
Takiar)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at fdc1e1f HIVE-18814 - Support Add Partition For Acid tables 
(Eugene Koifman, reviewed by Alan Gates)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-03 18:10:09.444
+ rm -rf ../yetus_PreCommit-HIVE-Build-9977
+ mkdir ../yetus_PreCommit-HIVE-Build-9977
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-9977
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9977/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/MaterializedViewTask.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not 
exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMaterializedViewsRegistry.java:
 does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java: does 
not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/MaterializedViewRebuildSemanticAnalyzer.java:
 does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: 
does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFGetSplits.java: 
does not exist in index
error: 
a/ql/src/test/queries/clientpositive/materialized_view_create_rewrite_4.q: does 
not exist in index
error: 
a/ql/src/test/results/clientpositive/materialized_view_create_rewrite_3.q.out: 
does not exist in index
error: 
a/ql/src/test/results/clientpositive/materialized_view_create_rewrite_4.q.out: 
does not exist in index
error: a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp: 
does not exist in index
error: a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h: 
does not exist in index
error: 

[jira] [Commented] (HIVE-19092) Somne improvement in bin shell scripts

2018-04-03 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424384#comment-16424384
 ] 

Alan Gates commented on HIVE-19092:
---

rcfilecat doesn't need the --service argument, as there is a specific check for 
--rcfilecat in bin/hive.

 

> Somne improvement in bin shell scripts
> --
>
> Key: HIVE-19092
> URL: https://issues.apache.org/jira/browse/HIVE-19092
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Minor
> Attachments: HIVE-19092.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19084) Test case in Hive Query Language fails with a java.lang.AssertionError.

2018-04-03 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom reassigned HIVE-19084:
-

Assignee: Steve Yeom

> Test case in Hive Query Language fails with a java.lang.AssertionError.
> ---
>
> Key: HIVE-19084
> URL: https://issues.apache.org/jira/browse/HIVE-19084
> Project: Hive
>  Issue Type: Bug
>  Components: Test, Transactions
> Environment: uname -a
> Linux pts00607-vm3 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:46 UTC 
> 2018 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Alisha Prabhu
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-19084.1.patch
>
>
> The test case testInsertOverwriteForPartitionedMmTable in 
> TestTxnCommandsForMmTable.java and TestTxnCommandsForOrcMmTable.java fails 
> with a java.lang.AssertionError.
> Maven command used is mvn 
> -Dtest=TestTxnCommandsForMmTable#testInsertOverwriteForPartitionedMmTable test
> The test case fails because the FileSystem's listStatus function does not 
> guarantee that the returned list of file/directory statuses is in sorted order.
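A sketch of the kind of change this implies on the test side: impose a deterministic order on the listStatus result before asserting on it (the path and the comparator choice are assumptions for illustration; FileSystem and FileStatus are the regular Hadoop classes):

{noformat}
import java.util.Arrays;
import java.util.Comparator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStatusOrdering {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // listStatus makes no ordering guarantee, so a test that indexes into the result
    // (statuses[0], statuses[1], ...) can pass on one platform and fail on another.
    FileStatus[] statuses = fs.listStatus(new Path("/tmp/warehouse/t/p=1"));

    // Sorting by path name before asserting makes the expectation independent of the
    // underlying FileSystem implementation.
    Arrays.sort(statuses, Comparator.comparing((FileStatus s) -> s.getPath().getName()));

    for (FileStatus status : statuses) {
      System.out.println(status.getPath());
    }
  }
}
{noformat}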



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19099) HIVE-18755 forgot to update derby install script in metastore

2018-04-03 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-19099:
--
Status: Patch Available  (was: Open)

> HIVE-18755 forgot to update derby install script in metastore
> -
>
> Key: HIVE-19099
> URL: https://issues.apache.org/jira/browse/HIVE-19099
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HIVE-19099.patch
>
>
> metastore/scripts/upgrade/derby/hive-schema-3.0 was not properly updated 
> with the new and changed tables for catalogs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19099) HIVE-18755 forgot to update derby install script in metastore

2018-04-03 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-19099:
--
Attachment: HIVE-19099.patch

> HIVE-18755 forgot to update derby install script in metastore
> -
>
> Key: HIVE-19099
> URL: https://issues.apache.org/jira/browse/HIVE-19099
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HIVE-19099.patch
>
>
> metastore/scripts/upgrade/derby/hive-schema-3.0 was not properly updated 
> with the new and changed tables for catalogs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

