[jira] [Commented] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719872#comment-16719872
 ] 

Hive QA commented on HIVE-21022:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m  
8s{color} | {color:blue} standalone-metastore/metastore-common in master has 29 
extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
5s{color} | {color:blue} standalone-metastore/metastore-server in master has 
188 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15294/dev-support/hive-personality.sh
 |
| git revision | master / a43581b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15294/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04, HIVE-21022.05, HIVE-21022.05
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> the error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests are using the same root namespace, and hence the reason 
> for this failure could be that the root namespace becomes unavailable to one 
> test when the other drops it. The drop seems to be happening automatically 
> through TestingServer code.

[jira] [Commented] (HIVE-16957) Support CTAS for auto gather column stats

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719846#comment-16719846
 ] 

Hive QA commented on HIVE-16957:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951582/HIVE-16957.01.patch

{color:green}SUCCESS:{color} +1 due to 29 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 42 failed/errored test(s), 15570 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input1_limit] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input3_limit] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_part10] (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert2_overwrite_partitions]
 (batchId=95)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into1] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into2] 
(batchId=96)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into3] 
(batchId=29)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into4] 
(batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into5] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into6] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[limit_pushdown_negative] 
(batchId=43)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part14] 
(batchId=97)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge4] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonreserved_keywords_insert_into1]
 (batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udtf_explode] 
(batchId=55)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[insert_into1] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[insert_into2] 
(batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction_3]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert1_overwrite_partitions]
 (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_into_default_keyword]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid2] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[runtime_stats_merge]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[semijoin_hint]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge] 
(batchId=180)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge_stats]
 (batchId=179)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_select]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_nway_join]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr_2]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_like_2]
 (batchId=180)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf2]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3]
 (batchId=160)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=128)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query70] 
(batchId=272)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query70] 
(batchId=270)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[query70]
 (batchId=270)
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.testImpersonation 
(batchId=254)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15293/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15293/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15293/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase

[jira] [Commented] (HIVE-16957) Support CTAS for auto gather column stats

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719824#comment-16719824
 ] 

Hive QA commented on HIVE-16957:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
45s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 4 new + 565 unchanged - 5 
fixed = 569 total (was 570) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
50s{color} | {color:red} ql generated 1 new + 2308 unchanged - 2 fixed = 2309 
total (was 2310) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
59s{color} | {color:red} ql generated 2 new + 98 unchanged - 2 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  
org.apache.hadoop.hive.ql.parse.ColumnStatsSemanticAnalyzer.genPartitionClause(Table,
 Map) makes inefficient use of keySet iterator instead of entrySet iterator  At 
ColumnStatsSemanticAnalyzer.java:of keySet iterator instead of entrySet 
iterator  At ColumnStatsSemanticAnalyzer.java:[line 160] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15293/dev-support/hive-personality.sh
 |
| git revision | master / a43581b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15293/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15293/yetus/new-findbugs-ql.html
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15293/yetus/diff-javadoc-javadoc-ql.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15293/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.
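For reference, a minimal, self-contained sketch of the keySet-vs-entrySet pattern that the FindBugs warning above (about ColumnStatsSemanticAnalyzer.genPartitionClause) refers to. The class, map contents, and variable names here are hypothetical illustrations, not the actual Hive code:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: iterating keySet() forces an extra get() lookup per key,
// while entrySet() hands back key and value together in a single pass.
public class EntrySetExample {
  public static void main(String[] args) {
    Map<String, String> partSpec = new LinkedHashMap<>();
    partSpec.put("ds", "2018-12-12");
    partSpec.put("hr", "11");

    // Pattern FindBugs flags: keySet iterator plus a lookup for every key.
    StringBuilder slow = new StringBuilder();
    for (String key : partSpec.keySet()) {
      slow.append(key).append('=').append(partSpec.get(key)).append(' ');
    }

    // Preferred pattern: entrySet iterator, no extra lookups.
    StringBuilder fast = new StringBuilder();
    for (Map.Entry<String, String> e : partSpec.entrySet()) {
      fast.append(e.getKey()).append('=').append(e.getValue()).append(' ');
    }

    System.out.println(slow + "| " + fast);
  }
}
{code}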



> Support CTAS for auto gather column stats
> -
>
> Key: HIVE-16957
> URL: https://issues.apache.org/jira/browse/HIVE-16957
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-16957.01.patch, HIVE-16957.patch
>
>
> 

[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Status: Patch Available  (was: Open)

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch, HIVE-21032.02.patch, 
> HIVE-21032.03.patch
>
>
> HiveMetaTool does everything in one class; it needs to be refactored into a 
> cleaner design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Status: Open  (was: Patch Available)

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch, HIVE-21032.02.patch, 
> HIVE-21032.03.patch
>
>
> HiveMetaTool does everything in one class; it needs to be refactored into a 
> cleaner design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Attachment: HIVE-21032.03.patch

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch, HIVE-21032.02.patch, 
> HIVE-21032.03.patch
>
>
> HiveMetaTool does everything in one class; it needs to be refactored into a 
> cleaner design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21028) get_table_meta should use a fetch plan to avoid race conditions ending up in NucleusObjectNotFoundException

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719789#comment-16719789
 ] 

Hive QA commented on HIVE-21028:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951580/HIVE-21028.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 15571 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=259)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15292/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15292/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15292/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951580 - PreCommit-HIVE-Build

> get_table_meta should use a fetch plan to avoid race conditions ending up in 
> NucleusObjectNotFoundException
> ---
>
> Key: HIVE-21028
> URL: https://issues.apache.org/jira/browse/HIVE-21028
> Project: Hive
>  Issue Type: Bug
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Major
> Attachments: HIVE-21028.1.patch
>
>
> The {{getTableMeta}} call retrieves the tables and, while looping through 
> them, retrieves each table's database object to get the containing database 
> name. DataNucleus retrieves lazily, so the first call, which fetches all the 
> tables, does not retrieve the database objects.
> When this query is executed
> {code}query = pm.newQuery(MTable.class, filterBuilder.toString());
> {code}
> it loads all the tables, and when you do
> {code}
> table.getDatabase().getName()
> {code}
> it then goes and retrieves the database object.
> *However*, another thread may have deleted the database in the meantime! If 
> this happens, we end up with exceptions such as
> {code}
> 2018-12-04 22:25:06,525 INFO  DataNucleus.Datastore.Retrieve: 
> [pool-7-thread-191]: Object with id 
> "6930391[OID]org.apache.hadoop.hive.metastore.model.MTable" not found !
> 2018-12-04 22:25:06,527 WARN  DataNucleus.Persistence: [pool-7-thread-191]: 
> Exception thrown by StateManager.isLoaded
> No such database row
> org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database 
> row
> {code}
> We see this happen especially with calls which retrieve all the tables in all 
> the databases (basically a call to get_table_meta with dbNames="\*" and 
> tableNames="\*").
> To avoid this, we can define a custom fetch plan and activate it only for the 
> get_table_meta query. This fetch plan would fetch the database object along 
> with the MTable object.
> We would first create a fetch plan on the pmf
> {code}
> pmf.getFetchGroup(MTable.class, 
> "mtable_db_fetch_group").addMember("database");
> {code}
> Then we use it just before calling the query
> {code}
> pm.getFetchPlan().addGroup("mtable_db_fetch_group");
> query = pm.newQuery(MTable.class, filterBuilder.toString());
> Collection tables = (Collection) query.executeWithArray(...);
> ...
> {code}
> Before the API call ends, we can remove the fetch plan by
> {code}
> pm.getFetchPlan().removeGroup("mtable_db_fetch_group");
> {code}
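Putting the snippets above together, a minimal sketch of the described fetch-group pattern using plain javax.jdo APIs. The surrounding class, method signature, and filter handling are assumptions for illustration; this is not the actual ObjectStore code:

{code}
import java.util.Collection;

import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Query;

import org.apache.hadoop.hive.metastore.model.MTable;

// Sketch only: obtaining the PersistenceManagerFactory/PersistenceManager and
// building the filter string are assumed to happen as in the existing
// ObjectStore code; they are not shown here.
public class FetchGroupSketch {

  @SuppressWarnings("rawtypes")
  public Collection listTablesWithDatabases(PersistenceManagerFactory pmf,
      PersistenceManager pm, String filter, Object[] params) {
    // Register a fetch group on the factory that loads the 'database' field
    // eagerly whenever an MTable is fetched under this group.
    pmf.getFetchGroup(MTable.class, "mtable_db_fetch_group").addMember("database");

    // Activate the group only around this query, and always deactivate it so
    // the rest of the API call keeps the default (lazy) fetch behaviour.
    pm.getFetchPlan().addGroup("mtable_db_fetch_group");
    try {
      Query query = pm.newQuery(MTable.class, filter);
      return (Collection) query.executeWithArray(params);
    } finally {
      pm.getFetchPlan().removeGroup("mtable_db_fetch_group");
    }
  }
}
{code}

Scoping the group with a try/finally keeps the eager database fetch limited to just this query, which is the point of the proposal above.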



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21028) get_table_meta should use a fetch plan to avoid race conditions ending up in NucleusObjectNotFoundException

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719758#comment-16719758
 ] 

Hive QA commented on HIVE-21028:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
5s{color} | {color:blue} standalone-metastore/metastore-server in master has 
188 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15292/dev-support/hive-personality.sh
 |
| git revision | master / a43581b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15292/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> get_table_meta should use a fetch plan to avoid race conditions ending up in 
> NucleusObjectNotFoundException
> ---
>
> Key: HIVE-21028
> URL: https://issues.apache.org/jira/browse/HIVE-21028
> Project: Hive
>  Issue Type: Bug
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Major
> Attachments: HIVE-21028.1.patch
>
>
> The {{getTableMeta}} call retrieves the tables and, while looping through 
> them, retrieves each table's database object to get the containing database 
> name. DataNucleus retrieves lazily, so the first call, which fetches all the 
> tables, does not retrieve the database objects.
> When this query is executed
> {code}query = pm.newQuery(MTable.class, filterBuilder.toString());
> {code}
> it loads all the tables, and when you do
> {code}
> table.getDatabase().getName()
> {code}
> it then goes and retrieves the database object.
> *However*, another thread may have deleted the database in the meantime! If 
> this happens, we end up with exceptions such as
> {code}
> 2018-12-04 22:25:06,525 INFO  DataNucleus.Datastore.Retrieve: 
> [pool-7-thread-191]: Object with id 
> "6930391[OID]org.apache.hadoop.hive.metastore.model.MTable" not found !
> 2018-12-04 22:25:06,527 WARN  DataNucleus.Persistence: [pool-7-thread-191]: 
> Exception thrown by StateManager.isLoaded
> No such database row
> 

[jira] [Commented] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719747#comment-16719747
 ] 

Hive QA commented on HIVE-21032:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951575/HIVE-21032.02.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 15587 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestSSL.testMetastoreWithSSL (batchId=255)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=259)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15291/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15291/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15291/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951575 - PreCommit-HIVE-Build

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch, HIVE-21032.02.patch
>
>
> HiveMetaTool does everything in one class; it needs to be refactored into a 
> cleaner design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719745#comment-16719745
 ] 

Hive QA commented on HIVE-21032:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
5s{color} | {color:blue} standalone-metastore/metastore-server in master has 
188 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
38s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 6s{color} | {color:green} The patch metastore-server passed checkstyle {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 1 
fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch . passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} standalone-metastore/metastore-server generated 0 new 
+ 187 unchanged - 1 fixed = 187 total (was 188) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15291/dev-support/hive-personality.sh
 |
| git revision | master / a43581b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server ql . itests/hive-unit U: . 
|
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15291/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Refactor HiveMetaTool
> -
>
>   

[jira] [Commented] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719711#comment-16719711
 ] 

Hive QA commented on HIVE-21035:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951555/HIVE-21035.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15571 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15290/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15290/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15290/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951555 - PreCommit-HIVE-Build

> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch, HIVE-21035.02.patch
>
>
> When multiple queries are executed in a single session, a race condition can 
> cause multiple Spark application masters to be launched.
> In this case, the one that started earlier is not killed when the Hive 
> session closes, and it keeps consuming resources.
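A minimal sketch of the usual way to close such a race: guard the get-or-create path with a lock so at most one application master is launched per session. All names below are hypothetical illustrations, not the actual SparkUtilities code:

{code}
// Hypothetical sketch: a per-session holder that launches the Spark
// application master at most once, even when several queries ask for the
// session concurrently, and closes whatever was launched on session close.
public class SparkSessionHolder {

  public interface SparkSessionHandle {   // stand-in for the real session type
    void close();
  }

  private final Object lock = new Object();
  private SparkSessionHandle session;     // guarded by 'lock'

  public SparkSessionHandle getOrCreate() {
    synchronized (lock) {
      if (session == null) {
        session = launchApplicationMaster();   // expensive: starts the Spark AM
      }
      return session;
    }
  }

  public void close() {
    synchronized (lock) {
      if (session != null) {
        session.close();   // the only AM ever launched is the one that gets killed
        session = null;
      }
    }
  }

  private SparkSessionHandle launchApplicationMaster() {
    // Placeholder for the real submission logic.
    return () -> { };
  }
}
{code}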



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21022:
--
Attachment: HIVE-21022.05
Status: Patch Available  (was: In Progress)

Reattaching to trigger ptests a second time.

> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04, HIVE-21022.05, HIVE-21022.05
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> the error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests are using the same root namespace and hence the reason 
> for this failure could be that the root namespace becomes unavailable to one 
> test when the other drops it. The drop seems to be happening automatically 
> through TestingServer code.
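A minimal sketch of one way a test can avoid that collision: give each test run its own root namespace on the Curator TestingServer instead of sharing /hs2mszktest. The namespace prefix and client setup below are assumptions for illustration, not the actual patch:

{code}
import java.util.UUID;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.test.TestingServer;

// Sketch: each test builds its own unique namespace (chroot), so one test
// dropping its root znode cannot make another test's namespace disappear.
public class UniqueZkNamespaceExample {
  public static void main(String[] args) throws Exception {
    try (TestingServer zk = new TestingServer()) {
      String ns = "hs2mszktest_" + UUID.randomUUID().toString().replace("-", "");
      CuratorFramework client = CuratorFrameworkFactory.builder()
          .connectString(zk.getConnectString())
          .namespace(ns)                                    // per-test root
          .retryPolicy(new ExponentialBackoffRetry(1000, 3))
          .build();
      client.start();
      client.create().creatingParentsIfNeeded().forPath("/instances/hs2-1");
      System.out.println(client.getChildren().forPath("/instances"));
      client.close();
    }
  }
}
{code}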



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719691#comment-16719691
 ] 

Hive QA commented on HIVE-21035:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} ql: The patch generated 0 new + 4 unchanged - 1 
fixed = 4 total (was 5) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15290/dev-support/hive-personality.sh
 |
| git revision | master / a43581b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15290/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch, HIVE-21035.02.patch
>
>
> When multiple queries are executed in a single session, a race condition can 
> cause multiple Spark application masters to be launched.
> In this case, the one that started earlier is not killed when the Hive 
> session closes, and it keeps consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21022:
--
Status: In Progress  (was: Patch Available)

> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04, HIVE-21022.05
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> the error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests are using the same root namespace and hence the reason 
> for this failure could be that the root namespace becomes unavailable to one 
> test when the other drops it. The drop seems to be happening automatically 
> through TestingServer code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20943) Handle Compactor transaction abort properly

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719674#comment-16719674
 ] 

Hive QA commented on HIVE-20943:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951553/HIVE-20943.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15289/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15289/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15289/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12951553/HIVE-20943.02.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951553 - PreCommit-HIVE-Build

> Handle Compactor transaction abort properly
> ---
>
> Key: HIVE-20943
> URL: https://issues.apache.org/jira/browse/HIVE-20943
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-20943.01.patch, HIVE-20943.02.patch, 
> HIVE-20943.02.patch
>
>
> A transaction in which the Worker runs may fail after base_x_cZ 
> (delta_x_y_cZ) is created but before the files are fully written.  Need to 
> make sure to write an entry corresponding to Z to TXN_COMPONENTS so that the 
> "_cZ" directories are not read by anyone and are cleaned up by the Cleaner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20943) Handle Compactor transaction abort properly

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719673#comment-16719673
 ] 

Hive QA commented on HIVE-20943:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951553/HIVE-20943.02.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15571 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15288/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15288/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15288/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951553 - PreCommit-HIVE-Build

> Handle Compactor transaction abort properly
> ---
>
> Key: HIVE-20943
> URL: https://issues.apache.org/jira/browse/HIVE-20943
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-20943.01.patch, HIVE-20943.02.patch, 
> HIVE-20943.02.patch
>
>
> A transaction in which the Worker runs may fail after base_x_cZ 
> (delta_x_y_cZ) is created but before the files are fully written.  Need to 
> make sure to write an entry corresponding to Z to TXN_COMPONENTS so that the 
> "_cZ" directories are not read by anyone and are cleaned up by the Cleaner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20943) Handle Compactor transaction abort properly

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719648#comment-16719648
 ] 

Hive QA commented on HIVE-20943:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
4s{color} | {color:blue} standalone-metastore/metastore-server in master has 
188 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
40s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 21 new + 587 unchanged - 15 
fixed = 608 total (was 602) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 187 unchanged - 1 fixed = 188 total (was 188) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} standalone-metastore_metastore-server generated 1 new 
+ 48 unchanged - 0 fixed = 49 total (was 48) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  
org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.updateCompactorState(CompactionInfo,
 long) passes a nonconstant String to an execute or addBatch method on an SQL 
statement  At CompactionTxnHandler.java:String to an execute or addBatch method 
on an SQL statement  At CompactionTxnHandler.java:[line 786] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15288/dev-support/hive-personality.sh
 |
| git revision | master / a43581b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15288/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15288/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15288/yetus/diff-javadoc-javadoc-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15288/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.
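For reference, a generic sketch of what the FindBugs warning above (nonconstant String passed to execute in CompactionTxnHandler.updateCompactorState) asks for: pass a constant SQL string to a PreparedStatement and bind the values, instead of concatenating them into the statement. The table and column names below are illustrative, not the actual metastore code:

{code}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class PreparedStatementSketch {
  // Flagged style: the values are concatenated into a nonconstant SQL string.
  static void updateWithConcat(Connection conn, long highestWriteId, long id)
      throws SQLException {
    try (Statement stmt = conn.createStatement()) {
      stmt.executeUpdate("UPDATE COMPACTION_QUEUE SET CQ_HIGHEST_WRITE_ID = "
          + highestWriteId + " WHERE CQ_ID = " + id);
    }
  }

  // Preferred style: a constant SQL string with bound parameters.
  static void updateWithBind(Connection conn, long highestWriteId, long id)
      throws SQLException {
    String sql = "UPDATE COMPACTION_QUEUE SET CQ_HIGHEST_WRITE_ID = ? WHERE CQ_ID = ?";
    try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
      pstmt.setLong(1, highestWriteId);
      pstmt.setLong(2, id);
      pstmt.executeUpdate();
    }
  }
}
{code}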



> Handle 

[jira] [Commented] (HIVE-21021) Scalar subquery with only aggregate in subquery (no group by) has unnecessary sq_count_check branch

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719612#comment-16719612
 ] 

Hive QA commented on HIVE-21021:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951539/HIVE-21021.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 15570 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[implicit_cast_during_insert]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge3] (batchId=63)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization2]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization]
 (batchId=172)
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=229)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15287/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15287/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15287/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951539 - PreCommit-HIVE-Build

> Scalar subquery with only aggregate in subquery (no group by) has unnecessary 
> sq_count_check branch
> ---
>
> Key: HIVE-21021
> URL: https://issues.apache.org/jira/browse/HIVE-21021
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21021.1.patch, HIVE-21021.2.patch, 
> HIVE-21021.3.patch, HIVE-21021.4.patch, HIVE-21021.5.patch
>
>
> {code:sql}
> CREATE TABLE `store_sales`(
>   `ss_sold_date_sk` int,
>   `ss_quantity` int,
>   `ss_list_price` decimal(7,2));
> CREATE TABLE `date_dim`(
>   `d_date_sk` int,
>   `d_year` int);
> explain cbo with avg_sales as
>  (select avg(quantity*list_price) average_sales
>   from (select ss_quantity quantity
>  ,ss_list_price list_price
>from store_sales
>,date_dim
>where ss_sold_date_sk = d_date_sk
>  and d_year between 1999 and 2001 ) x)
> select * from store_sales where ss_list_price > (select average_sales from 
> avg_sales);
> {code}
> {noformat}
> CBO PLAN:
> HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
>   HiveJoin(condition=[true], joinType=[inner], algorithm=[none], cost=[{2.0 
> rows, 0.0 cpu, 0.0 io}])
> HiveJoin(condition=[>($2, $3)], joinType=[inner], algorithm=[none], 
> cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
> HiveTableScan(table=[[sub, store_sales]], table:alias=[store_sales])
>   HiveProject($f0=[/($0, $1)])
> HiveAggregate(group=[{}], agg#0=[sum($0)], agg#1=[count($0)])
>   HiveProject($f0=[*(CAST($1):DECIMAL(10, 0), $2)])
> HiveJoin(condition=[=($0, $3)], joinType=[inner], 
> algorithm=[none], cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], 
> ss_list_price=[$2])
> HiveFilter(condition=[IS NOT NULL($0)])
>   HiveTableScan(table=[[sub, store_sales]], 
> table:alias=[store_sales])
>   HiveProject(d_date_sk=[$0])
> 

[jira] [Updated] (HIVE-21030) Add credential store env properties redaction in JobConf

2018-12-12 Thread Denys Kuzmenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-21030:
--
Attachment: HIVE-21030.5.patch

> Add credential store env properties redaction in JobConf
> 
>
> Key: HIVE-21030
> URL: https://issues.apache.org/jira/browse/HIVE-21030
> Project: Hive
>  Issue Type: Bug
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-21030.1.patch, HIVE-21030.2.patch, 
> HIVE-21030.3.patch, HIVE-21030.4.patch, HIVE-21030.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HIVE-17020) Aggressive RS dedup can incorrectly remove OP tree branch

2018-12-12 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reopened HIVE-17020:


Reverted this since it was causing dynamic_sort test failures

> Aggressive RS dedup can incorrectly remove OP tree branch
> -
>
> Key: HIVE-17020
> URL: https://issues.apache.org/jira/browse/HIVE-17020
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-17020.1.patch, HIVE-17020.2.patch, 
> HIVE-17020.3.patch
>
>
> Suppose we have an OP tree like this:
> {noformat}
>  ...
>   |
>  RS[1]
>   |
> SEL[2]
> /\
> SEL[3]   SEL[4]
>   | |
> RS[5] FS[6]
>   |
>  ... 
> {noformat}
> When doing aggressive RS dedup, we'll remove all the operators between RS5 
> and RS1, and thus the branch containing FS6 is lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16957) Support CTAS for auto gather column stats

2018-12-12 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719597#comment-16719597
 ] 

Jesus Camacho Rodriguez commented on HIVE-16957:


[~ashutoshc], could you take a look? https://reviews.apache.org/r/69562/
Thanks

> Support CTAS for auto gather column stats
> -
>
> Key: HIVE-16957
> URL: https://issues.apache.org/jira/browse/HIVE-16957
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-16957.01.patch, HIVE-16957.patch
>
>
> The idea is to rely as much as possible on the logic in 
> ColumnStatsSemanticAnalyzer as other operations do. In particular, they 
> create an 'analyze table t compute statistics for columns' statement, use 
> ColumnStatsSemanticAnalyzer to parse it, and connect the resulting plan to the 
> existing INSERT/INSERT OVERWRITE statement. The challenge for CTAS or CREATE 
> MATERIALIZED VIEW is that the table object does not exist yet, hence we 
> cannot rely fully on ColumnStatsSemanticAnalyzer.
> Thus, we use the same process, but ColumnStatsSemanticAnalyzer produces a 
> statement for column stats collection that uses a table values clause instead 
> of the original table reference:
> {code}
> select compute_stats(col1), compute_stats(col2), compute_stats(col3)
> from table(values(cast(null as int), cast(null as int), cast(null as 
> string))) as t(col1, col2, col3);
> {code}
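
For illustration only (not part of the quoted issue text): a minimal sketch of a
CTAS that would exercise this path, assuming the usual {{src}} test table and the
column-stats autogather setting (the property name {{hive.stats.column.autogather}}
is an assumption here). Since the target table does not exist at analysis time,
the analyzer would emit a compute_stats query over a table values clause like the
one quoted above.

{code:sql}
-- Assumed property name; enables automatic column stats collection.
SET hive.stats.column.autogather=true;

-- Hypothetical CTAS over the usual src test table: the target table t_ctas
-- does not exist yet, so the generated stats statement must use a values
-- clause rather than the new table.
CREATE TABLE t_ctas AS
SELECT key AS col1, key AS col2, value AS col3 FROM src;
{code}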



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16957) Support CTAS for auto gather column stats

2018-12-12 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-16957:
---
Attachment: HIVE-16957.01.patch

> Support CTAS for auto gather column stats
> -
>
> Key: HIVE-16957
> URL: https://issues.apache.org/jira/browse/HIVE-16957
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-16957.01.patch, HIVE-16957.patch
>
>
> The idea is to rely as much as possible on the logic in 
> ColumnStatsSemanticAnalyzer as other operations do. In particular, they 
> create an 'analyze table t compute statistics for columns' statement, use 
> ColumnStatsSemanticAnalyzer to parse it, and connect the resulting plan to the 
> existing INSERT/INSERT OVERWRITE statement. The challenge for CTAS or CREATE 
> MATERIALIZED VIEW is that the table object does not exist yet, hence we 
> cannot rely fully on ColumnStatsSemanticAnalyzer.
> Thus, we use the same process, but ColumnStatsSemanticAnalyzer produces a 
> statement for column stats collection that uses a table values clause instead 
> of the original table reference:
> {code}
> select compute_stats(col1), compute_stats(col2), compute_stats(col3)
> from table(values(cast(null as int), cast(null as int), cast(null as 
> string))) as t(col1, col2, col3);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21028) get_table_meta should use a fetch plan to avoid race conditions ending up in NucleusObjectNotFoundException

2018-12-12 Thread Karthik Manamcheri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Manamcheri updated HIVE-21028:
--
Attachment: HIVE-21028.1.patch

> get_table_meta should use a fetch plan to avoid race conditions ending up in 
> NucleusObjectNotFoundException
> ---
>
> Key: HIVE-21028
> URL: https://issues.apache.org/jira/browse/HIVE-21028
> Project: Hive
>  Issue Type: Bug
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Major
> Attachments: HIVE-21028.1.patch
>
>
> The {{getTableMeta}} call retrieves the tables, loops through the tables and 
> during this loop it retrieves the database object to get the containing 
> database name. DataNucleus does a lazy retrieval and so, when the first call 
> to get all the tables is done, it does not retrieve the database objects.
> When this query is executed
> {code}query = pm.newQuery(MTable.class, filterBuilder.toString());
> {code}
> it loads all the tables, and when you do
> {code}
> table.getDatabase().getName()
> {code}
> it then goes and retrieves the database object.
> *However*, there could be another thread which actually has deleted the 
> database!! If this happens, we end up with exceptions such as
> {code}
> 2018-12-04 22:25:06,525 INFO  DataNucleus.Datastore.Retrieve: 
> [pool-7-thread-191]: Object with id 
> "6930391[OID]org.apache.hadoop.hive.metastore.model.MTable" not found !
> 2018-12-04 22:25:06,527 WARN  DataNucleus.Persistence: [pool-7-thread-191]: 
> Exception thrown by StateManager.isLoaded
> No such database row
> org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database 
> row
> {code}
> We see this happen especially with calls which retrieve all the tables in all 
> the databases (basically a call to get_table_meta with dbNames="\*" and 
> tableNames="\*").
> To avoid this, we can define a custom fetch plan and activate it only for the 
> get_table_meta query. This fetch plan would fetch the database object along 
> with the MTable object.
> We would first create a fetch plan on the pmf
> {code}
> pmf.getFetchGroup(MTable.class, 
> "mtable_db_fetch_group").addMember("database");
> {code}
> Then we use it just before calling the query
> {code}
> pm.getFetchPlan().addGroup("mtable_db_fetch_group");
> query = pm.newQuery(MTable.class, filterBuilder.toString());
> Collection tables = (Collection) query.executeWithArray(...);
> ...
> {code}
> Before the API call ends, we can remove the fetch plan by
> {code}
> pm.getFetchPlan().removeGroup("mtable_db_fetch_group");
> {code}
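
For illustration only, the fetch-plan snippets quoted above assembled into one
method; a minimal sketch assuming the standard JDO FetchGroup/FetchPlan API, with
hypothetical parameter names ({{filterBuilder}}, {{params}}) rather than the
actual HIVE-21028 patch.

{code:java}
import java.util.Collection;

import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Query;

import org.apache.hadoop.hive.metastore.model.MTable;

class GetTableMetaFetchPlanSketch {
  Collection<?> listTables(PersistenceManagerFactory pmf, PersistenceManager pm,
      StringBuilder filterBuilder, Object[] params) {
    // Declare a fetch group that loads the database reference eagerly with MTable.
    pmf.getFetchGroup(MTable.class, "mtable_db_fetch_group").addMember("database");
    // Activate it only for this query so other callers keep lazy loading.
    pm.getFetchPlan().addGroup("mtable_db_fetch_group");
    try {
      Query query = pm.newQuery(MTable.class, filterBuilder.toString());
      // Each MTable now carries its database, so table.getDatabase().getName()
      // cannot race against a concurrent database drop between two lazy loads.
      return (Collection<?>) query.executeWithArray(params);
    } finally {
      // Deactivate the group before the API call returns.
      pm.getFetchPlan().removeGroup("mtable_db_fetch_group");
    }
  }
}
{code}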



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21028) get_table_meta should use a fetch plan to avoid race conditions ending up in NucleusObjectNotFoundException

2018-12-12 Thread Karthik Manamcheri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Manamcheri updated HIVE-21028:
--
Status: Patch Available  (was: In Progress)

> get_table_meta should use a fetch plan to avoid race conditions ending up in 
> NucleusObjectNotFoundException
> ---
>
> Key: HIVE-21028
> URL: https://issues.apache.org/jira/browse/HIVE-21028
> Project: Hive
>  Issue Type: Bug
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Major
> Attachments: HIVE-21028.1.patch
>
>
> The {{getTableMeta}} call retrieves the tables, loops through the tables and 
> during this loop it retrieves the database object to get the containing 
> database name. DataNucleus does a lazy retrieval and so, when the first call 
> to get all the tables is done, it does not retrieve the database objects.
> When this query is executed
> {code}query = pm.newQuery(MTable.class, filterBuilder.toString());
> {code}
> it loads all the tables, and when you do
> {code}
> table.getDatabase().getName()
> {code}
> it then goes and retrieves the database object.
> *However*, there could be another thread which actually has deleted the 
> database!! If this happens, we end up with exceptions such as
> {code}
> 2018-12-04 22:25:06,525 INFO  DataNucleus.Datastore.Retrieve: 
> [pool-7-thread-191]: Object with id 
> "6930391[OID]org.apache.hadoop.hive.metastore.model.MTable" not found !
> 2018-12-04 22:25:06,527 WARN  DataNucleus.Persistence: [pool-7-thread-191]: 
> Exception thrown by StateManager.isLoaded
> No such database row
> org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database 
> row
> {code}
> We see this happen especially with calls which retrieve all the tables in all 
> the databases (basically a call to get_table_meta with dbNames="\*" and 
> tableNames="\*").
> To avoid this, we can define a custom fetch plan and activate it only for the 
> get_table_meta query. This fetch plan would fetch the database object along 
> with the MTable object.
> We would first create a fetch plan on the pmf
> {code}
> pmf.getFetchGroup(MTable.class, 
> "mtable_db_fetch_group").addMember("database");
> {code}
> Then we use it just before calling the query
> {code}
> pm.getFetchPlan().addGroup("mtable_db_fetch_group");
> query = pm.newQuery(MTable.class, filterBuilder.toString());
> Collection tables = (Collection) query.executeWithArray(...);
> ...
> {code}
> Before the API call ends, we can remove the fetch plan by
> {code}
> pm.getFetchPlan().removeGroup("mtable_db_fetch_group");
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21021) Scalar subquery with only aggregate in subquery (no group by) has unnecessary sq_count_check branch

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719586#comment-16719586
 ] 

Hive QA commented on HIVE-21021:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15287/dev-support/hive-personality.sh
 |
| git revision | master / 881e291 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15287/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Scalar subquery with only aggregate in subquery (no group by) has unnecessary 
> sq_count_check branch
> ---
>
> Key: HIVE-21021
> URL: https://issues.apache.org/jira/browse/HIVE-21021
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21021.1.patch, HIVE-21021.2.patch, 
> HIVE-21021.3.patch, HIVE-21021.4.patch, HIVE-21021.5.patch
>
>
> {code:sql}
> CREATE TABLE `store_sales`(
>   `ss_sold_date_sk` int,
>   `ss_quantity` int,
>   `ss_list_price` decimal(7,2));
> CREATE TABLE `date_dim`(
>   `d_date_sk` int,
>   `d_year` int);
> explain cbo with avg_sales as
>  (select avg(quantity*list_price) average_sales
>   from (select ss_quantity quantity
>  ,ss_list_price list_price
>from store_sales
>,date_dim
>where ss_sold_date_sk = d_date_sk
>  and d_year between 1999 and 2001 ) x)
> select * from store_sales where ss_list_price > (select average_sales from 
> avg_sales);
> {code}
> {noformat}
> CBO PLAN:
> HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
>   HiveJoin(condition=[true], joinType=[inner], algorithm=[none], cost=[{2.0 
> rows, 0.0 cpu, 0.0 io}])
> HiveJoin(condition=[>($2, $3)], joinType=[inner], algorithm=[none], 
> cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
> HiveTableScan(table=[[sub, store_sales]], table:alias=[store_sales])
>   

[jira] [Commented] (HIVE-21036) extend OpenTxnRequest with transaction type

2018-12-12 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719573#comment-16719573
 ] 

Eugene Koifman commented on HIVE-21036:
---

FYI, [~ikryvenko]

> extend OpenTxnRequest with transaction type
> ---
>
> Key: HIVE-21036
> URL: https://issues.apache.org/jira/browse/HIVE-21036
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Priority: Major
>
> There is a {{TXN_TYPE}} field in the {{TXNS}} table.
> There is {{TxnHandler.TxnType}} with the legal values.  It would be useful to 
> make {{TxnType}} a {{Thrift}} type, add a new {{COMPACTION}} type and allow 
> setting it in {{OpenTxnRequest}}.
> Since HIVE-20823 the compactor starts a txn and should set this.
> Down the road we may want to set READ_ONLY, either based on parsing of the 
> query or on user input, which can make {{TxnHandler.commitTxn}} faster.
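
A minimal sketch, for illustration only, of what the proposed type might look like
once exposed; the member list here is an assumption (the existing
{{TxnHandler.TxnType}} values plus the proposed {{COMPACTION}}), written as plain
Java rather than the actual Thrift IDL.

{code:java}
// Assumed members; the authoritative list is TxnHandler.TxnType plus the
// COMPACTION value this issue proposes to add.
public enum TxnType {
  DEFAULT,      // ordinary read/write transaction
  REPL_CREATED, // opened by replication
  READ_ONLY,    // could let TxnHandler.commitTxn take a cheaper path
  COMPACTION    // proposed: opened by the compactor (see HIVE-20823)
}
{code}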



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20936) Allow the Worker thread in the metastore to run outside of it

2018-12-12 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719572#comment-16719572
 ] 

Eugene Koifman commented on HIVE-20936:
---

[~jmarhuen] I left some comments on RB

> Allow the Worker thread in the metastore to run outside of it
> -
>
> Key: HIVE-20936
> URL: https://issues.apache.org/jira/browse/HIVE-20936
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-20936.1.patch, HIVE-20936.2.patch, 
> HIVE-20936.3.patch, HIVE-20936.4.patch, HIVE-20936.5.patch, 
> HIVE-20936.6.patch, HIVE-20936.7.patch, HIVE-20936.8.patch, HIVE-20936.8.patch
>
>
> Currently the Worker thread in the metastore is bound to the metastore, 
> mainly because of the TxnHandler that it has. This thread runs some map 
> reduce jobs, which may not be an option wherever the metastore is 
> running. A solution for this can be to run this thread in HS2, depending on a 
> flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21031) Array with one empty string is inserted as an empty array

2018-12-12 Thread Patrick Byrnes (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719568#comment-16719568
 ] 

Patrick Byrnes commented on HIVE-21031:
---

This issue occurs when using
{code:java}
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t' ESCAPED BY '\\'{code}
but not when
{code:java}
STORED AS PARQUET{code}
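
For illustration, a minimal repro sketch of the two layouts being compared (table
names and column types are assumptions); the insert/select pair mirrors the queries
quoted in the issue below.

{code:sql}
-- Text layout with escaping, as in the failing case.
CREATE TABLE a_text (arr array<string>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t' ESCAPED BY '\\';

-- Parquet layout, reported to behave correctly.
CREATE TABLE a_parquet (arr array<string>)
STORED AS PARQUET;

INSERT INTO TABLE a_text SELECT array("");
INSERT INTO TABLE a_parquet SELECT array("");

-- Per the report: a_text returns [], a_parquet returns [""].
SELECT * FROM a_text;
SELECT * FROM a_parquet;
{code}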

> Array with one empty string is inserted as an empty array
> -
>
> Key: HIVE-21031
> URL: https://issues.apache.org/jira/browse/HIVE-21031
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.2
>Reporter: Patrick Byrnes
>Priority: Major
>
> In beeline the output of
> {code:java}
> select array("");{code}
> is:
> {code:java}
> [""]
> {code}
> However, the output of
> {code:java}
> insert into table a select array("");select * from a;{code}
> is one row of:
> {code:java}
> []{code}
>  
>  
> Similarly, the output of
> {code:java}
> select array(array()){code}
> is:
> {code:java}
> [[]]{code}
> However, the output of
> {code:java}
> insert into table b select array(array());select a,size(a) from b;{code}
> is one row of:
> {code:java}
> []{code}
>  
> Is there a way to insert an array whose only element is an empty string or an 
> array whose only element is an empty array into a table?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719563#comment-16719563
 ] 

Hive QA commented on HIVE-21022:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951533/HIVE-21022.05

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 15660 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[implicit_cast_during_insert]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge3] (batchId=63)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization2]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization]
 (batchId=172)
org.apache.hive.jdbc.TestSSL.testMetastoreWithSSL (batchId=255)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15286/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15286/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15286/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951533 - PreCommit-HIVE-Build

> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04, HIVE-21022.05
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests are using the same root namespace and hence the reason 
> for this failure could be that the root namespace becomes unavailable to one 
> test when the other drops it. The drop seems to be happening automatically 
> through TestingServer code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Status: Open  (was: Patch Available)

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch, HIVE-21032.02.patch
>
>
> HiveMetaTool is doing everything in one class; it needs to be refactored to have 
> a nicer design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Status: Patch Available  (was: Open)

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch, HIVE-21032.02.patch
>
>
> HiveMetaTool is doing everything in one class; it needs to be refactored to have 
> a nicer design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Attachment: HIVE-21032.02.patch

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch, HIVE-21032.02.patch
>
>
> HiveMetaTool is doing everything in one class; it needs to be refactored to have 
> a nicer design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719536#comment-16719536
 ] 

Hive QA commented on HIVE-21022:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
18s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
4s{color} | {color:blue} standalone-metastore/metastore-server in master has 
188 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15286/dev-support/hive-personality.sh
 |
| git revision | master / 881e291 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15286/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04, HIVE-21022.05
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests are using the same root namespace and hence the reason 

[jira] [Commented] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719524#comment-16719524
 ] 

Hive QA commented on HIVE-20733:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951531/HIVE-20733.8.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 15570 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[implicit_cast_during_insert]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge3] (batchId=63)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization2]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization]
 (batchId=172)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomNonExistent
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead 
(batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime
 (batchId=259)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime
 (batchId=259)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15285/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15285/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15285/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951531 - PreCommit-HIVE-Build

> GenericUDFOPEqualNS may not use = in plan descriptions
> --
>
> Key: HIVE-20733
> URL: https://issues.apache.org/jira/browse/HIVE-20733
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-20733.2.patch, HIVE-20733.3.patch, 
> HIVE-20733.4.patch, HIVE-20733.5.patch, HIVE-20733.6.patch, 
> HIVE-20733.7.patch, HIVE-20733.8.patch, HIVE-20733.patch
>
>
> right now GenericUDFOPEqualNS is displayed as "=" in explain output; however it 
> should be "<=>"
> this may cause some confusion...
> related qtest: is_distinct_from.q
> same: GenericUDFOPNotEqualNS
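
For illustration only: a minimal query, assuming the usual {{src}} test table,
whose explain output should show the null-safe operator as {{<=>}} rather than
{{=}}.

{code:sql}
-- Null-safe equality; the join predicate should render as (key <=> key) in the
-- plan, not as (key = key), otherwise it is indistinguishable from plain equality.
EXPLAIN
SELECT a.key, b.key
FROM src a JOIN src b ON a.key <=> b.key;
{code}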



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20998) HiveStrictManagedMigration utility should update DB/Table location as last migration steps

2018-12-12 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-20998:
--
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Committed to master

> HiveStrictManagedMigration utility should update DB/Table location as last 
> migration steps
> --
>
> Key: HIVE-20998
> URL: https://issues.apache.org/jira/browse/HIVE-20998
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20998.1.patch
>
>
> When processing a database or table, the HiveStrictManagedMigration utility 
> currently changes the database/table locations as the first step in 
> processing that database/table. Unfortunately if an error occurs while 
> processing this database or table, then there may still be migration work 
> that needs to continue for that db/table by running the migration again. 
> However the migration tool only processes dbs/tables that have the old 
> warehouse location, then the tool will skip over the db/table when the 
> migration is run again.
>  One fix here is to set the new location as the last step after all of the 
> migration work is done:
>  - The new table location will not be set until all of its partitions have 
> been successfully migrated.
>  - The new database location will not be set until all of its tables have 
> been successfully migrated.
> For existing migrations that failed with an error, the following workaround 
> can be done so that the db/tables can be re-processed by the migration tool:
>  1) Use the migration tool logs to find which databases/tables failed during 
> processing.
>  2) For each db/table, change the location of the database and table back to 
>  the old location:
>  ALTER DATABASE tpcds_bin_partitioned_orc_10 SET LOCATION 
> 'hdfs://ns1/apps/hive/warehouse/tpcds_bin_partitioned_orc_10.db';
>  ALTER TABLE tpcds_bin_partitioned_orc_10.store_sales SET LOCATION 
> 'hdfs://ns1/apps/hive/warehouse/tpcds_bin_partitioned_orc_10.db/store_sales';
>  3) Rerun the migration tool
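
The ALTER statements from step 2) above, collected into one block; the database
and table names are the example ones from the description, and the paths must be
adjusted to the actual old warehouse location.

{code:sql}
-- Reset the locations recorded by the failed run so the migration tool will
-- pick the database and its table up again on the next run.
ALTER DATABASE tpcds_bin_partitioned_orc_10 SET LOCATION
  'hdfs://ns1/apps/hive/warehouse/tpcds_bin_partitioned_orc_10.db';
ALTER TABLE tpcds_bin_partitioned_orc_10.store_sales SET LOCATION
  'hdfs://ns1/apps/hive/warehouse/tpcds_bin_partitioned_orc_10.db/store_sales';
{code}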



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719487#comment-16719487
 ] 

Hive QA commented on HIVE-20733:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 3 
fixed = 1 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15285/dev-support/hive-personality.sh
 |
| git revision | master / 8d084d6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15285/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> GenericUDFOPEqualNS may not use = in plan descriptions
> --
>
> Key: HIVE-20733
> URL: https://issues.apache.org/jira/browse/HIVE-20733
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-20733.2.patch, HIVE-20733.3.patch, 
> HIVE-20733.4.patch, HIVE-20733.5.patch, HIVE-20733.6.patch, 
> HIVE-20733.7.patch, HIVE-20733.8.patch, HIVE-20733.patch
>
>
> right now GenericUDFOPEqualNS is displayed as "=" in explain output; however it 
> should be "<=>"
> this may cause some confusion...
> related qtest: is_distinct_from.q
> same: GenericUDFOPNotEqualNS



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21030) Add credential store env properties redaction in JobConf

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719459#comment-16719459
 ] 

Hive QA commented on HIVE-21030:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951522/HIVE-21030.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 15570 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[implicit_cast_during_insert]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge3] (batchId=63)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization2]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization]
 (batchId=172)
org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1
 (batchId=254)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15284/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15284/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15284/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951522 - PreCommit-HIVE-Build

> Add credential store env properties redaction in JobConf
> 
>
> Key: HIVE-21030
> URL: https://issues.apache.org/jira/browse/HIVE-21030
> Project: Hive
>  Issue Type: Bug
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-21030.1.patch, HIVE-21030.2.patch, 
> HIVE-21030.3.patch, HIVE-21030.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20943) Handle Compactor transaction abort properly

2018-12-12 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719450#comment-16719450
 ] 

Eugene Koifman commented on HIVE-20943:
---

[~vgumashta] could you review please

> Handle Compactor transaction abort properly
> ---
>
> Key: HIVE-20943
> URL: https://issues.apache.org/jira/browse/HIVE-20943
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-20943.01.patch, HIVE-20943.02.patch, 
> HIVE-20943.02.patch
>
>
> A transaction in which the Worker runs may fail after base_x_cZ 
> (delta_x_y_cZ) is created but before files are fully written.  Need to make 
> sure to write to TXN_COMPONENTS an entry corresponding to Z so "_cZ" 
> directories are not read by anyone and are cleaned by the Cleaner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21036) extend OpenTxnRequest with transaction type

2018-12-12 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719437#comment-16719437
 ] 

Eugene Koifman commented on HIVE-21036:
---

should be done after HIVE-20943 and HIVE-20936

> extend OpenTxnRequest with transaction type
> ---
>
> Key: HIVE-21036
> URL: https://issues.apache.org/jira/browse/HIVE-21036
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Priority: Major
>
> There is a {{TXN_TYPE}} field in the {{TXNS}} table.
> There is {{TxnHandler.TxnType}} with the legal values.  It would be useful to 
> make {{TxnType}} a {{Thrift}} type, add a new {{COMPACTION}} type and allow 
> setting it in {{OpenTxnRequest}}.
> Since HIVE-20823 the compactor starts a txn and should set this.
> Down the road we may want to set READ_ONLY, either based on parsing of the 
> query or on user input, which can make {{TxnHandler.commitTxn}} faster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21030) Add credential store env properties redaction in JobConf

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719421#comment-16719421
 ] 

Hive QA commented on HIVE-21030:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
40s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15284/dev-support/hive-personality.sh
 |
| git revision | master / 8d084d6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15284/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add credential store env properties redaction in JobConf
> 
>
> Key: HIVE-21030
> URL: https://issues.apache.org/jira/browse/HIVE-21030
> Project: Hive
>  Issue Type: Bug
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-21030.1.patch, HIVE-21030.2.patch, 
> HIVE-21030.3.patch, HIVE-21030.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Antal Sinkovits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Sinkovits updated HIVE-21035:
---
Attachment: HIVE-21035.02.patch

> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch, HIVE-21035.02.patch
>
>
> It can happen that, when multiple queries are executed in one given session, a 
> race condition causes multiple Spark application masters to be kicked 
> off.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes, consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Antal Sinkovits (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719410#comment-16719410
 ] 

Antal Sinkovits commented on HIVE-21035:


[~xuefuz] Thanks for checking on this. 

The application master starts up when the first query is executed. If two 
queries are executed simultaneously (and the AM is not running), or the AM 
starts up slowly and the other query is executed during its startup, then it can 
happen that both queries try to kick one off. Unfortunately, when this happens, 
one of them won't get shut down.

I was able to reproduce the issue two ways. Once with HUE, which can execute 
multiple queries simultaneously in one session. 
Also programmatically, using the same JDBC connection with multiple threads 
creating their own statements.
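
For illustration only, a minimal sketch of the programmatic reproduction described
above: one HiveServer2 JDBC connection shared by two threads, each creating its own
statement and firing its first query at the same time. The URL, credentials and
table name are assumptions.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SparkSessionRaceRepro {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    // Assumed URL; point this at a HiveServer2 configured for Hive on Spark.
    try (Connection conn = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "hive", "")) {
      Runnable query = () -> {
        try (Statement stmt = conn.createStatement()) {
          // Any query that triggers a Spark job will do; replace some_table
          // with an existing table. With the race present, each thread can
          // end up launching its own application master.
          stmt.execute("SELECT count(*) FROM some_table");
        } catch (Exception e) {
          e.printStackTrace();
        }
      };
      Thread t1 = new Thread(query);
      Thread t2 = new Thread(query);
      t1.start();
      t2.start();
      t1.join();
      t2.join();
    }
  }
}
{code}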


> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch
>
>
> It can happen that, when multiple queries are executed in one given session, a 
> race condition causes multiple Spark application masters to be kicked 
> off.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes, consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719396#comment-16719396
 ] 

Hive QA commented on HIVE-21035:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951521/HIVE-21035.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 15571 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[implicit_cast_during_insert]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge3] (batchId=63)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization2]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization]
 (batchId=172)
org.apache.hadoop.hive.metastore.TestGetPartitionsUsingProjectionAndFilterSpecs.testGetPartitionsUsingValues
 (batchId=221)
org.apache.hadoop.hive.metastore.TestPartitionManagement.testPartitionDiscoveryTransactionalTable
 (batchId=219)
org.apache.hadoop.hive.ql.exec.tez.TestCustomPartitionVertex.testGetBytePayload 
(batchId=312)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testValidWriteIdListSnapshot
 (batchId=321)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15283/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15283/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15283/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951521 - PreCommit-HIVE-Build

> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch
>
>
> It can happen that, when multiple queries are executed in one given session, a 
> race condition causes multiple Spark application masters to be kicked 
> off.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes, consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20943) Handle Compactor transaction abort properly

2018-12-12 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-20943:
--
Attachment: HIVE-20943.02.patch

> Handle Compactor transaction abort properly
> ---
>
> Key: HIVE-20943
> URL: https://issues.apache.org/jira/browse/HIVE-20943
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-20943.01.patch, HIVE-20943.02.patch, 
> HIVE-20943.02.patch
>
>
> A transaction in which the Worker runs may fail after base_x_cZ 
> (delta_x_y_cZ) is created but before files are fully written.  Need to make 
> sure to write to TXN_COMPONENTS an entry corresponding to Z so "_cZ" 
> directories are not read by anyone and are cleaned by the Cleaner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20943) Handle Compactor transaction abort properly

2018-12-12 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-20943:
--
Attachment: HIVE-20943.02.patch

> Handle Compactor transaction abort properly
> ---
>
> Key: HIVE-20943
> URL: https://issues.apache.org/jira/browse/HIVE-20943
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-20943.01.patch, HIVE-20943.02.patch
>
>
> A transaction in which the Worker runs may fail after base_x_cZ 
> (delta_x_y_cZ) is created but before files are fully written.  Need to make 
> sure to write to TXN_COMPONENTS an entry corresponding to Z so "_cZ" 
> directories are not read by anyone and are cleaned by the Cleaner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Xuefu Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719356#comment-16719356
 ] 

Xuefu Zhang commented on HIVE-21035:


[~asinkovits] Thanks for working on this. Maybe I have missed something, but 
I'm wondering how multiple app masters can be created in one session. My 
understanding is that at most one master is created for one session while 
multiple queries can be submitted to the app master.

> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch
>
>
> It can happen that, when multiple queries are executed in one given session, a 
> race condition causes multiple Spark application masters to be kicked 
> off.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes, consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719341#comment-16719341
 ] 

Hive QA commented on HIVE-21035:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
43s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} ql: The patch generated 0 new + 4 unchanged - 1 
fixed = 4 total (was 5) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15283/dev-support/hive-personality.sh
 |
| git revision | master / 8d084d6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15283/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch
>
>
> It can happen that, when multiple queries are executed in one given session, 
> multiple Spark application masters get kicked off due to a race condition.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes, and keeps consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19968) UDF exception is not throw out

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719311#comment-16719311
 ] 

Hive QA commented on HIVE-19968:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951519/HIVE-19968.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 15570 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_reflect] (batchId=61)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=168)
org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning 
(batchId=331)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15282/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15282/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15282/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951519 - PreCommit-HIVE-Build

> UDF exception is not throw out
> --
>
> Key: HIVE-19968
> URL: https://issues.apache.org/jira/browse/HIVE-19968
> Project: Hive
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-19968.01.patch, hive-udf.png
>
>
> UDF init failed and threw an exception, but Hive caught it and did nothing, 
> leading to the application succeeding while no data is generated.
> {code}
> // GenericUDFReflect.java#evaluate()
> try {
>   o = null;
>   o = ReflectionUtils.newInstance(c, null);
> } catch (Exception e) {
>   // ignored
> }
> {code}
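
A minimal sketch of one possible fix, assuming nothing about the attached patch: wrap and rethrow the reflection failure as a HiveException so the query fails instead of silently producing no output.

{code:java}
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.util.ReflectionUtils;

class ReflectInstantiationSketch {
  // Rethrow instead of ignoring, so the error surfaces to the user.
  static Object instantiate(Class<?> c) throws HiveException {
    try {
      return ReflectionUtils.newInstance(c, null);
    } catch (Exception e) {
      throw new HiveException("reflect(): failed to instantiate " + c.getName(), e);
    }
  }
}
{code}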



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Forgetting to close operation cuts off any more HiveServer2 output

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
We had a custom client that did not handle closing the operations until the 
end of the session.  It is a mistake in the client, but it reveals something 
of a vulnerability in HiveServer2.

This happens if you have a session with (1) a HiveCommandOperation and (2) a 
SQLOperation and don't close them right after.  For example, a session that 
runs the operations (set a=b; select * from foobar;). 

When SQLOperation runs, it sets SessionState.out and err to System.out and 
System.err.  Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client closes the session, or disconnects, which triggers 
closeSession() on the Thrift side.  In this case, closeSession closes all the 
operations, starting with HiveCommandOperation.  This one closes all the 
streams, which are System.out and System.err as set by SQLOperation earlier.  
Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears, as System.out and System.err 
are closed.

  was:
We had a custom client that did not handle closing the operations until the 
end of the session.  It is a mistake in the client, but it reveals something 
of a vulnerability in HiveServer2.

This happens if you have a session with (1) a HiveCommandOperation and (2) a 
SQLOperation and don't close them right after.  For example, a session that 
runs the operations (set a=b; select * from foobar;). 

When SQLOperation runs, it sets SessionState.out and err to System.out and 
System.err.  Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client closes the session, or disconnects.  In this case, the Session 
closes all the operations, starting with HiveCommandOperation.  This one closes 
all the streams, which are System.out and System.err as set by SQLOperation 
earlier.  Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears, as System.out and System.err 
are closed.


> Forgetting to close operation cuts off any more HiveServer2 output
> --
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> We had a custom client that did not handle closing the operations until the 
> end of the session.  It is a mistake in the client, but it reveals something 
> of a vulnerability in HiveServer2.
> This happens if you have a session with (1) a HiveCommandOperation and (2) a 
> SQLOperation and don't close them right after.  For example, a session that 
> runs the operations (set a=b; select * from foobar;). 
> When SQLOperation runs, it sets SessionState.out and err to System.out and 
> System.err.  Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client closes the session, or disconnects, which triggers 
> closeSession() on the Thrift side.  In this case, closeSession closes all the 
> operations, starting with HiveCommandOperation.  This one closes all the 
> streams, which are System.out and System.err as set by SQLOperation earlier.  
> Ref: 
> [HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
> After this, no more HiveServer2 output appears, as System.out and System.err 
> are closed.
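
A minimal sketch of why this cuts off all later output, plus one defensive option; the class and method names are illustrative, not the actual HiveServer2 code or a proposed patch:

{code:java}
import java.io.PrintStream;

// SQLOperation points the session's streams at System.out/System.err. If a
// later tearDown closes whatever the session streams happen to be, it closes
// the process-wide streams and all further console output disappears.
class SessionIOSketch {
  private PrintStream out = System.out;   // as set up by the SQL-style operation

  void tearDownNaive() {
    out.close();                          // closes System.out for the whole JVM
  }

  void tearDownGuarded() {
    // One possible guard: never close the global streams, only streams the
    // operation itself created.
    if (out != System.out && out != System.err) {
      out.close();
    }
  }
}
{code}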



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Forgetting to close operation cuts off any more HiveServer2 output

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
We had a custom client that did not handle closing the operations until the 
end of the session.  It is a mistake in the client, but it reveals something 
of a vulnerability in HiveServer2.

This happens if you have a session with (1) a HiveCommandOperation and (2) a 
SQLOperation and don't close them right after.  For example, a session that 
runs the operations (set a=b; select * from foobar;). 

When SQLOperation runs, it sets SessionState.out and err to System.out and 
System.err.  Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client closes the session, or disconnects, which triggers 
closeSession() on the Thrift side.  In this case, closeSession closes all the 
operations, starting with HiveCommandOperation.  This closes all the 
streams, which are System.out and System.err as set by SQLOperation earlier.  
Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears, as System.out and System.err 
are closed.

  was:
We had a custom client that did not handle closing the operations until the 
end of the session.  It is a mistake in the client, but it reveals something 
of a vulnerability in HiveServer2.

This happens if you have a session with (1) a HiveCommandOperation and (2) a 
SQLOperation and don't close them right after.  For example, a session that 
runs the operations (set a=b; select * from foobar;). 

When SQLOperation runs, it sets SessionState.out and err to System.out and 
System.err.  Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client closes the session, or disconnects, which triggers 
closeSession() on the Thrift side.  In this case, closeSession closes all the 
operations, starting with HiveCommandOperation.  This one closes all the 
streams, which are System.out and System.err as set by SQLOperation earlier.  
Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears, as System.out and System.err 
are closed.


> Forgetting to close operation cuts off any more HiveServer2 output
> --
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> We had a custom client that did not handle closing the operations until the 
> end of the session.  It is a mistake in the client, but it reveals something 
> of a vulnerability in HiveServer2.
> This happens if you have a session with (1) a HiveCommandOperation and (2) a 
> SQLOperation and don't close them right after.  For example, a session that 
> runs the operations (set a=b; select * from foobar;). 
> When SQLOperation runs, it sets SessionState.out and err to System.out and 
> System.err.  Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client closes the session, or disconnects, which triggers 
> closeSession() on the Thrift side.  In this case, closeSession closes all 
> the operations, starting with HiveCommandOperation.  This closes all the 
> streams, which are System.out and System.err as set by SQLOperation earlier.  
> Ref: 
> [HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
> After this, no more HiveServer2 output appears, as System.out and System.err 
> are closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Forgetting to close operation cuts off any more HiveServer2 output

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
We had a custom client that did not handle closing the operations until the 
end of the session.  It is a mistake in the client, but it reveals something 
of a vulnerability in HiveServer2.

This happens if you have a session with (1) a HiveCommandOperation and (2) a 
SQLOperation and don't close them right after.  For example, a session that 
runs the operations (set a=b; select * from foobar;). 

When SQLOperation runs, it sets SessionState.out and err to System.out and 
System.err.  Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client closes the session, or disconnects.  In this case, the Session 
closes all the operations, starting with HiveCommandOperation.  This one closes 
all the streams, which are System.out and System.err as set by SQLOperation 
earlier.  Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears, as System.out and System.err 
are closed.

  was:
We had a custom client that did not handle closing the operation or session on 
the error case.  But it may also happen for any client that just disconnects in 
the middle of this operation.

This happens if you have a session with (1) a HiveCommandOperation and (2) a 
SQLOperation and don't close them right after.  For example, a session that 
runs the operations (set a=b; select * from foobar;). 

When SQLOperation runs, it sets SessionState.out and err to System.out and 
System.err.  Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client closes the session, or disconnects.  In this case, the Session 
closes all the operations, starting with HiveCommandOperation.  This one closes 
all the streams, which are System.out and System.err as set by SQLOperation 
earlier.  Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears, as System.out and System.err 
are closed.


> Forgetting to close operation cuts off any more HiveServer2 output
> --
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> We had a custom client that did not handle closing the operations until the 
> end of the session.  It is a mistake in the client, but it reveals something 
> of a vulnerability in HiveServer2.
> This happens if you have a session with (1) a HiveCommandOperation and (2) a 
> SQLOperation and don't close them right after.  For example, a session that 
> runs the operations (set a=b; select * from foobar;). 
> When SQLOperation runs, it sets SessionState.out and err to System.out and 
> System.err.  Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client closes the session, or disconnects.  In this case, the 
> Session closes all the operations, starting with HiveCommandOperation.  This 
> one closes all the streams, which are System.out and System.err as set by 
> SQLOperation earlier.  Ref: 
> [HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
> After this, no more HiveServer2 output appears, as System.out and System.err 
> are closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Forgetting to close operation cuts off any more HiveServer2 output

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
We had a custom client that did not handle closing the operation or session on 
the error case.  But it may also happen for any client that just disconnects in 
the middle of this operation.

This happens if you have a session with (1) a HiveCommandOperation and (2) a 
SQLOperation and don't close them right after.  For example, a session that 
runs the operations (set a=b; select * from foobar;). 

When SQLOperation runs, it sets SessionState.out and err to System.out and 
System.err.  Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client closes the session, or disconnects.  In this case, the Session 
closes all the operations, starting with HiveCommandOperation.  This one closes 
all the streams, which are System.out and System.err as set by SQLOperation 
earlier.  Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears, as System.out and System.err 
are closed.

  was:
It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client that did not handle closing the operation or session on the 
error case.  But it may also happen for any client that just disconnects in the 
middle of this operation.

Basically you have a session with both HiveCommandOperation and SQLOperation.  
For example, a session that runs the operations (set a=b; select * from 
foobar;). 

The SQLOperation runs last and sets SessionState.out and err to System.out and 
System.err.  Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client terminates without closing the session (in our case, a 
SemanticException triggered it).  deleteContext is called, which closes the 
session.  Ref: 
[ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]

The Session closes all the operations, starting with HiveCommandOperation.  
This one closes all the streams, which are System.out and System.err as set by 
SQLOperation earlier.  Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears, as System.out and System.err 
are closed.


> Forgetting to close operation cuts off any more HiveServer2 output
> --
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> We had a custom client that did not handle closing the operation or session 
> on the error case.  But it may also happen for any client that just 
> disconnects in the middle of this operation.
> This happens if you have a session with (1) a HiveCommandOperation and (2) a 
> SQLOperation and don't close them right after.  For example, a session that 
> runs the operations (set a=b; select * from foobar;). 
> When SQLOperation runs, it sets SessionState.out and err to System.out and 
> System.err.  Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client closes the session, or disconnects.  In this case, the 
> Session closes all the operations, starting with HiveCommandOperation.  This 
> one closes all the streams, which are System.out and System.err as set by 
> SQLOperation earlier.  Ref: 
> [HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
> After this, no more HiveServer2 output appears, as System.out and System.err 
> are closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21034) Add option to schematool to drop Hive databases

2018-12-12 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719291#comment-16719291
 ] 

Alan Gates commented on HIVE-21034:
---

It's not clear to me in an ephemeral cloud workload why you'd need this.  
Either it attaches to an existing HMS, in which case you don't want to drop all 
the databases, or it connects to an ephemeral HMS which will be dropped as part 
of dropping the HMS in the cloud.  Is there a case I'm missing?

I'm assuming there's no need to clean up the cloud datastore before returning 
it to the cloud provider as they'll clean it.  But if I'm wrong (or if for 
security reasons users don't want to return it to the provider uncleaned) then 
is the tool you're really looking for one that cleans out the entire metastore 
schema?  An inverse of --initSchema.

> Add option to schematool to drop Hive databases
> ---
>
> Key: HIVE-21034
> URL: https://issues.apache.org/jira/browse/HIVE-21034
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
>
> An option to remove all Hive managed data could be a useful addition to 
> {{schematool}}.
> I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all 
> databases with CASCADE* to remove all data of managed tables.
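
A rough sketch of what such a flag might do under the hood, using the existing metastore client API; the wrapper class, the skip of the default database, and the absence of any confirmation prompt are assumptions for illustration, not part of the proposal:

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

// Enumerate databases and drop each with CASCADE so managed-table data is
// removed as well. The "default" database cannot be dropped, so skip it.
public class DropAllDatabasesSketch {
  public static void main(String[] args) throws Exception {
    HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
    try {
      for (String db : client.getAllDatabases()) {
        if ("default".equalsIgnoreCase(db)) {
          continue;
        }
        // deleteData=true, ignoreUnknownDb=true, cascade=true
        client.dropDatabase(db, true, true, true);
      }
    } finally {
      client.close();
    }
  }
}
{code}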



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21021) Scalar subquery with only aggregate in subquery (no group by) has unnecessary sq_count_check branch

2018-12-12 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21021:
---
Status: Patch Available  (was: Open)

> Scalar subquery with only aggregate in subquery (no group by) has unnecessary 
> sq_count_check branch
> ---
>
> Key: HIVE-21021
> URL: https://issues.apache.org/jira/browse/HIVE-21021
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21021.1.patch, HIVE-21021.2.patch, 
> HIVE-21021.3.patch, HIVE-21021.4.patch, HIVE-21021.5.patch
>
>
> {code:sql}
> CREATE TABLE `store_sales`(
>   `ss_sold_date_sk` int,
>   `ss_quantity` int,
>   `ss_list_price` decimal(7,2));
> CREATE TABLE `date_dim`(
>   `d_date_sk` int,
>   `d_year` int);
> explain cbo with avg_sales as
>  (select avg(quantity*list_price) average_sales
>   from (select ss_quantity quantity
>  ,ss_list_price list_price
>from store_sales
>,date_dim
>where ss_sold_date_sk = d_date_sk
>  and d_year between 1999 and 2001 ) x)
> select * from store_sales where ss_list_price > (select average_sales from 
> avg_sales);
> {code}
> {noformat}
> CBO PLAN:
> HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
>   HiveJoin(condition=[true], joinType=[inner], algorithm=[none], cost=[{2.0 
> rows, 0.0 cpu, 0.0 io}])
> HiveJoin(condition=[>($2, $3)], joinType=[inner], algorithm=[none], 
> cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
> HiveTableScan(table=[[sub, store_sales]], table:alias=[store_sales])
>   HiveProject($f0=[/($0, $1)])
> HiveAggregate(group=[{}], agg#0=[sum($0)], agg#1=[count($0)])
>   HiveProject($f0=[*(CAST($1):DECIMAL(10, 0), $2)])
> HiveJoin(condition=[=($0, $3)], joinType=[inner], 
> algorithm=[none], cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], 
> ss_list_price=[$2])
> HiveFilter(condition=[IS NOT NULL($0)])
>   HiveTableScan(table=[[sub, store_sales]], 
> table:alias=[store_sales])
>   HiveProject(d_date_sk=[$0])
> HiveFilter(condition=[AND(BETWEEN(false, $1, 1999, 2001), IS 
> NOT NULL($0))])
>   HiveTableScan(table=[[sub, date_dim]], 
> table:alias=[date_dim])
> HiveProject(cnt=[$0])
>   HiveFilter(condition=[<=(sq_count_check($0), 1)])
> HiveProject(cnt=[$0])
>   HiveAggregate(group=[{}], cnt=[COUNT()])
> HiveProject
>   HiveProject($f0=[$0])
> HiveAggregate(group=[{}], agg#0=[count($0)])
>   HiveJoin(condition=[=($0, $3)], joinType=[inner], 
> algorithm=[none], cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
> HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], 
> ss_list_price=[$2])
>   HiveFilter(condition=[IS NOT NULL($0)])
> HiveTableScan(table=[[sub, store_sales]], 
> table:alias=[store_sales])
> HiveProject(d_date_sk=[$0])
>   HiveFilter(condition=[AND(BETWEEN(false, $1, 1999, 
> 2001), IS NOT NULL($0))])
> HiveTableScan(table=[[sub, date_dim]], 
> table:alias=[date_dim])
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21021) Scalar subquery with only aggregate in subquery (no group by) has unnecessary sq_count_check branch

2018-12-12 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21021:
---
Attachment: HIVE-21021.5.patch

> Scalar subquery with only aggregate in subquery (no group by) has unnecessary 
> sq_count_check branch
> ---
>
> Key: HIVE-21021
> URL: https://issues.apache.org/jira/browse/HIVE-21021
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21021.1.patch, HIVE-21021.2.patch, 
> HIVE-21021.3.patch, HIVE-21021.4.patch, HIVE-21021.5.patch
>
>
> {code:sql}
> CREATE TABLE `store_sales`(
>   `ss_sold_date_sk` int,
>   `ss_quantity` int,
>   `ss_list_price` decimal(7,2));
> CREATE TABLE `date_dim`(
>   `d_date_sk` int,
>   `d_year` int);
> explain cbo with avg_sales as
>  (select avg(quantity*list_price) average_sales
>   from (select ss_quantity quantity
>  ,ss_list_price list_price
>from store_sales
>,date_dim
>where ss_sold_date_sk = d_date_sk
>  and d_year between 1999 and 2001 ) x)
> select * from store_sales where ss_list_price > (select average_sales from 
> avg_sales);
> {code}
> {noformat}
> CBO PLAN:
> HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
>   HiveJoin(condition=[true], joinType=[inner], algorithm=[none], cost=[{2.0 
> rows, 0.0 cpu, 0.0 io}])
> HiveJoin(condition=[>($2, $3)], joinType=[inner], algorithm=[none], 
> cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
> HiveTableScan(table=[[sub, store_sales]], table:alias=[store_sales])
>   HiveProject($f0=[/($0, $1)])
> HiveAggregate(group=[{}], agg#0=[sum($0)], agg#1=[count($0)])
>   HiveProject($f0=[*(CAST($1):DECIMAL(10, 0), $2)])
> HiveJoin(condition=[=($0, $3)], joinType=[inner], 
> algorithm=[none], cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], 
> ss_list_price=[$2])
> HiveFilter(condition=[IS NOT NULL($0)])
>   HiveTableScan(table=[[sub, store_sales]], 
> table:alias=[store_sales])
>   HiveProject(d_date_sk=[$0])
> HiveFilter(condition=[AND(BETWEEN(false, $1, 1999, 2001), IS 
> NOT NULL($0))])
>   HiveTableScan(table=[[sub, date_dim]], 
> table:alias=[date_dim])
> HiveProject(cnt=[$0])
>   HiveFilter(condition=[<=(sq_count_check($0), 1)])
> HiveProject(cnt=[$0])
>   HiveAggregate(group=[{}], cnt=[COUNT()])
> HiveProject
>   HiveProject($f0=[$0])
> HiveAggregate(group=[{}], agg#0=[count($0)])
>   HiveJoin(condition=[=($0, $3)], joinType=[inner], 
> algorithm=[none], cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
> HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], 
> ss_list_price=[$2])
>   HiveFilter(condition=[IS NOT NULL($0)])
> HiveTableScan(table=[[sub, store_sales]], 
> table:alias=[store_sales])
> HiveProject(d_date_sk=[$0])
>   HiveFilter(condition=[AND(BETWEEN(false, $1, 1999, 
> 2001), IS NOT NULL($0))])
> HiveTableScan(table=[[sub, date_dim]], 
> table:alias=[date_dim])
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21021) Scalar subquery with only aggregate in subquery (no group by) has unnecessary sq_count_check branch

2018-12-12 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21021:
---
Status: Open  (was: Patch Available)

> Scalar subquery with only aggregate in subquery (no group by) has unnecessary 
> sq_count_check branch
> ---
>
> Key: HIVE-21021
> URL: https://issues.apache.org/jira/browse/HIVE-21021
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21021.1.patch, HIVE-21021.2.patch, 
> HIVE-21021.3.patch, HIVE-21021.4.patch, HIVE-21021.5.patch
>
>
> {code:sql}
> CREATE TABLE `store_sales`(
>   `ss_sold_date_sk` int,
>   `ss_quantity` int,
>   `ss_list_price` decimal(7,2));
> CREATE TABLE `date_dim`(
>   `d_date_sk` int,
>   `d_year` int);
> explain cbo with avg_sales as
>  (select avg(quantity*list_price) average_sales
>   from (select ss_quantity quantity
>  ,ss_list_price list_price
>from store_sales
>,date_dim
>where ss_sold_date_sk = d_date_sk
>  and d_year between 1999 and 2001 ) x)
> select * from store_sales where ss_list_price > (select average_sales from 
> avg_sales);
> {code}
> {noformat}
> CBO PLAN:
> HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
>   HiveJoin(condition=[true], joinType=[inner], algorithm=[none], cost=[{2.0 
> rows, 0.0 cpu, 0.0 io}])
> HiveJoin(condition=[>($2, $3)], joinType=[inner], algorithm=[none], 
> cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], ss_list_price=[$2])
> HiveTableScan(table=[[sub, store_sales]], table:alias=[store_sales])
>   HiveProject($f0=[/($0, $1)])
> HiveAggregate(group=[{}], agg#0=[sum($0)], agg#1=[count($0)])
>   HiveProject($f0=[*(CAST($1):DECIMAL(10, 0), $2)])
> HiveJoin(condition=[=($0, $3)], joinType=[inner], 
> algorithm=[none], cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
>   HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], 
> ss_list_price=[$2])
> HiveFilter(condition=[IS NOT NULL($0)])
>   HiveTableScan(table=[[sub, store_sales]], 
> table:alias=[store_sales])
>   HiveProject(d_date_sk=[$0])
> HiveFilter(condition=[AND(BETWEEN(false, $1, 1999, 2001), IS 
> NOT NULL($0))])
>   HiveTableScan(table=[[sub, date_dim]], 
> table:alias=[date_dim])
> HiveProject(cnt=[$0])
>   HiveFilter(condition=[<=(sq_count_check($0), 1)])
> HiveProject(cnt=[$0])
>   HiveAggregate(group=[{}], cnt=[COUNT()])
> HiveProject
>   HiveProject($f0=[$0])
> HiveAggregate(group=[{}], agg#0=[count($0)])
>   HiveJoin(condition=[=($0, $3)], joinType=[inner], 
> algorithm=[none], cost=[{2.0 rows, 0.0 cpu, 0.0 io}])
> HiveProject(ss_sold_date_sk=[$0], ss_quantity=[$1], 
> ss_list_price=[$2])
>   HiveFilter(condition=[IS NOT NULL($0)])
> HiveTableScan(table=[[sub, store_sales]], 
> table:alias=[store_sales])
> HiveProject(d_date_sk=[$0])
>   HiveFilter(condition=[AND(BETWEEN(false, $1, 1999, 
> 2001), IS NOT NULL($0))])
> HiveTableScan(table=[[sub, date_dim]], 
> table:alias=[date_dim])
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Forgetting to close operation cuts off any more HiveServer2 output

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Summary: Forgetting to close operation cuts off any more HiveServer2 output 
 (was: Sudden disconnect for a session with set and SQL operation cuts off any 
more HiveServer2 output)

> Forgetting to close operation cuts off any more HiveServer2 output
> --
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
> our custom client that did not handle closing the operation or session on the 
> error case.  But it may also happen for any client that just disconnects in 
> the middle of this operation.
> Basically you have a session with both HiveCommandOperation and SQLOperation. 
>  For example, a session that runs the operations (set a=b; select * from 
> foobar;). 
> The SQLOperation runs last and sets SessionState.out and err to System.out 
> and System.err.  Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client terminates without closing the session (in our case, a 
> SemanticException triggered it).  deleteContext is called, which closes 
> the session.  Ref: 
> [ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]
> The Session closes all the operations, starting with HiveCommandOperation.  
> This one closes all the streams, which are System.out and System.err as set by 
> SQLOperation earlier.  Ref: 
> [HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
> After this, no more HiveServer2 output appears, as System.out and System.err 
> are closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17020) Aggressive RS dedup can incorrectly remove OP tree branch

2018-12-12 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-17020:
---
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks [~lirui]

> Aggressive RS dedup can incorrectly remove OP tree branch
> -
>
> Key: HIVE-17020
> URL: https://issues.apache.org/jira/browse/HIVE-17020
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-17020.1.patch, HIVE-17020.2.patch, 
> HIVE-17020.3.patch
>
>
> Suppose we have an OP tree like this:
> {noformat}
>        ...
>         |
>       RS[1]
>         |
>       SEL[2]
>       /    \
>  SEL[3]    SEL[4]
>     |         |
>   RS[5]     FS[6]
>     |
>    ...
> {noformat}
> When doing aggressive RS dedup, we'll remove all the operators between RS5 
> and RS1, and thus the branch containing FS6 is lost.
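
A hedged sketch of the guard this implies, written against a generic operator-tree shape rather than Hive's real Operator classes: before collapsing the chain between the two ReduceSinks, verify that every intermediate operator has exactly one child; otherwise a side branch such as FS[6] above would be dropped.

{code:java}
import java.util.List;

class DedupGuardSketch {
  interface Op {                       // stand-in for an operator node
    List<Op> getChildren();
    Op getParent();
  }

  // Walk from the child RS up to the parent RS and refuse to dedup if any
  // operator on the path has more than one child, since removing it would
  // also remove the branch hanging off it.
  static boolean safeToRemovePath(Op childRS, Op parentRS) {
    for (Op cur = childRS.getParent(); cur != null && cur != parentRS;
         cur = cur.getParent()) {
      if (cur.getChildren().size() > 1) {
        return false;
      }
    }
    return true;
  }
}
{code}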



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19968) UDF exception is not throw out

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719281#comment-16719281
 ] 

Hive QA commented on HIVE-19968:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
41s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 1 new + 2 unchanged - 0 fixed 
= 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15282/dev-support/hive-personality.sh
 |
| git revision | master / d1e219d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15282/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15282/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> UDF exception is not throw out
> --
>
> Key: HIVE-19968
> URL: https://issues.apache.org/jira/browse/HIVE-19968
> Project: Hive
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-19968.01.patch, hive-udf.png
>
>
> UDF init failed and threw an exception, but Hive caught it and did nothing, 
> leading to the application succeeding while no data is generated.
> {code}
> // GenericUDFReflect.java#evaluate()
> try {
>   o = null;
>   o = ReflectionUtils.newInstance(c, null);
> } catch (Exception e) {
>   // ignored
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719258#comment-16719258
 ] 

Hive QA commented on HIVE-21032:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951513/HIVE-21032.01.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 15632 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testConcurrentDropPartitions 
(batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=229)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=229)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15281/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15281/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15281/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951513 - PreCommit-HIVE-Build

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch
>
>
> HiveMetaTool is doing everything in one class, needs to be refactored to have 
> a nicer design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719256#comment-16719256
 ] 

Hive QA commented on HIVE-21032:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
5s{color} | {color:blue} standalone-metastore/metastore-server in master has 
188 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
47s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 6s{color} | {color:green} The patch metastore-server passed checkstyle {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 1 
fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} The patch . passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} standalone-metastore/metastore-server generated 0 new 
+ 187 unchanged - 1 fixed = 187 total (was 188) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15281/dev-support/hive-personality.sh
 |
| git revision | master / b91b5f9 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server ql . itests/hive-unit U: . 
|
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15281/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Refactor HiveMetaTool
> -
>
>   

[jira] [Commented] (HIVE-21034) Add option to schematool to drop Hive databases

2018-12-12 Thread Daniel Voros (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719253#comment-16719253
 ] 

Daniel Voros commented on HIVE-21034:
-

Thank you for the feedback. I do agree with you, it's a dangerous operation, 
and we should make sure it's hard to invoke accidentally or maliciously. I was 
hoping to find some documentation on how to secure your deployment with respect 
to restricting access to schematool and other executables, but came back 
empty-handed, so I'm not sure what (if any) protections we already have (or 
suggest to have) in place.

What I had in mind for use-cases was ephemeral cloud workloads, where the users 
might want to drop everything once the job has finished.

> Add option to schematool to drop Hive databases
> ---
>
> Key: HIVE-21034
> URL: https://issues.apache.org/jira/browse/HIVE-21034
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
>
> An option to remove all Hive managed data could be a useful addition to 
> {{schematool}}.
> I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all 
> databases with CASCADE* to remove all data of managed tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21022:
--
Attachment: HIVE-21022.05
Status: Patch Available  (was: In Progress)

The patch enables the disabled test cases.

> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04, HIVE-21022.05
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests are using the same root namespace and hence the reason 
> for this failure could be that the root namespace becomes unavailable to one 
> test when the other drops it. The drop seems to be happening automatically 
> through TestingServer code.
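
One way to make the tests independent, sketched here with Curator as an assumption rather than a description of the attached patch: give each test its own root namespace instead of sharing /hs2mszktest, so one test tearing down its znodes cannot affect the other.

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class UniqueZkNamespaceSketch {
  // Derive a unique ZooKeeper root per test so tests never race on the same
  // znode tree; "hs2mszktest_" is just an illustrative prefix.
  public static CuratorFramework clientFor(String connectString, String testName) {
    CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString(connectString)
        .namespace("hs2mszktest_" + testName)
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .build();
    client.start();
    return client;
  }
}
{code}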



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21022:
--
Status: In Progress  (was: Patch Available)

> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests are using the same root namespace and hence the reason 
> for this failure could be that the root namespace becomes unavailable to one 
> test when the other drops it. The drop seems to be happening automatically 
> through TestingServer code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-12 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-20733:

Status: In Progress  (was: Patch Available)

> GenericUDFOPEqualNS may not use = in plan descriptions
> --
>
> Key: HIVE-20733
> URL: https://issues.apache.org/jira/browse/HIVE-20733
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-20733.2.patch, HIVE-20733.3.patch, 
> HIVE-20733.4.patch, HIVE-20733.5.patch, HIVE-20733.6.patch, 
> HIVE-20733.7.patch, HIVE-20733.8.patch, HIVE-20733.patch
>
>
> right now GenericUDFOPEqualNS is displayed as "=" in explains; however, it 
> should be "<=>".
> This may cause some confusion...
> related qtest: is_distinct_from.q
> same: GenericUDFOPNotEqualNS
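
A small sketch of the intent (an assumption, not the attached patch): the null-safe variants should render themselves with the null-safe operator in plan descriptions, e.g. via the display string that EXPLAIN uses.

{code:java}
// Illustrative only; mirrors the shape of GenericUDF#getDisplayString.
class NullSafeEqualsDisplaySketch {
  public String getDisplayString(String[] children) {
    // "<=>" instead of "=", so EXPLAIN output matches the actual semantics.
    return "(" + children[0] + " <=> " + children[1] + ")";
  }
}
{code}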



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20588) Queries INSERT INTO table1 SELECT * FROM table2 LIMIT A, B insert one more row than they should

2018-12-12 Thread Jaume M (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719215#comment-16719215
 ] 

Jaume M commented on HIVE-20588:


Hello [~klcopp], I just tried with the latest master and I still get the same 
results doing the queries in the description.

> Queries INSERT INTO table1 SELECT * FROM table2 LIMIT A, B insert one more 
> row than they should
> ---
>
> Key: HIVE-20588
> URL: https://issues.apache.org/jira/browse/HIVE-20588
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Jaume M
>Priority: Critical
>
> {code}
> 0: jdbc:hive2://hs2.example.com:10005/> CREATE TABLE atest1 (foo BIGINT, bar 
> STRING);
> No rows affected (0.199 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> INSERT INTO atest1 VALUES (1, 
> "1"),(2, "2"),(3, "3");
> No rows affected (8.209 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> CREATE TABLE atest2 (foo BIGINT, bar 
> STRING);
> No rows affected (0.156 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> INSERT INTO atest2 SELECT * FROM 
> atest1 LIMIT 1;
> No rows affected (8.205 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> SELECT COUNT(*) FROM atest2;
> +--+
> | _c0  |
> +--+
> | 1|
> +--+
> 1 row selected (0.133 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> TRUNCATE TABLE atest2;
> No rows affected (0.14 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> INSERT INTO atest2 SELECT * FROM 
> atest1 LIMIT 1,1;
> No rows affected (8.19 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> SELECT COUNT(*) FROM atest2;
> +--+
> | _c0  |
> +--+
> | 2|
> +--+
> 1 row selected (0.129 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/>
> 0: jdbc:hive2://hs2.example.com:10005/> SELECT * FROM atest1 LIMIT 1,1;
> +-+-+
> | atest1.foo  | atest1.bar  |
> +-+-+
> | 2   | 2   |
> +-+-+
> 1 row selected (0.197 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/>
> {code}
> When two arguments are specified for LIMIT, one more row than expected is 
> inserted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-12 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-20733:

Attachment: HIVE-20733.8.patch
Status: Patch Available  (was: In Progress)

> GenericUDFOPEqualNS may not use = in plan descriptions
> --
>
> Key: HIVE-20733
> URL: https://issues.apache.org/jira/browse/HIVE-20733
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-20733.2.patch, HIVE-20733.3.patch, 
> HIVE-20733.4.patch, HIVE-20733.5.patch, HIVE-20733.6.patch, 
> HIVE-20733.7.patch, HIVE-20733.8.patch, HIVE-20733.patch
>
>
> Right now GenericUDFOPEqualNS is displayed as "=" in explain output; however, 
> it should be "<=>".
> This may cause some confusion.
> Related qtest: is_distinct_from.q
> Same: GenericUDFOPNotEqualNS



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Sudden disconnect for a session with set and SQL operation cuts off any more HiveServer2 output

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client, which did not handle closing the operation or session on the 
error case. It may also happen for any client that simply disconnects in the 
middle of this operation.

Basically you have a session with both a HiveCommandOperation and a SQLOperation, 
for example a session that runs the operations (set a=b; select * from foobar; ).

The SQLOperation runs last and sets SessionState.out and err to System.out and 
System.err. Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client terminates without closing the session (in our case, a 
SemanticException triggered it). deleteContext is called, which closes the 
session. Ref: 
[ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]

The session closes all the operations, starting with HiveCommandOperation. That 
operation closes all the streams, which are System.out and System.err as set by 
SQLOperation earlier. Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears because System.out and System.err 
are closed.
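
A minimal, self-contained sketch (plain Java, not HiveServer2 code) of the effect described above: once a stream wrapping System.out is closed, every later write to System.out is silently dropped.

{code:java}
import java.io.PrintStream;

public class ClosedStdoutSketch {
  public static void main(String[] args) {
    System.out.println("before close - visible");

    // Analogous to the session holding System.out and a later
    // tearDownSessionIO() closing that stream:
    PrintStream sessionOut = new PrintStream(System.out, true);
    sessionOut.close();   // closes the underlying System.out as well

    // PrintStream swallows the resulting IOException, so this just disappears:
    System.out.println("after close - never shows up");
  }
}
{code}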

  was:
Its a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client that did not handle closing the session on the error case.  
But it may also happen for any client that just disconnects in the middle of 
this operation.

Basically you have a session with both HiveCommandOperation and SQLOperation.  
For example a session that does the operations (set a=b; select * from foobar; 
). 

The SQLOperation runs last and set SessionState.out and err to be System.out 
and System.err . Ref:  
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client terminates without closing the session. (In our case, a 
SemanticException triggered it).  The deleteContext is called, which closes the 
session:  Ref 
[ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]

The Session closes all the operations, starting with HiveCommandOperation.  
This one closes all the streams, which is System.out and System.err as set by 
SQLOperation earlier.  Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
 

After this, no more HiveServer2 output appears as System.out and System.err are 
closed.


> Sudden disconnect for a session with set and SQL operation cuts off any more 
> HiveServer2 output
> ---
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
> our custom client, which did not handle closing the operation or session on the 
> error case. It may also happen for any client that simply disconnects in the 
> middle of this operation.
> Basically you have a session with both a HiveCommandOperation and a SQLOperation, 
> for example a session that runs the operations (set a=b; select * from foobar; ).
> The SQLOperation runs last and sets SessionState.out and err to System.out and 
> System.err. Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client terminates without closing the session (in our case, a 
> SemanticException triggered it). deleteContext is called, which closes the 
> session. Ref: 
> [ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]
> The session closes all the operations, starting with HiveCommandOperation. That 
> operation closes all the streams, which are System.out and System.err as set by 
> SQLOperation earlier. Ref: 
> 

[jira] [Commented] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719204#comment-16719204
 ] 

Zoltan Haindrich commented on HIVE-21022:
-

also disabled: TestRemoteHiveMetaStoreZK 

> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests use the same root namespace, so the failure is likely that 
> the root namespace becomes unavailable to one test when the other drops it. 
> The drop seems to happen automatically through the TestingServer code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719157#comment-16719157
 ] 

Hive QA commented on HIVE-20733:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951506/HIVE-20733.7.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 45 failed/errored test(s), 15615 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterPartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterTableCascade
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterViewParititon
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testColumnStatistics 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testComplexTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testComplexTypeApi 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testConcurrentMetastores
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testCreateAndGetTableWithDriver
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testCreateTableSettingId
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBLocationChange 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBOwner 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBOwnerChange 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabase 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabaseLocation 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabaseLocationWithPermissionProblems
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDropDatabaseCascadeMVMultiDB
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDropTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFilterLastPartition
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFilterSinglePartition
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFunctionWithResources
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetConfigValue 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetMetastoreUuid 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetPartitionsWithSpec
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetSchemaWithNoClassDefFoundError
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetTableObjects 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetUUIDInParallel
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testJDOPersistanceManagerCleanup
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitionNames
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitions 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitionsWihtLimitEnabled
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testNameMethods 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartitionFilter 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testRenamePartition 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testRetriableClientWithConnLifetime
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleFunction 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleTable 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleTypeApi 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testStatsFastTrivial 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSynchronized 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testTableDatabase 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testTableFilter 
(batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testUpdatePartitionStat_doesNotUpdateStats
 (batchId=227)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testValidateTableCols
 (batchId=227)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15280/testReport
Console 

[jira] [Commented] (HIVE-20733) GenericUDFOPEqualNS may not use = in plan descriptions

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719114#comment-16719114
 ] 

Hive QA commented on HIVE-20733:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
48s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 3 
fixed = 1 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15280/dev-support/hive-personality.sh
 |
| git revision | master / b91b5f9 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15280/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> GenericUDFOPEqualNS may not use = in plan descriptions
> --
>
> Key: HIVE-20733
> URL: https://issues.apache.org/jira/browse/HIVE-20733
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-20733.2.patch, HIVE-20733.3.patch, 
> HIVE-20733.4.patch, HIVE-20733.5.patch, HIVE-20733.6.patch, 
> HIVE-20733.7.patch, HIVE-20733.patch
>
>
> Right now GenericUDFOPEqualNS is displayed as "=" in explain output; however, 
> it should be "<=>".
> This may cause some confusion.
> Related qtest: is_distinct_from.q
> Same: GenericUDFOPNotEqualNS



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21034) Add option to schematool to drop Hive databases

2018-12-12 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719096#comment-16719096
 ] 

Alan Gates commented on HIVE-21034:
---

Creating a single command that will destroy all a user's data is very 
dangerous.  What's the use case?

> Add option to schematool to drop Hive databases
> ---
>
> Key: HIVE-21034
> URL: https://issues.apache.org/jira/browse/HIVE-21034
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
>
> An option to remove all Hive managed data could be a useful addition to 
> {{schematool}}.
> I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all 
> databases with CASCADE* to remove all data of managed tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21030) Add credential store env properties redaction in JobConf

2018-12-12 Thread Denys Kuzmenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-21030:
--
Attachment: HIVE-21030.4.patch

> Add credential store env properties redaction in JobConf
> 
>
> Key: HIVE-21030
> URL: https://issues.apache.org/jira/browse/HIVE-21030
> Project: Hive
>  Issue Type: Bug
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-21030.1.patch, HIVE-21030.2.patch, 
> HIVE-21030.3.patch, HIVE-21030.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719087#comment-16719087
 ] 

Hive QA commented on HIVE-21022:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951497/HIVE-21022.04

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15615 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15279/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15279/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15279/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951497 - PreCommit-HIVE-Build

> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests use the same root namespace, so the failure is likely that 
> the root namespace becomes unavailable to one test when the other drops it. 
> The drop seems to happen automatically through the TestingServer code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19968) UDF exception is not throw out

2018-12-12 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719083#comment-16719083
 ] 

Laszlo Bodor commented on HIVE-19968:
-

thanks [~kgyrtkirk]!

[~sandflee]: I've attached 01.patch, in which I replaced the ignore with 
throwing an exception and improved logging.
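
For illustration only, a sketch of the kind of change described above (the method and class names here are assumptions; the attached patch is authoritative and also improves logging):

{code:java}
// Sketch only: instead of swallowing the reflection failure, surface it as a
// HiveException so the query fails loudly rather than silently producing no data.
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.util.ReflectionUtils;

public class ReflectFailureSketch {
  static Object instantiate(Class<?> c) throws HiveException {
    try {
      return ReflectionUtils.newInstance(c, null);
    } catch (Exception e) {
      throw new HiveException("Could not instantiate " + c.getName()
          + " via reflection", e);
    }
  }
}
{code}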



> UDF exception is not throw out
> --
>
> Key: HIVE-19968
> URL: https://issues.apache.org/jira/browse/HIVE-19968
> Project: Hive
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-19968.01.patch, hive-udf.png
>
>
> UDF init failed and threw an exception, but Hive caught it and did nothing, 
> so the application succeeded even though no data was generated.
> {code}
> GenericUDFReflect.java#evaluate()
> try {  
>    o = null;  
>    o = ReflectionUtils.newInstance(c, null);
> }   catch (Exception e) {  
> // ignored
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Antal Sinkovits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Sinkovits updated HIVE-21035:
---
Attachment: HIVE-21035.01.patch

> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch
>
>
> It can happen that, when multiple queries are executed in a given session, a 
> race condition causes multiple Spark application masters to be kicked off.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes and keeps consuming resources.
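
Not the attached patch, only a generic sketch of the usual guard for this kind of race: make the check-then-create step atomic so at most one session is launched per Hive session. All names below are assumptions.

{code:java}
// Hypothetical holder: synchronizing getOrCreate() ensures two concurrent
// queries cannot both observe "no session yet" and each start their own
// Spark application master.
public final class SparkSessionHolder {
  private Object session;                  // stand-in for the real session type

  public synchronized Object getOrCreate() {
    if (session == null) {
      session = openNewSession();          // only the first caller creates one
    }
    return session;
  }

  private Object openNewSession() {
    return new Object();                   // placeholder for real session setup
  }
}
{code}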



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Antal Sinkovits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Sinkovits updated HIVE-21035:
---
Status: Patch Available  (was: In Progress)

> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-21035.01.patch
>
>
> It can happen that, when multiple queries are executed in a given session, a 
> race condition causes multiple Spark application masters to be kicked off.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes and keeps consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19968) UDF exception is not throw out

2018-12-12 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-19968:

Attachment: HIVE-19968.01.patch

> UDF exception is not throw out
> --
>
> Key: HIVE-19968
> URL: https://issues.apache.org/jira/browse/HIVE-19968
> Project: Hive
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-19968.01.patch, hive-udf.png
>
>
> UDF init failed and threw an exception, but Hive caught it and did nothing, 
> so the application succeeded even though no data was generated.
> {code}
> GenericUDFReflect.java#evaluate()
> try {  
>    o = null;  
>    o = ReflectionUtils.newInstance(c, null);
> }   catch (Exception e) {  
> // ignored
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19968) UDF exception is not throw out

2018-12-12 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719042#comment-16719042
 ] 

Zoltan Haindrich commented on HIVE-19968:
-

+1 pending tests

> UDF exception is not throw out
> --
>
> Key: HIVE-19968
> URL: https://issues.apache.org/jira/browse/HIVE-19968
> Project: Hive
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-19968.01.patch, hive-udf.png
>
>
> UDF init failed and threw an exception, but Hive caught it and did nothing, 
> so the application succeeded even though no data was generated.
> {code}
> GenericUDFReflect.java#evaluate()
> try {  
>    o = null;  
>    o = ReflectionUtils.newInstance(c, null);
> }   catch (Exception e) {  
> // ignored
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19968) UDF exception is not throw out

2018-12-12 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-19968:

Status: Patch Available  (was: Open)

> UDF exception is not throw out
> --
>
> Key: HIVE-19968
> URL: https://issues.apache.org/jira/browse/HIVE-19968
> Project: Hive
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-19968.01.patch, hive-udf.png
>
>
> UDF init failed and threw an exception, but Hive caught it and did nothing, 
> so the application succeeded even though no data was generated.
> {code}
> GenericUDFReflect.java#evaluate()
> try {  
>    o = null;  
>    o = ReflectionUtils.newInstance(c, null);
> }   catch (Exception e) {  
> // ignored
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21022) Fix remote metastore tests which use ZooKeeper

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719032#comment-16719032
 ] 

Hive QA commented on HIVE-21022:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
12s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
2s{color} | {color:blue} standalone-metastore/metastore-server in master has 
188 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15279/dev-support/hive-personality.sh
 |
| git revision | master / b91b5f9 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15279/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix remote metastore tests which use ZooKeeper
> --
>
> Key: HIVE-21022
> URL: https://issues.apache.org/jira/browse/HIVE-21022
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21022.01, HIVE-21022.01, HIVE-21022.01, 
> HIVE-21022.02, HIVE-21022.02.patch, HIVE-21022.03, HIVE-21022.03, 
> HIVE-21022.04
>
>
> Per [~vgarg]'s comment on HIVE-20794 at 
> https://issues.apache.org/jira/browse/HIVE-20794?focusedCommentId=16714093=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16714093,
>  the remote metastore tests using ZooKeeper are flaky. They are failing with 
> error "Got exception: org.apache.zookeeper.KeeperException$NoNodeException 
> KeeperErrorCode = NoNode for /hs2mszktest".
> Both of these tests are using the same root namespace and hence the reason 
> for this 

[jira] [Assigned] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Antal Sinkovits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Sinkovits reassigned HIVE-21035:
--


> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
>
> It can happen that, when multiple queries are executed in a given session, a 
> race condition causes multiple Spark application masters to be kicked off.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes and keeps consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-21035) Race condition in SparkUtilities#getSparkSession

2018-12-12 Thread Antal Sinkovits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21035 started by Antal Sinkovits.
--
> Race condition in SparkUtilities#getSparkSession
> 
>
> Key: HIVE-21035
> URL: https://issues.apache.org/jira/browse/HIVE-21035
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
>
> It can happen that, when multiple queries are executed in a given session, a 
> race condition causes multiple Spark application masters to be kicked off.
> In this case, the one that started earlier will not be killed when the Hive 
> session closes and keeps consuming resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21030) Add credential store env properties redaction in JobConf

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719010#comment-16719010
 ] 

Hive QA commented on HIVE-21030:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12951485/HIVE-21030.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15615 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15278/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15278/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15278/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12951485 - PreCommit-HIVE-Build

> Add credential store env properties redaction in JobConf
> 
>
> Key: HIVE-21030
> URL: https://issues.apache.org/jira/browse/HIVE-21030
> Project: Hive
>  Issue Type: Bug
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-21030.1.patch, HIVE-21030.2.patch, 
> HIVE-21030.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21034) Add option to schematool to drop Hive databases

2018-12-12 Thread Daniel Voros (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Voros reassigned HIVE-21034:
---


> Add option to schematool to drop Hive databases
> ---
>
> Key: HIVE-21034
> URL: https://issues.apache.org/jira/browse/HIVE-21034
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
>
> An option to remove all Hive managed data could be a useful addition to 
> {{schematool}}.
> I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all 
> databases with CASCADE* to remove all data of managed tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Status: Patch Available  (was: Open)

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch
>
>
> HiveMetaTool does everything in one class; it needs to be refactored into a 
> cleaner design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Sudden disconnect for a session with set and SQL operation cuts off any more HiveServer2 output

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client, which did not handle closing the session on the error case. 
It may also happen for any client that simply disconnects in the middle of this 
operation.

Basically you have a session with both a HiveCommandOperation and a SQLOperation, 
for example a session that runs the operations (set a=b; select * from foobar; ).

The SQLOperation runs last and sets SessionState.out and err to System.out and 
System.err. Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client terminates without closing the session (in our case, a 
SemanticException triggered it). deleteContext is called, which closes the 
session. Ref: 
[ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]

The session closes all the operations, starting with HiveCommandOperation. That 
operation closes all the streams, which are System.out and System.err as set by 
SQLOperation earlier. Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears because System.out and System.err 
are closed.

  was:
Its a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client that did not handle closing the session on the error case.  
But it may also happen for any client that just disconnects in the middle of 
this operation.

Basically you have a session with both HiveCommandOperation and SQLOperation.  
For example a session that does the operations (set a=b; select * from foobar; 
). 

The SQLOperation runs last and set SessionState.out and err to be System.out 
and System.err . Ref:  
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client terminates without closing the session. (In our case, a 
SemanticException triggered it).  The deleteContext is called, which closes the 
session:  Ref 
[ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]

The Session closes all the operations, starting with HiveCommandOperation.  
This one closes all the streams, which it assumes is System.out and System.err 
as set by SQLOperation earlier.  Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
 

After this, no more HiveServer2 output appears as System.out and System.err are 
closed.


> Sudden disconnect for a session with set and SQL operation cuts off any more 
> HiveServer2 output
> ---
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
> our custom client, which did not handle closing the session on the error case. 
> It may also happen for any client that simply disconnects in the middle of this 
> operation.
> Basically you have a session with both a HiveCommandOperation and a SQLOperation, 
> for example a session that runs the operations (set a=b; select * from foobar; ).
> The SQLOperation runs last and sets SessionState.out and err to System.out and 
> System.err. Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client terminates without closing the session (in our case, a 
> SemanticException triggered it). deleteContext is called, which closes the 
> session. Ref: 
> [ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]
> The session closes all the operations, starting with HiveCommandOperation. That 
> operation closes all the streams, which are System.out and System.err as set by 
> SQLOperation earlier. Ref: 
> 

[jira] [Updated] (HIVE-21033) Sudden disconnect for a session with set and SQL operation cuts off any more HiveServer2 logs

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client, which did not handle closing the session on the error case. 
It may also happen for any client that simply disconnects in the middle of this 
operation.

Basically you have a session with both a HiveCommandOperation and a SQLOperation, 
for example a session that runs the operations (set a=b; select * from foobar; ).

The SQLOperation runs last and sets SessionState.out and err to System.out and 
System.err. Ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client terminates without closing the session (in our case, a 
SemanticException triggered it). deleteContext is called, which closes the 
session. Ref: 
[ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]

The session closes all the operations, starting with HiveCommandOperation. That 
operation closes all the streams, which it assumes are System.out and System.err 
as set by SQLOperation earlier. Ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]

After this, no more HiveServer2 output appears because System.out and System.err 
are closed.

  was:
Its a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client that did not handle closing the session on the error case.

Basically you have a session with both HiveCommandOperation and SQLOperation 
(set a=b; select * from foobar; ). 

Both will set up the session's out and err, with the SQLOperation setting it to 
be System.out and System.err . ref:  
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client terminates without closing the session. (In our case, a 
SemanticException triggered it).  The deleteContext is called, which closes the 
session:  ref 
[ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]

The Session closes all the operations, starting with HiveCommandOperation.  
This one closes all the streams, which it assumes is not System.err but was set 
so by SQLOperation.  ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
 

After this, no more HiveServer2 logs appear.


> Sudden disconnect for a session with set and SQL operation cuts off any more 
> HiveServer2 logs
> -
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
> our custom client, which did not handle closing the session on the error case. 
> It may also happen for any client that simply disconnects in the middle of this 
> operation.
> Basically you have a session with both a HiveCommandOperation and a SQLOperation, 
> for example a session that runs the operations (set a=b; select * from foobar; ).
> The SQLOperation runs last and sets SessionState.out and err to System.out and 
> System.err. Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client terminates without closing the session (in our case, a 
> SemanticException triggered it). deleteContext is called, which closes the 
> session. Ref: 
> [ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]
> The session closes all the operations, starting with HiveCommandOperation. That 
> operation closes all the streams, which it assumes are System.out and 
> System.err as set by SQLOperation earlier. Ref: 
> 

[jira] [Updated] (HIVE-21033) Sudden disconnect for a session with set and SQL operation cuts off any more HiveServer2 output

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Summary: Sudden disconnect for a session with set and SQL operation cuts 
off any more HiveServer2 output  (was: Sudden disconnect for a session with set 
and SQL operation cuts off any more HiveServer2 logs)

> Sudden disconnect for a session with set and SQL operation cuts off any more 
> HiveServer2 output
> ---
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
> our custom client, which did not handle closing the session on the error case. 
> It may also happen for any client that simply disconnects in the middle of this 
> operation.
> Basically you have a session with both a HiveCommandOperation and a SQLOperation, 
> for example a session that runs the operations (set a=b; select * from foobar; ).
> The SQLOperation runs last and sets SessionState.out and err to System.out and 
> System.err. Ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client terminates without closing the session (in our case, a 
> SemanticException triggered it). deleteContext is called, which closes the 
> session. Ref: 
> [ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]
> The session closes all the operations, starting with HiveCommandOperation. That 
> operation closes all the streams, which it assumes are System.out and 
> System.err as set by SQLOperation earlier. Ref: 
> [HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
> After this, no more HiveServer2 output appears because System.out and System.err 
> are closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Attachment: HIVE-21032.01.patch

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch
>
>
> HiveMetaTool does everything in one class; it needs to be refactored into a 
> cleaner design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20586) Beeline is asking for user/pass when invoked without -u

2018-12-12 Thread Daniel Voros (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Voros updated HIVE-20586:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Same issue was solved in HIVE-20734 with a different approach.

> Beeline is asking for user/pass when invoked without -u
> ---
>
> Key: HIVE-20586
> URL: https://issues.apache.org/jira/browse/HIVE-20586
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Daniel Voros
>Assignee: Janos Gub
>Priority: Major
> Attachments: HIVE-20586.1.patch, HIVE-20586.1.patch, 
> HIVE-20586.1.patch, HIVE-20586.1.patch, HIVE-20586.2.patch, HIVE-20586.patch
>
>
> Since HIVE-18963 it's possible to define a default connection URL in 
> beeline-site.xml to be able to use beeline without specifying the HS2 JDBC 
> URL.
> When invoked with no arguments, beeline asks for a username/password on the 
> command line. When run with {{-u}} and the exact same URL as in 
> beeline-site.xml, it does not ask for a username/password.
> I think these two should behave exactly the same, given that the URL after 
> {{-u}} is the same as in beeline-site.xml:
> {code:java}
> beeline -u URL
> beeline
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-20872) Creating information_schema and sys schema via schematool fails with parser error

2018-12-12 Thread Daniel Voros (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Voros resolved HIVE-20872.
-
Resolution: Duplicate

> Creating information_schema and sys schema via schematool fails with parser 
> error
> -
>
> Key: HIVE-20872
> URL: https://issues.apache.org/jira/browse/HIVE-20872
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, Metastore, SQL
>Affects Versions: 3.1.0, 3.1.1
> Environment: Apache Hive (version 3.1.1)
> Hive JDBC (version 3.1.1)
> metastore on derby embedded, derby server, postgres server
> Apache Hadoop (version 2.9.1)
>Reporter: Carsten Steckel
>Priority: Critical
>
> It took quite some time to figure out how to install the "information_schema" 
> and "sys" schemas (thanks to 
> https://issues.apache.org/jira/browse/HIVE-16941) into Hive 3.1.0/3.1.1 on 
> HDFS/Hadoop 2.9.1, and I am still unsure whether this is the proper way to do it.
> when I execute:
>  
> {noformat}
> hive@hive-server ~> schematool -metaDbType derby -dbType hive -initSchema 
> -url jdbc:hive2://localhost:1/default -driver 
> org.apache.hive.jdbc.HiveDriver"
> {noformat}
>  I receive an error (from --verbose log):
>  
> {noformat}
> [...]
> Error: Error while compiling statement: FAILED: SemanticException 
> org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found 
> _dummy_table (state=42000,code=4)
> org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization 
> FAILED! Metastore state would be inconsistent !!
> [...]
> {noformat}
>   
> It seems that the last statement during setup of the sys schema causes the 
> issue. When executing it manually:
>  
>  
> {noformat}
> 0: jdbc:hive2://localhost:1> CREATE OR REPLACE VIEW `VERSION` AS SELECT 1 
> AS `VER_ID`, '3.1.0' AS `SCHEMA_VERSION`, 'Hive release version 3.1.0' AS 
> `VERSION_COMMENT`;
> Error: Error while compiling statement: FAILED: SemanticException 
> org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found 
> _dummy_table (state=42000,code=4)
> {noformat}
>  
> I have tried to switch the metastore_db from derby embedded to derby server 
> to postgresql and made sure the changed metadatabases each worked, but 
> setting up the information_schema and sys schemas always delivers the same 
> error.
> Executing only the select part without the create view works:
>  
> {noformat}
> 0: jdbc:hive2://localhost:1> SELECT 1 AS `VER_ID`, '3.1.0' AS 
> `SCHEMA_VERSION`, 'Hive release version 3.1.0' AS `VERSION_COMMENT`;
> +-+-+-+
> | ver_id  | schema_version  |   version_comment   |
> +-+-+-+
> | 1   | 3.1.0   | Hive release version 3.1.0  |
> +-+-+-+
> 1 row selected (0.595 seconds)
> {noformat}
> It seems to be related to: HIVE-19444
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-19444) Create View - Table not found _dummy_table

2018-12-12 Thread Daniel Voros (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Voros resolved HIVE-19444.
-
Resolution: Duplicate

I've also seen this with 3.1.1, but wasn't able to reproduce with current 
master where HIVE-20010 is fixed.

> Create View - Table not found _dummy_table
> --
>
> Key: HIVE-19444
> URL: https://issues.apache.org/jira/browse/HIVE-19444
> Project: Hive
>  Issue Type: Bug
>  Components: Views
>Affects Versions: 1.1.0
>Reporter: BELUGA BEHR
>Priority: Major
>
> {code:sql}
> CREATE VIEW view_s1 AS select 1;
> -- FAILED: SemanticException 
> org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found 
> _dummy_table
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20914) MRScratchDir permission denied when "hive.server2.enable.doAs", "hive.exec.submitviachild" are set to "true" and impersonated/proxy user is used

2018-12-12 Thread Denys Kuzmenko (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718974#comment-16718974
 ] 

Denys Kuzmenko commented on HIVE-20914:
---

[~pvary], could you please review? Thank you!

> MRScratchDir permission denied when "hive.server2.enable.doAs", 
> "hive.exec.submitviachild" are set to "true" and impersonated/proxy user is 
> used
> 
>
> Key: HIVE-20914
> URL: https://issues.apache.org/jira/browse/HIVE-20914
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-20914.1.patch, HIVE-20914.10.patch, 
> HIVE-20914.2.patch, HIVE-20914.3.patch, HIVE-20914.4.patch, 
> HIVE-20914.5.patch, HIVE-20914.6.patch, HIVE-20914.7.patch, 
> HIVE-20914.8.patch, HIVE-20914.9.patch
>
>
> The above issue can be reproduced in a non-Kerberos cluster using the steps 
> below:
> 1. Set "hive.exec.submitviachild" to "true".
> 2. Run a count query as a user other than "hive".
> {code}beeline -u 'jdbc:hive2://localhost:1' -n hdfs{code}
> There is no issue when the same query is executed as the "hive" user.
> {code:java}
> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hive, access=EXECUTE, inode="/tmp/hive/hdfs":hdfs:supergroup:drwx------
>   at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
>   at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
>   at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:201)
>   at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:154)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3877)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3860)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:3847)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:6822)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4551)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4529)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4502)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:884)
>   at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:328)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
>   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
>   at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:285)
>   at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:328)
>   at org.apache.hadoop.hive.ql.Context.getMRTmpPath(Context.java:444)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:243)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:771)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
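To make the failing step concrete, here is a hedged reconstruction of the reproduction; the table name is illustrative, and the property values come from the issue summary and the steps above:

{code:sql}
-- Server side (hive-site.xml): hive.server2.enable.doAs=true and
-- hive.exec.submitviachild=true, per the issue summary.
-- Client side: connect with beeline as a non-"hive" user (e.g. -n hdfs), then:
SELECT COUNT(*) FROM some_table;
-- The count launches an MR job; with submitviachild enabled the spawned
-- ExecDriver process tries to create the MR scratch dir and fails with the
-- AccessControlException shown above (user=hive denied on /tmp/hive/hdfs).
{code}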



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Attachment: (was: HIVE-21032.01.patch)

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
>
> HiveMetaTool does everything in one class; it needs to be refactored into a 
> cleaner design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20588) Queries INSERT INTO table1 SELECT * FROM table2 LIMIT A, B insert one more row than they should

2018-12-12 Thread Karen Coppage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718971#comment-16718971
 ] 

Karen Coppage commented on HIVE-20588:
--

I wasn't able to reproduce this issue; are you able to reproduce it, [~jmarhuen]?

> Queries INSERT INTO table1 SELECT * FROM table2 LIMIT A, B insert one more 
> row than they should
> ---
>
> Key: HIVE-20588
> URL: https://issues.apache.org/jira/browse/HIVE-20588
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Jaume M
>Priority: Critical
>
> {code}
> 0: jdbc:hive2://hs2.example.com:10005/> CREATE TABLE atest1 (foo BIGINT, bar 
> STRING);
> No rows affected (0.199 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> INSERT INTO atest1 VALUES (1, 
> "1"),(2, "2"),(3, "3");
> No rows affected (8.209 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> CREATE TABLE atest2 (foo BIGINT, bar 
> STRING);
> No rows affected (0.156 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> INSERT INTO atest2 SELECT * FROM 
> atest1 LIMIT 1;
> No rows affected (8.205 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> SELECT COUNT(*) FROM atest2;
> +------+
> | _c0  |
> +------+
> | 1    |
> +------+
> 1 row selected (0.133 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> TRUNCATE TABLE atest2;
> No rows affected (0.14 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> INSERT INTO atest2 SELECT * FROM 
> atest1 LIMIT 1,1;
> No rows affected (8.19 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/> SELECT COUNT(*) FROM atest2;
> +------+
> | _c0  |
> +------+
> | 2    |
> +------+
> 1 row selected (0.129 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/>
> 0: jdbc:hive2://hs2.example.com:10005/> SELECT * FROM atest1 LIMIT 1,1;
> +-------------+-------------+
> | atest1.foo  | atest1.bar  |
> +-------------+-------------+
> | 2           | 2           |
> +-------------+-------------+
> 1 row selected (0.197 seconds)
> 0: jdbc:hive2://hs2.example.com:10005/>
> {code}
> When two arguments are specified for LIMIT, one more row than expected is 
> inserted.
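Distilled from the transcript above, with the expected LIMIT semantics spelled out:

{code:sql}
-- LIMIT <offset>, <count> should skip <offset> rows and return at most <count>
-- rows, which is what the standalone SELECT above does (1 row). The INSERT
-- path lands one extra row.
TRUNCATE TABLE atest2;
INSERT INTO atest2 SELECT * FROM atest1 LIMIT 1, 1;
SELECT COUNT(*) FROM atest2;   -- expected: 1; observed in the report: 2
{code}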



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Sudden disconnect for a session with set and SQL operation cuts off any more HiveServer2 logs

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client, which did not close the session in the error case.

Basically you have a session with both a HiveCommandOperation and a SQLOperation 
(set a=b; select * from foobar;).

Both set up the session's out and err streams, with the SQLOperation setting 
them to System.out and System.err. ref: 
[SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]

Then the client terminates without closing the session (in our case, a 
SemanticException triggered it). deleteContext is called, which closes the 
session. ref: 
[ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]

The session closes all of its operations, starting with the HiveCommandOperation, 
which closes all the streams on the assumption that they are not 
System.out/System.err, even though the SQLOperation set them that way. ref: 
[HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
 

After this, no more HiveServer2 logs appear.

  was:
It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client, which did not close the session in the error case.

Basically you have a session with both a HiveCommandOperation and a SQLOperation 
(set a=b; select * from foobar;). Both set up the session's out and err streams, 
with the SQLOperation setting them to System.out and System.err

ref: 


> Sudden disconnect for a session with set and SQL operation cuts off any more 
> HiveServer2 logs
> -
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
> our custom client, which did not close the session in the error case.
> Basically you have a session with both a HiveCommandOperation and a SQLOperation 
> (set a=b; select * from foobar;).
> Both set up the session's out and err streams, with the SQLOperation setting 
> them to System.out and System.err. ref: 
> [SQLOperation#setupSessionIO|https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L139]
> Then the client terminates without closing the session (in our case, a 
> SemanticException triggered it). deleteContext is called, which closes the 
> session. ref: 
> [ThriftBinaryCLIService#deleteContext|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java#L141]
> The session closes all of its operations, starting with the HiveCommandOperation, 
> which closes all the streams on the assumption that they are not 
> System.out/System.err, even though the SQLOperation set them that way. ref: 
> [HiveCommandOperation#tearDownSessionIO|https://github.com/apache/hive/blob/f37c5de6c32b9395d1b34fa3c02ed06d1bfbf6eb/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java#L101]
>  
> After this, no more HiveServer2 logs appear.
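A hedged sketch of the triggering session, with the statements taken from the description ("foobar" is the reporter's placeholder table):

{code:sql}
set a=b;               -- handled by HiveCommandOperation
select * from foobar;  -- handled by SQLOperation, which points the session's
                       -- out/err at System.out and System.err
-- The client then disconnects without closing the session; during cleanup
-- HiveCommandOperation#tearDownSessionIO closes those streams, which is what
-- silences all further HiveServer2 logging.
{code}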



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21030) Add credential store env properties redaction in JobConf

2018-12-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718956#comment-16718956
 ] 

Hive QA commented on HIVE-21030:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} common: The patch generated 2 new + 6 unchanged - 0 
fixed = 8 total (was 6) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15278/dev-support/hive-personality.sh
 |
| git revision | master / b91b5f9 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15278/yetus/diff-checkstyle-common.txt
 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15278/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Add credential store env properties redaction in JobConf
> 
>
> Key: HIVE-21030
> URL: https://issues.apache.org/jira/browse/HIVE-21030
> Project: Hive
>  Issue Type: Bug
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-21030.1.patch, HIVE-21030.2.patch, 
> HIVE-21030.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21032) Refactor HiveMetaTool

2018-12-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21032:
--
Attachment: HIVE-21032.01.patch

> Refactor HiveMetaTool
> -
>
> Key: HIVE-21032
> URL: https://issues.apache.org/jira/browse/HIVE-21032
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HIVE-21032.01.patch
>
>
> HiveMetaTool does everything in one class; it needs to be refactored into a 
> cleaner design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21033) Sudden disconnect for a session with set and SQL operation cuts off any more HiveServer2 logs

2018-12-12 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-21033:
-
Description: 
It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
our custom client, which did not close the session in the error case.

Basically you have a session with both a HiveCommandOperation and a SQLOperation 
(set a=b; select * from foobar;). Both set up the session's out and err streams, 
with the SQLOperation setting them to System.out and System.err

ref: 

> Sudden disconnect for a session with set and SQL operation cuts off any more 
> HiveServer2 logs
> -
>
> Key: HIVE-21033
> URL: https://issues.apache.org/jira/browse/HIVE-21033
> Project: Hive
>  Issue Type: Bug
>Reporter: Szehon Ho
>Priority: Major
>
> It's a bit tricky to reproduce, but we were able to do it (unfortunately) with 
> our custom client, which did not close the session in the error case.
> Basically you have a session with both a HiveCommandOperation and a SQLOperation 
> (set a=b; select * from foobar;). Both set up the session's out and err 
> streams, with the SQLOperation setting them to System.out and System.err
> ref: 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

