[jira] [Commented] (HIVE-20585) Fix column stat flucutuation in list_bucket_dml_4.q

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788566#comment-16788566
 ] 

Hive QA commented on HIVE-20585:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961786/HIVE-20585.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 15822 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_ppr] (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_9] 
(batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part1] 
(batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part2] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part3] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_1] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_1] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonmr_fetch] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[offset_limit_global_optimizer]
 (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[outer_join_ppr] 
(batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[regex_col] (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats2] (batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats4] (batchId=87)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_in_process_launcher]
 (batchId=190)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16426/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16426/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16426/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961786 - PreCommit-HIVE-Build

> Fix column stat flucutuation in list_bucket_dml_4.q
> ---
>
> Key: HIVE-20585
> URL: https://issues.apache.org/jira/browse/HIVE-20585
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20585.01.patch
>
>
> If column stats are fetched (HIVE-17084), then list_bucket_dml_4.q's output is 
> fluctuating; sometimes it has column stats COMPLETE, sometimes just PARTIAL.
> Running it locally produces COMPLETE; running it together with join33.q causes 
> list_bucket_dml_4.q to degrade to PARTIAL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20585) Fix column stat flucutuation in list_bucket_dml_4.q

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788555#comment-16788555
 ] 

Hive QA commented on HIVE-20585:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
35s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
13s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
48s{color} | {color:blue} itests/util in master has 48 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
29s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 1 new + 1657 unchanged - 0 fixed = 1658 total (was 1657) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 15 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16426/dev-support/hive-personality.sh
 |
| git revision | master / fa62461 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16426/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16426/yetus/whitespace-eol.txt
 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server itests/hcatalog-unit itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16426/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix column stat flucutuation in list_bucket_dml_4.q
> ---
>
> Key: HIVE-20585
> URL: https://issues.apache.org/jira/browse/HIVE-20585
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20585.01.patch
>
>
> If column stats are fetched(HIVE-17084) then 

[jira] [Commented] (HIVE-21339) LLAP: Cache hit also initializes an FS object

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788532#comment-16788532
 ] 

Hive QA commented on HIVE-21339:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961784/HIVE-21339.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15821 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16425/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16425/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16425/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961784 - PreCommit-HIVE-Build

> LLAP: Cache hit also initializes an FS object 
> --
>
> Key: HIVE-21339
> URL: https://issues.apache.org/jira/browse/HIVE-21339
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21339.1.patch, HIVE-21339.2.patch, 
> HIVE-21339.3.patch, HIVE-21339.4.patch, HIVE-21339.5.patch, 
> llap-cache-fs-get.png, llap-query7-cached.svg
>
>
> https://github.com/apache/hive/blob/master/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java#L214
> {code}
> // 1. Get file metadata from cache, or create the reader and read it.
> // Don't cache the filesystem object for now; Tez closes it and FS cache 
> will fix all that
> fs = split.getPath().getFileSystem(jobConf);
> fileKey = determineFileId(fs, split,
> HiveConf.getBoolVar(daemonConf, 
> ConfVars.LLAP_CACHE_ALLOW_SYNTHETIC_FILEID),
> HiveConf.getBoolVar(daemonConf, 
> ConfVars.LLAP_CACHE_DEFAULT_FS_FILE_ID),
> !HiveConf.getBoolVar(daemonConf, ConfVars.LLAP_IO_USE_FILEID_PATH)
> );
> {code}
>  !llap-cache-fs-get.png! 
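
One way to avoid paying the FileSystem initialization cost on a cache hit is to defer the 
getFileSystem() call until a cache miss actually needs it. A minimal sketch of that idea 
(the wrapper class and names here are hypothetical, not the attached patch):

{code:java}
import java.io.IOException;
import java.util.function.Supplier;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical wrapper: creates the FileSystem only when first asked for. */
final class LazyFileSystem implements Supplier<FileSystem> {
  private final Path path;
  private final Configuration conf;
  private FileSystem fs; // initialized lazily, only on a cache miss

  LazyFileSystem(Path path, Configuration conf) {
    this.path = path;
    this.conf = conf;
  }

  @Override
  public synchronized FileSystem get() {
    if (fs == null) {
      try {
        fs = path.getFileSystem(conf); // the expensive call from the snippet above
      } catch (IOException e) {
        throw new IllegalStateException("Could not open filesystem for " + path, e);
      }
    }
    return fs;
  }
}
{code}

On a metadata cache hit the supplier is never invoked, so no FS object is materialized for that split.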



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21339) LLAP: Cache hit also initializes an FS object

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788520#comment-16788520
 ] 

Hive QA commented on HIVE-21339:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
13s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} llap-server in master has 79 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 2 new + 111 unchanged - 0 
fixed = 113 total (was 111) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} llap-server: The patch generated 1 new + 31 unchanged 
- 0 fixed = 32 total (was 31) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16425/dev-support/hive-personality.sh
 |
| git revision | master / e7f7fe3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16425/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16425/yetus/diff-checkstyle-llap-server.txt
 |
| modules | C: ql llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16425/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LLAP: Cache hit also initializes an FS object 
> --
>
> Key: HIVE-21339
> URL: https://issues.apache.org/jira/browse/HIVE-21339
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21339.1.patch, HIVE-21339.2.patch, 
> HIVE-21339.3.patch, HIVE-21339.4.patch, HIVE-21339.5.patch, 
> llap-cache-fs-get.png, llap-query7-cached.svg
>
>
> https://github.com/apache/hive/blob/master/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java#L214
> {code}
> // 1. Get file metadata from cache, or 

[jira] [Updated] (HIVE-21286) Hive should support clean-up of previously bootstrapped tables when retry from different dump.

2019-03-08 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21286:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

05.patch is committed to master.
Thanks [~ashutosh.bapat] and [~maheshk114] for the review!

> Hive should support clean-up of previously bootstrapped tables when retry 
> from different dump.
> --
>
> Key: HIVE-21286
> URL: https://issues.apache.org/jira/browse/HIVE-21286
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21286.01.patch, HIVE-21286.02.patch, 
> HIVE-21286.03.patch, HIVE-21286.04.patch, HIVE-21286.05.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> If external tables are enabled for replication on an existing repl policy, 
> then bootstrapping of external tables is combined with the incremental dump.
> If the incremental bootstrap load fails with a non-retryable error, the user 
> has to manually drop all the external tables before retrying with another 
> bootstrap dump. For a full bootstrap, retrying with a different dump was 
> handled by suggesting the user drop the DB, but in this case they would need 
> to manually drop all the external tables, which is not user friendly. So this 
> needs to be handled on the Hive side as follows.
> REPL LOAD takes an additional config (passed by the user in the WITH clause) 
> that says: drop all the tables which were bootstrapped from the previous dump. 
> hive.repl.clean.tables.from.bootstrap=
> Hive will use this config only if the current dump is a bootstrap combined 
> with an incremental dump.
> Caution: the user should not pass this config if the previous REPL LOAD (with 
> bootstrap) was successful, or if any successful incremental dump+load happened 
> after "previous_bootstrap_dump_dir".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21416) Log git apply tries with p0, p1, and p2

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788499#comment-16788499
 ] 

Hive QA commented on HIVE-21416:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961770/HIVE-21416.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15821 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16424/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16424/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16424/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961770 - PreCommit-HIVE-Build

> Log git apply tries with p0, p1, and p2
> ---
>
> Key: HIVE-21416
> URL: https://issues.apache.org/jira/browse/HIVE-21416
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21416.01.patch
>
>
> Currently, when the PreCommit-HIVE-Build Jenkins job tries to apply the patch, 
> it tries first with -p0, then, if that wasn't successful, with -p1, and finally, 
> if that still wasn't successful, with -p2. The three tries are not separated by 
> anything, so the error messages of the potential failures are mixed together. 
> There should be a log message before each try.
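
A minimal sketch of that idea (a hypothetical helper, not necessarily how the attached 
patch does it): print a marker before each attempt so the -p0, -p1 and -p2 failures can 
be told apart in the console output.

{code:java}
import java.io.File;
import java.io.IOException;

final class PatchApplier {
  /** Tries git apply with -p0, -p1 and -p2 in turn, logging before each attempt. */
  static boolean apply(File patch) throws IOException, InterruptedException {
    for (int p = 0; p <= 2; p++) {
      System.out.println("Trying to apply patch with: git apply -p" + p);
      Process proc = new ProcessBuilder("git", "apply", "-p" + p, patch.getAbsolutePath())
          .inheritIO()   // keep git's own error output visible under the marker line
          .start();
      if (proc.waitFor() == 0) {
        return true;     // applied cleanly with this -p level
      }
    }
    return false;        // none of the three levels worked
  }
}
{code}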



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21416) Log git apply tries with p0, p1, and p2

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788485#comment-16788485
 ] 

Hive QA commented on HIVE-21416:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16424/dev-support/hive-personality.sh
 |
| git revision | master / e7f7fe3 |
| modules | C: testutils/ptest2 U: testutils/ptest2 |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16424/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Log git apply tries with p0, p1, and p2
> ---
>
> Key: HIVE-21416
> URL: https://issues.apache.org/jira/browse/HIVE-21416
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21416.01.patch
>
>
> Currently, when the PreCommit-HIVE-Build Jenkins job tries to apply the patch, 
> it tries first with -p0, then, if that wasn't successful, with -p1, and finally, 
> if that still wasn't successful, with -p2. The three tries are not separated by 
> anything, so the error messages of the potential failures are mixed together. 
> There should be a log message before each try.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788483#comment-16788483
 ] 

Hive QA commented on HIVE-21401:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961769/HIVE-21401.05.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 168 failed/errored test(s), 15757 tests 
executed
*Failed tests:*
{noformat}
TestCommands - did not produce a TEST-*.xml file (likely timed out) 
(batchId=204)
TestEximReplicationTasks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=204)
TestHCatClient - did not produce a TEST-*.xml file (likely timed out) 
(batchId=204)
TestNoopCommand - did not produce a TEST-*.xml file (likely timed out) 
(batchId=204)
TestReplicationTask - did not produce a TEST-*.xml file (likely timed out) 
(batchId=204)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key2]
 (batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key]
 (batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_index] 
(batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=267)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=275)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ambiguitycheck] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_table] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_union_table] 
(batchId=29)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas] (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas_colname] 
(batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas_uses_database_location]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[drop_deleted_partitions] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[drop_multi_partitions] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[escape_comments] 
(batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_ddl] (batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fileformat_sequencefile] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fileformat_text] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_duplicate_key] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_reducers_power_two]
 (batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input10] (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input15] (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input1] (batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input2] (batchId=96)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[inputddl1] (batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[inputddl2] (batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[inputddl3] (batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[inputddl6] (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[json_serde1] (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge3] (batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_iow_temp] (batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonReservedKeyWords] 
(batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonmr_fetch] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullformatCTAS] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullformat] (batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_createas1] 
(batchId=95)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parallel_orderby] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_2_exim_basic] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_3_exim_metadata] 
(batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[serde_opencsv] 
(batchId=81)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[serde_regex] (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_create_table_alter] 
(batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_create_table_db_table]
 (batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_create_table_delimited]
 (batchId=31)

[jira] [Commented] (HIVE-21264) Improvements Around CharTypeInfo

2019-03-08 Thread David Mollitor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788480#comment-16788480
 ] 

David Mollitor commented on HIVE-21264:
---

[~gopalv] Thanks!  Ya, doing it this way saves a bunch of code and makes the 
entire thing cleaner.  Also, the unit test is helpful for demonstration and 
proof.

I submitted the patch once more, hoping for all-green.

> Improvements Around CharTypeInfo
> 
>
> Key: HIVE-21264
> URL: https://issues.apache.org/jira/browse/HIVE-21264
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-21264.1.patch, HIVE-21264.2.patch, 
> HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CharTypeInfo}} stores the type name of the data type (char/varchar) and 
> the length (1-255).  {{CharTypeInfo}} objects are often cached once they are 
> created.
> The {{hashCode()}} and {{equals()}} of its sub-classes varchar and char are 
> inconsistent.
> * Make hashCode and equals consistent (and fast)
> * Simplify the {{getQualifiedName}} implementation and reduce its scope to 
> protected
> * Other related nits
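
For the first bullet, a consistent pair simply derives {{hashCode()}} and {{equals()}} 
from the same two fields. A simplified stand-in class (hypothetical names, not the actual 
org.apache.hadoop.hive.serde2.typeinfo implementation):

{code:java}
import java.util.Objects;

/** Simplified stand-in for CharTypeInfo: equals() and hashCode() use the same fields. */
class SimpleCharTypeInfo {
  private final String typeName; // "char" or "varchar"
  private final int length;      // 1-255

  SimpleCharTypeInfo(String typeName, int length) {
    this.typeName = typeName;
    this.length = length;
  }

  @Override
  public boolean equals(Object other) {
    if (this == other) {
      return true;
    }
    if (!(other instanceof SimpleCharTypeInfo)) {
      return false;
    }
    SimpleCharTypeInfo that = (SimpleCharTypeInfo) other;
    return length == that.length && typeName.equals(that.typeName);
  }

  @Override
  public int hashCode() {
    // Same fields as equals(), so equal instances always hash alike and caching stays safe.
    return Objects.hash(typeName, length);
  }
}
{code}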



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788479#comment-16788479
 ] 

Hive QA commented on HIVE-21401:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
22s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
54s{color} | {color:blue} itests/util in master has 48 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
55s{color} | {color:red} ql: The patch generated 21 new + 1602 unchanged - 115 
fixed = 1623 total (was 1717) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/core: The patch generated 3 new + 44 
unchanged - 4 fixed = 47 total (was 48) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
37s{color} | {color:red} ql generated 4 new + 2256 unchanged - 2 fixed = 2260 
total (was 2258) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hcatalog/core generated 1 new + 28 unchanged - 1 fixed 
= 29 total (was 29) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  org.apache.hadoop.hive.ql.ddl.DDLOperation.writeToFile(String, String) 
may fail to close stream  At DDLOperation.java:close stream  At 
DDLOperation.java:[line 183] |
|  |  org.apache.hadoop.hive.ql.ddl.DDLOperation.propertiesToString(Map, List) 
makes inefficient use of keySet iterator instead of entrySet iterator  At 
DDLOperation.java:of keySet iterator instead of entrySet iterator  At 
DDLOperation.java:[line 168] |
|  |  Class org.apache.hadoop.hive.ql.plan.DropPartitionDesc defines 
non-transient non-serializable instance field partSpecs  In 
DropPartitionDesc.java:instance field partSpecs  In DropPartitionDesc.java |
|  |  Class org.apache.hadoop.hive.ql.plan.DropPartitionDesc defines 
non-transient non-serializable instance field replicationSpec  In 
DropPartitionDesc.java:instance field replicationSpec  In 
DropPartitionDesc.java |
| FindBugs | module:hcatalog/core |
|  |  Redundant nullcheck of desc, which is known to be non-null in 
org.apache.hive.hcatalog.cli.SemanticAnalysis.CreateTableHook.postAnalyze(HiveSemanticAnalyzerHookContext,
 List)  Redundant null check at CreateTableHook.java:is known to be non-null in 

[jira] [Commented] (HIVE-21264) Improvements Around CharTypeInfo

2019-03-08 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788478#comment-16788478
 ] 

Gopal V commented on HIVE-21264:


LGTM - +1

> Improvements Around CharTypeInfo
> 
>
> Key: HIVE-21264
> URL: https://issues.apache.org/jira/browse/HIVE-21264
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-21264.1.patch, HIVE-21264.2.patch, 
> HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CharTypeInfo}} stores the type name of the data type (char/varchar) and 
> the length (1-255).  {{CharTypeInfo}} objects are often cached once they are 
> created.
> The {{hashCode()}} and {{equals()}} of its sub-classes varchar and char are 
> inconsistent.
> * Make hashCode and equals consistent (and fast)
> * Simplify the {{getQualifiedName}} implementation and reduce its scope to 
> protected
> * Other related nits



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21264) Improvements Around CharTypeInfo

2019-03-08 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-21264:
--
Status: Patch Available  (was: Open)

> Improvements Around CharTypeInfo
> 
>
> Key: HIVE-21264
> URL: https://issues.apache.org/jira/browse/HIVE-21264
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-21264.1.patch, HIVE-21264.2.patch, 
> HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CharTypeInfo}} stores the type name of the data type (char/varchar) and 
> the length (1-255).  {{CharTypeInfo}} objects are often cached once they are 
> created.
> The {{hashCode()}} and {{equals()}} of its sub-classes varchar and char are 
> inconsistent.
> * Make hashCode and equals consistent (and fast)
> * Simplify the {{getQualifiedName}} implementation and reduce its scope to 
> protected
> * Other related nits



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21264) Improvements Around CharTypeInfo

2019-03-08 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-21264:
--
Attachment: HIVE-21264.3.patch

> Improvements Around CharTypeInfo
> 
>
> Key: HIVE-21264
> URL: https://issues.apache.org/jira/browse/HIVE-21264
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-21264.1.patch, HIVE-21264.2.patch, 
> HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CharTypeInfo}} stores the type name of the data type (char/varchar) and 
> the length (1-255).  {{CharTypeInfo}} objects are often cached once they are 
> created.
> The {{hashCode()}} and {{equals()}} of its sub-classes varchar and char are 
> inconsistent.
> * Make hashCode and equals consistent (and fast)
> * Simplify the {{getQualifiedName}} implementation and reduce its scope to 
> protected
> * Other related nits



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21264) Improvements Around CharTypeInfo

2019-03-08 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-21264:
--
Status: Open  (was: Patch Available)

> Improvements Around CharTypeInfo
> 
>
> Key: HIVE-21264
> URL: https://issues.apache.org/jira/browse/HIVE-21264
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-21264.1.patch, HIVE-21264.2.patch, 
> HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CharTypeInfo}} stores the type name of the data type (char/varchar) and 
> the length (1-255).  {{CharTypeInfo}} objects are often cached once they are 
> created.
> The {{hashCode()}} and {{equals()}} of its sub-classes varchar and char are 
> inconsistent.
> * Make hashCode and equals consistent (and fast)
> * Simplify the {{getQualifiedName}} implementation and reduce its scope to 
> protected
> * Other related nits



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21416) Log git apply tries with p0, p1, and p2

2019-03-08 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788472#comment-16788472
 ] 

Ashutosh Chauhan commented on HIVE-21416:
-

+1

> Log git apply tries with p0, p1, and p2
> ---
>
> Key: HIVE-21416
> URL: https://issues.apache.org/jira/browse/HIVE-21416
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21416.01.patch
>
>
> Currently, when the PreCommit-HIVE-Build Jenkins job tries to apply the patch, 
> it tries first with -p0, then, if that wasn't successful, with -p1, and finally, 
> if that still wasn't successful, with -p2. The three tries are not separated by 
> anything, so the error messages of the potential failures are mixed together. 
> There should be a log message before each try.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21264) Improvements Around CharTypeInfo

2019-03-08 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788471#comment-16788471
 ] 

Gopal V commented on HIVE-21264:


bq.  There is no need to do this check explicitly in the child classes.

Cool, that was not obvious from the patch - so I'll throw in a comment there.

> Improvements Around CharTypeInfo
> 
>
> Key: HIVE-21264
> URL: https://issues.apache.org/jira/browse/HIVE-21264
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-21264.1.patch, HIVE-21264.2.patch, 
> HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CharTypeInfo}} stores the type name of the data type (char/varchar) and 
> the length (1-255).  {{CharTypeInfo}} objects are often cached once they are 
> created.
> The {{hashCode()}} and {{equals()}} of its sub-classes varchar and char are 
> inconsistent.
> * Make hashCode and equals consistent (and fast)
> * Simplify the {{getQualifiedName}} implementation and reduce its scope to 
> protected
> * Other related nits



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21390:
-
Attachment: HIVE-21390.4.patch

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch, HIVE-21390.4.patch
>
>
> BI split strategy cuts the splits at block boundaries; however, there are no 
> block boundaries in blob storage, so we end up with 1 split for the BI split 
> strategy. 
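
A minimal illustration of one possible fallback (sizes and names are hypothetical, not 
the attached patch): when a file reports no usable block boundaries, cut it into 
fixed-size ranges instead of emitting a single split.

{code:java}
import java.util.ArrayList;
import java.util.List;

final class SyntheticSplits {
  /** Returns (offset, length) pairs covering fileLen in chunks of at most splitSize bytes. */
  static List<long[]> compute(long fileLen, long splitSize) {
    List<long[]> splits = new ArrayList<>();
    for (long offset = 0; offset < fileLen; offset += splitSize) {
      long length = Math.min(splitSize, fileLen - offset);
      splits.add(new long[] {offset, length});
    }
    return splits;
  }
}
{code}

For example, compute(10L << 30, 256L << 20) would yield 40 ranges for a 10 GB object 
instead of the single split produced today.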



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21417) JDBC: standalone jar relocates log4j interface classes

2019-03-08 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-21417:
---
Component/s: JDBC

> JDBC: standalone jar relocates log4j interface classes
> --
>
> Key: HIVE-21417
> URL: https://issues.apache.org/jira/browse/HIVE-21417
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Reporter: Gopal V
>Priority: Major
>
> The relocation of slf4j for ILoggerFactory breaks embedding in JVMs which 
> have an slf4j impl locally:
> org/apache/hive/org/slf4j/ILoggerFactory
> Adding this jar to an existing slf4j environment breaks it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20585) Fix column stat flucutuation in list_bucket_dml_4.q

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20585:
--
Status: Patch Available  (was: Open)

> Fix column stat flucutuation in list_bucket_dml_4.q
> ---
>
> Key: HIVE-20585
> URL: https://issues.apache.org/jira/browse/HIVE-20585
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20585.01.patch
>
>
> If column stats are fetched (HIVE-17084), then list_bucket_dml_4.q's output is 
> fluctuating; sometimes it has column stats COMPLETE, sometimes just PARTIAL.
> Running it locally produces COMPLETE; running it together with join33.q causes 
> list_bucket_dml_4.q to degrade to PARTIAL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20656) Sensible defaults: Map aggregation memory configs are too aggressive

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20656:
-
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> Sensible defaults: Map aggregation memory configs are too aggressive
> 
>
> Key: HIVE-20656
> URL: https://issues.apache.org/jira/browse/HIVE-20656
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20656.1.patch, HIVE-20656.2.patch, 
> HIVE-20656.3.patch
>
>
> The defaults for the following configs seem to be too aggressive. In Java 
> this can easily lead to several full GC pauses in which little memory can be 
> reclaimed.
> {code:java}
> HIVEMAPAGGRHASHMEMORY("hive.map.aggr.hash.percentmemory", (float) 0.99,
> "Portion of total memory to be used by map-side group aggregation hash 
> table"),
> HIVEMAPAGGRMEMORYTHRESHOLD("hive.map.aggr.hash.force.flush.memory.threshold", 
> (float) 0.9,
> "The max memory to be used by map-side group aggregation hash table.\n" +
> "If the memory usage is higher than this number, force to flush 
> data"),{code}
>  
> We can be a little bit more conservative with these configs to avoid getting 
> into GC pauses. 
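
Until the defaults change, the two values can also be overridden explicitly. A small 
sketch using {{HiveConf}} to set the two {{ConfVars}} quoted above (the 0.5 / 0.7 values 
are illustrative only, not the defaults proposed by the patch):

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public class ConservativeMapAggrConf {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Illustrative, more conservative values than the 0.99 / 0.9 defaults quoted above.
    conf.setFloatVar(HiveConf.ConfVars.HIVEMAPAGGRHASHMEMORY, 0.5f);
    conf.setFloatVar(HiveConf.ConfVars.HIVEMAPAGGRMEMORYTHRESHOLD, 0.7f);
    System.out.println(conf.getFloatVar(HiveConf.ConfVars.HIVEMAPAGGRHASHMEMORY));
  }
}
{code}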



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20585) Fix column stat flucutuation in list_bucket_dml_4.q

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely reassigned HIVE-20585:
-

Assignee: Miklos Gergely  (was: Zoltan Haindrich)

> Fix column stat flucutuation in list_bucket_dml_4.q
> ---
>
> Key: HIVE-20585
> URL: https://issues.apache.org/jira/browse/HIVE-20585
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Miklos Gergely
>Priority: Major
>
> If column stats are fetched (HIVE-17084), then list_bucket_dml_4.q's output is 
> fluctuating; sometimes it has column stats COMPLETE, sometimes just PARTIAL.
> Running it locally produces COMPLETE; running it together with join33.q causes 
> list_bucket_dml_4.q to degrade to PARTIAL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20656) Sensible defaults: Map aggregation memory configs are too aggressive

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788462#comment-16788462
 ] 

Hive QA commented on HIVE-20656:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961767/HIVE-20656.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15821 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16422/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16422/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16422/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961767 - PreCommit-HIVE-Build

> Sensible defaults: Map aggregation memory configs are too aggressive
> 
>
> Key: HIVE-20656
> URL: https://issues.apache.org/jira/browse/HIVE-20656
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20656.1.patch, HIVE-20656.2.patch, 
> HIVE-20656.3.patch
>
>
> The defaults for the following configs seem to be too aggressive. In Java 
> this can easily lead to several full GC pauses in which little memory can be 
> reclaimed.
> {code:java}
> HIVEMAPAGGRHASHMEMORY("hive.map.aggr.hash.percentmemory", (float) 0.99,
> "Portion of total memory to be used by map-side group aggregation hash 
> table"),
> HIVEMAPAGGRMEMORYTHRESHOLD("hive.map.aggr.hash.force.flush.memory.threshold", 
> (float) 0.9,
> "The max memory to be used by map-side group aggregation hash table.\n" +
> "If the memory usage is higher than this number, force to flush 
> data"),{code}
>  
> We can be a little bit more conservative with these configs to avoid getting 
> into GC pauses. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20585) Fix column stat flucutuation in list_bucket_dml_4.q

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20585:
--
Attachment: HIVE-20585.01.patch

> Fix column stat flucutuation in list_bucket_dml_4.q
> ---
>
> Key: HIVE-20585
> URL: https://issues.apache.org/jira/browse/HIVE-20585
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20585.01.patch
>
>
> If column stats are fetched (HIVE-17084), then list_bucket_dml_4.q's output is 
> fluctuating; sometimes it has column stats COMPLETE, sometimes just PARTIAL.
> Running it locally produces COMPLETE; running it together with join33.q causes 
> list_bucket_dml_4.q to degrade to PARTIAL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21339) LLAP: Cache hit also initializes an FS object

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21339:
-
Attachment: HIVE-21339.5.patch

> LLAP: Cache hit also initializes an FS object 
> --
>
> Key: HIVE-21339
> URL: https://issues.apache.org/jira/browse/HIVE-21339
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21339.1.patch, HIVE-21339.2.patch, 
> HIVE-21339.3.patch, HIVE-21339.4.patch, HIVE-21339.5.patch, 
> llap-cache-fs-get.png, llap-query7-cached.svg
>
>
> https://github.com/apache/hive/blob/master/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java#L214
> {code}
> // 1. Get file metadata from cache, or create the reader and read it.
> // Don't cache the filesystem object for now; Tez closes it and FS cache 
> will fix all that
> fs = split.getPath().getFileSystem(jobConf);
> fileKey = determineFileId(fs, split,
> HiveConf.getBoolVar(daemonConf, 
> ConfVars.LLAP_CACHE_ALLOW_SYNTHETIC_FILEID),
> HiveConf.getBoolVar(daemonConf, 
> ConfVars.LLAP_CACHE_DEFAULT_FS_FILE_ID),
> !HiveConf.getBoolVar(daemonConf, ConfVars.LLAP_IO_USE_FILEID_PATH)
> );
> {code}
>  !llap-cache-fs-get.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21339) LLAP: Cache hit also initializes an FS object

2019-03-08 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788452#comment-16788452
 ] 

Prasanth Jayachandran commented on HIVE-21339:
--

{code:java}
Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: 
java.lang.IllegalArgumentException: Unable to create serializer 
"org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for 
class: org.apache.hadoop.hive.ql.io.TeradataBinaryFileOutputFormat
Serialization trace:
outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc)
tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc)
conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator)
childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.GroupByOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.ReduceSinkOperator)
parentOperators (org.apache.hadoop.hive.ql.exec.GroupByOperator)
reducer (org.apache.hadoop.hive.ql.plan.ReduceWork)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:101)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:80)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:80)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:80)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:80)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:80)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:80)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at 
org.apache.hive.com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552)
at 

[jira] [Updated] (HIVE-21415) Parallel build is failing, trying to download incorrect hadoop-hdfs-client version

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21415:
-
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

The test failure is unrelated. The patch is benign, as it touches only a few pom 
files for the parallel build issue. Committed to master. 

> Parallel build is failing, trying to download incorrect hadoop-hdfs-client 
> version
> --
>
> Key: HIVE-21415
> URL: https://issues.apache.org/jira/browse/HIVE-21415
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21415.1.patch
>
>
> Running the following build command
> {code:java}
> mvn clean install -Pdist -DskipTests -Dpackaging.minimizeJar=false -T 1C 
> -DskipShade -Dremoteresources.skip=true -Dmaven.javadoc.skip=true{code}
> fails with the following exception for 3 modules (hplql, kryo-registrator, 
> packaging)
> {code:java}
> [ERROR] Failed to execute goal on project hive-packaging: Could not resolve 
> dependencies for project org.apache.hive:hive-packaging:pom:4.0.0-SNAPSHOT: 
> Failure to find org.apache.hadoop:hadoop-hdfs-client:jar:2.7.3 in 
> http://www.datanucleus.org/downloads/maven2 was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> datanucleus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hive-packaging{code}
>  
> It is trying to download 2.7.3 version but hadoop.version refers to 3.1.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20656) Sensible defaults: Map aggregation memory configs are too aggressive

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788449#comment-16788449
 ] 

Hive QA commented on HIVE-20656:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
14s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16422/dev-support/hive-personality.sh
 |
| git revision | master / cdd8fa5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16422/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Sensible defaults: Map aggregation memory configs are too aggressive
> 
>
> Key: HIVE-20656
> URL: https://issues.apache.org/jira/browse/HIVE-20656
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20656.1.patch, HIVE-20656.2.patch, 
> HIVE-20656.3.patch
>
>
> The defaults for the following configs seem to be too aggressive. In Java this 
> can easily lead to several full GC pauses in which memory cannot be reclaimed.
> {code:java}
> HIVEMAPAGGRHASHMEMORY("hive.map.aggr.hash.percentmemory", (float) 0.99,
>     "Portion of total memory to be used by map-side group aggregation hash table"),
> HIVEMAPAGGRMEMORYTHRESHOLD("hive.map.aggr.hash.force.flush.memory.threshold", (float) 0.9,
>     "The max memory to be used by map-side group aggregation hash table.\n" +
>     "If the memory usage is higher than this number, force to flush data"),{code}
>  
> We can be a little bit conservative with these configs to avoid getting into GC 
> pauses. 
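
As a rough illustration of what these thresholds govern, here is a minimal sketch (plain JDK, not Hive's actual GroupByOperator code) of a map-side aggregation buffer that flushes once heap usage crosses a configurable fraction; the class name and flush logic are hypothetical.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Hive code: shows how a force-flush fraction such as
// hive.map.aggr.hash.force.flush.memory.threshold is typically applied, and why
// a value close to 1.0 leaves little headroom before full GC pauses.
public class MapAggrFlushSketch {
  private final Map<String, Long> buffer = new HashMap<>();
  private final float flushThreshold; // e.g. 0.9f today; a lower value is more conservative

  public MapAggrFlushSketch(float flushThreshold) {
    this.flushThreshold = flushThreshold;
  }

  public void aggregate(String key, long value) {
    buffer.merge(key, value, Long::sum);
    if (usedHeapFraction() > flushThreshold) {
      flush(); // spill partial aggregates before the JVM hits a full-GC cliff
    }
  }

  private double usedHeapFraction() {
    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory();
    return (double) used / rt.maxMemory();
  }

  private void flush() {
    // A real operator would forward the partial aggregates downstream;
    // here the in-memory table is simply cleared.
    buffer.clear();
  }
}
{code}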



--
This message 

[jira] [Commented] (HIVE-21415) Parallel build is failing, trying to download incorrect hadoop-hdfs-client version

2019-03-08 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788428#comment-16788428
 ] 

Vineet Garg commented on HIVE-21415:


+1

> Parallel build is failing, trying to download incorrect hadoop-hdfs-client 
> version
> --
>
> Key: HIVE-21415
> URL: https://issues.apache.org/jira/browse/HIVE-21415
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21415.1.patch
>
>
> Running the following build command
> {code:java}
> mvn clean install -Pdist -DskipTests -Dpackaging.minimizeJar=false -T 1C 
> -DskipShade -Dremoteresources.skip=true -Dmaven.javadoc.skip=true{code}
> fails with the following exception for 3 modules (hplql, kryo-registrator, 
> packaging)
> {code:java}
> [ERROR] Failed to execute goal on project hive-packaging: Could not resolve 
> dependencies for project org.apache.hive:hive-packaging:pom:4.0.0-SNAPSHOT: 
> Failure to find org.apache.hadoop:hadoop-hdfs-client:jar:2.7.3 in 
> http://www.datanucleus.org/downloads/maven2 was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> datanucleus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hive-packaging{code}
>  
> It is trying to download 2.7.3 version but hadoop.version refers to 3.1.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21415) Parallel build is failing, trying to download incorrect hadoop-hdfs-client version

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788432#comment-16788432
 ] 

Hive QA commented on HIVE-21415:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961766/HIVE-21415.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15821 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.mapreduce.TestHCatMutableNonPartitioned.testHCatNonPartitionedTable[2]
 (batchId=214)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16421/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16421/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16421/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961766 - PreCommit-HIVE-Build

> Parallel build is failing, trying to download incorrect hadoop-hdfs-client 
> version
> --
>
> Key: HIVE-21415
> URL: https://issues.apache.org/jira/browse/HIVE-21415
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21415.1.patch
>
>
> Running the following build command
> {code:java}
> mvn clean install -Pdist -DskipTests -Dpackaging.minimizeJar=false -T 1C 
> -DskipShade -Dremoteresources.skip=true -Dmaven.javadoc.skip=true{code}
> fails with the following exception for 3 modules (hplql, kryo-registrator, 
> packaging)
> {code:java}
> [ERROR] Failed to execute goal on project hive-packaging: Could not resolve 
> dependencies for project org.apache.hive:hive-packaging:pom:4.0.0-SNAPSHOT: 
> Failure to find org.apache.hadoop:hadoop-hdfs-client:jar:2.7.3 in 
> http://www.datanucleus.org/downloads/maven2 was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> datanucleus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hive-packaging{code}
>  
> It is trying to download 2.7.3 version but hadoop.version refers to 3.1.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21415) Parallel build is failing, trying to download incorrect hadoop-hdfs-client version

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788416#comment-16788416
 ] 

Hive QA commented on HIVE-21415:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16421/dev-support/hive-personality.sh
 |
| git revision | master / cdd8fa5 |
| Default Java | 1.8.0_111 |
| modules | C: hplsql kryo-registrator packaging U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16421/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Parallel build is failing, trying to download incorrect hadoop-hdfs-client 
> version
> --
>
> Key: HIVE-21415
> URL: https://issues.apache.org/jira/browse/HIVE-21415
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21415.1.patch
>
>
> Running the following build command
> {code:java}
> mvn clean install -Pdist -DskipTests -Dpackaging.minimizeJar=false -T 1C 
> -DskipShade -Dremoteresources.skip=true -Dmaven.javadoc.skip=true{code}
> fails with the following exception for 3 modules (hplql, kryo-registrator, 
> packaging)
> {code:java}
> [ERROR] Failed to execute goal on project hive-packaging: Could not resolve 
> dependencies for project org.apache.hive:hive-packaging:pom:4.0.0-SNAPSHOT: 
> Failure to find org.apache.hadoop:hadoop-hdfs-client:jar:2.7.3 in 
> http://www.datanucleus.org/downloads/maven2 was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> datanucleus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hive-packaging{code}
>  
> It is trying to download 2.7.3 version but 

[jira] [Commented] (HIVE-21339) LLAP: Cache hit also initializes an FS object

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788405#comment-16788405
 ] 

Hive QA commented on HIVE-21339:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961765/HIVE-21339.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15821 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[test_teradatabinaryfile] 
(batchId=2)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16420/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16420/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16420/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961765 - PreCommit-HIVE-Build

> LLAP: Cache hit also initializes an FS object 
> --
>
> Key: HIVE-21339
> URL: https://issues.apache.org/jira/browse/HIVE-21339
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21339.1.patch, HIVE-21339.2.patch, 
> HIVE-21339.3.patch, HIVE-21339.4.patch, llap-cache-fs-get.png, 
> llap-query7-cached.svg
>
>
> https://github.com/apache/hive/blob/master/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java#L214
> {code}
> // 1. Get file metadata from cache, or create the reader and read it.
> // Don't cache the filesystem object for now; Tez closes it and FS cache will fix all that
> fs = split.getPath().getFileSystem(jobConf);
> fileKey = determineFileId(fs, split,
>     HiveConf.getBoolVar(daemonConf, ConfVars.LLAP_CACHE_ALLOW_SYNTHETIC_FILEID),
>     HiveConf.getBoolVar(daemonConf, ConfVars.LLAP_CACHE_DEFAULT_FS_FILE_ID),
>     !HiveConf.getBoolVar(daemonConf, ConfVars.LLAP_IO_USE_FILEID_PATH));
> {code}
>  !llap-cache-fs-get.png! 
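
A minimal sketch of the idea, assuming a simple footer cache keyed by path (the class and cache below are illustrative, not the actual OrcEncodedDataReader code): the FileSystem object is resolved only on a cache miss, so a cache hit never pays for the getFileSystem() call.

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch, not the actual LLAP reader code.
public class LazyFsMetadataLookup {
  private final Map<Path, Object> metadataCache = new ConcurrentHashMap<>();

  public Object getFileMetadata(Path path, Configuration conf) throws IOException {
    Object cached = metadataCache.get(path);
    if (cached != null) {
      return cached; // cache hit: no FileSystem object is ever created
    }
    // Cache miss: only now resolve the FileSystem and read the file footer.
    FileSystem fs = path.getFileSystem(conf);
    Object metadata = readFooter(fs, path);
    metadataCache.put(path, metadata);
    return metadata;
  }

  private Object readFooter(FileSystem fs, Path path) {
    // Placeholder for the real footer read.
    return new Object();
  }
}
{code}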



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21339) LLAP: Cache hit also initializes an FS object

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788386#comment-16788386
 ] 

Hive QA commented on HIVE-21339:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
23s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
48s{color} | {color:blue} llap-server in master has 79 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 2 new + 111 unchanged - 0 
fixed = 113 total (was 111) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} llap-server: The patch generated 1 new + 31 unchanged 
- 0 fixed = 32 total (was 31) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16420/dev-support/hive-personality.sh
 |
| git revision | master / cdd8fa5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16420/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16420/yetus/diff-checkstyle-llap-server.txt
 |
| modules | C: ql llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16420/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> LLAP: Cache hit also initializes an FS object 
> --
>
> Key: HIVE-21339
> URL: https://issues.apache.org/jira/browse/HIVE-21339
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21339.1.patch, HIVE-21339.2.patch, 
> HIVE-21339.3.patch, HIVE-21339.4.patch, llap-cache-fs-get.png, 
> llap-query7-cached.svg
>
>
> https://github.com/apache/hive/blob/master/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java#L214
> {code}
> // 1. Get file metadata from cache, or create the reader 

[jira] [Commented] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788375#comment-16788375
 ] 

Gopal V commented on HIVE-21390:


{code}
java.lang.RuntimeException: ORC split generation failed with exception: Not 
implemented by the RawFileSystem FileSystem implementation
{code}

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts the split at block boundaries however there are no 
> block boundaries in blob storage so we end up with 1 split for BI split 
> strategy. 
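
For illustration, a minimal sketch of cutting splits at a fixed synthetic size when the file system reports no block boundaries (the class name and the 64 MB constant are assumptions, not the actual OrcInputFormat change):

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileSplit;

// Hypothetical sketch: instead of relying on block boundaries (absent on blob
// stores, which collapses everything into a single split), cut a split every
// SYNTHETIC_SPLIT_SIZE bytes.
public class SyntheticSplitSketch {
  private static final long SYNTHETIC_SPLIT_SIZE = 64L * 1024 * 1024;

  public static List<FileSplit> generateSplits(FileStatus file) {
    Path path = file.getPath();
    long length = file.getLen();
    List<FileSplit> splits = new ArrayList<>();
    for (long offset = 0; offset < length; offset += SYNTHETIC_SPLIT_SIZE) {
      long splitLength = Math.min(SYNTHETIC_SPLIT_SIZE, length - offset);
      splits.add(new FileSplit(path, offset, splitLength, new String[0]));
    }
    return splits;
  }
}
{code}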



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21416) Log git apply tries with p0, p1, and p2

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely reassigned HIVE-21416:
-


> Log git apply tries with p0, p1, and p2
> ---
>
> Key: HIVE-21416
> URL: https://issues.apache.org/jira/browse/HIVE-21416
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
>
> Currently when the PreCommit-HIVE-Build Jenkins job is trying to apply the 
> patch it tries it first with -p0, then if it wasn't successful with -p1, then 
> finally if it still wasn't successful with -p2. The 3 tries are not separated 
> by anything, so the error messages of  the potential failures are mixed 
> together. There should be a log message before each try.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21416) Log git apply tries with p0, p1, and p2

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21416:
--
Status: Patch Available  (was: Open)

> Log git apply tries with p0, p1, and p2
> ---
>
> Key: HIVE-21416
> URL: https://issues.apache.org/jira/browse/HIVE-21416
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21416.01.patch
>
>
> Currently when the PreCommit-HIVE-Build Jenkins job is trying to apply the 
> patch it tries it first with -p0, then if it wasn't successful with -p1, then 
> finally if it still wasn't successful with -p2. The 3 tries are not separated 
> by anything, so the error messages of  the potential failures are mixed 
> together. There should be a log message before each try.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21416) Log git apply tries with p0, p1, and p2

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21416:
--
Attachment: HIVE-21416.01.patch

> Log git apply tries with p0, p1, and p2
> ---
>
> Key: HIVE-21416
> URL: https://issues.apache.org/jira/browse/HIVE-21416
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21416.01.patch
>
>
> Currently when the PreCommit-HIVE-Build Jenkins job is trying to apply the 
> patch it tries it first with -p0, then if it wasn't successful with -p1, then 
> finally if it still wasn't successful with -p2. The 3 tries are not separated 
> by anything, so the error messages of  the potential failures are mixed 
> together. There should be a log message before each try.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788378#comment-16788378
 ] 

Prasanth Jayachandran commented on HIVE-21390:
--

My bad. Looking at the failure, I ran TestStreaming in the top-level streaming 
module instead of the hcatalog streaming module. Will fix it. 

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts the split at block boundaries however there are no 
> block boundaries in blob storage so we end up with 1 split for BI split 
> strategy. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788375#comment-16788375
 ] 

Gopal V edited comment on HIVE-21390 at 3/8/19 11:24 PM:
-

{code}
java.lang.RuntimeException: ORC split generation failed with exception: Not 
implemented by the RawFileSystem FileSystem implementation
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1881)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1969)
at 
org.apache.hive.hcatalog.streaming.mutate.StreamingAssert.readRecords(StreamingAssert.java:161)
{code}


was (Author: gopalv):
{code}
java.lang.RuntimeException: ORC split generation failed with exception: Not 
implemented by the RawFileSystem FileSystem implementation
{code}

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts the split at block boundaries however there are no 
> block boundaries in blob storage so we end up with 1 split for BI split 
> strategy. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788366#comment-16788366
 ] 

Hive QA commented on HIVE-21390:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961762/HIVE-21390.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 15821 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.updateSelectUpdate 
(batchId=326)
org.apache.hive.hcatalog.streaming.TestStreaming.testInterleavedTransactionBatchCommits
 (batchId=216)
org.apache.hive.hcatalog.streaming.TestStreaming.testMultipleTransactionBatchCommits
 (batchId=216)
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testUpdatesAndDeletes 
(batchId=216)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16419/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16419/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16419/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961762 - PreCommit-HIVE-Build

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts the split at block boundaries however there are no 
> block boundaries in blob storage so we end up with 1 split for BI split 
> strategy. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21401:
--
Attachment: HIVE-21401.05.patch

> Break up DDLTask - extract Table related operations
> ---
>
> Key: HIVE-21401
> URL: https://issues.apache.org/jira/browse/HIVE-21401
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21401.01.patch, HIVE-21401.02.patch, 
> HIVE-21401.03.patch, HIVE-21401.04.patch, HIVE-21401.05.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #2: extract all the table related operations from the old DDLTask except 
> alter table, and move them under the new package. Also create the new 
> internal framework.
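
A minimal sketch of the structure described above, using hypothetical names rather than the final HIVE-21401 classes: an immutable desc per request, one small operation class per command, and a dispatcher that stays agnostic to the concrete operations.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; names and signatures are not the actual Hive classes.
interface DdlDesc { }                          // marker for immutable request objects

final class DropTableDesc implements DdlDesc {
  private final String tableName;
  DropTableDesc(String tableName) { this.tableName = tableName; }
  String getTableName() { return tableName; }
}

interface DdlOperation<T extends DdlDesc> {
  int execute(T desc);                         // 0 on success, like Hive task return codes
}

final class DropTableOperation implements DdlOperation<DropTableDesc> {
  @Override
  public int execute(DropTableDesc desc) {
    System.out.println("dropping table " + desc.getTableName());
    return 0;
  }
}

final class DdlDispatcher {
  private final Map<Class<? extends DdlDesc>, DdlOperation<? extends DdlDesc>> ops = new HashMap<>();

  <T extends DdlDesc> void register(Class<T> descClass, DdlOperation<T> op) {
    ops.put(descClass, op);
  }

  @SuppressWarnings("unchecked")
  <T extends DdlDesc> int execute(T desc) {
    DdlOperation<T> op = (DdlOperation<T>) ops.get(desc.getClass());
    if (op == null) {
      throw new IllegalArgumentException("no operation registered for " + desc.getClass());
    }
    return op.execute(desc);
  }
}
{code}

Usage would be along the lines of registering DropTableOperation for DropTableDesc.class and then calling execute(new DropTableDesc("t1")); the dispatcher itself never needs to know which concrete operations exist.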



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21401:
--
Status: Open  (was: Patch Available)

> Break up DDLTask - extract Table related operations
> ---
>
> Key: HIVE-21401
> URL: https://issues.apache.org/jira/browse/HIVE-21401
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21401.01.patch, HIVE-21401.02.patch, 
> HIVE-21401.03.patch, HIVE-21401.04.patch, HIVE-21401.05.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #2: extract all the table related operations from the old DDLTask except 
> alter table, and move them under the new package. Also create the new 
> internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21401:
--
Status: Patch Available  (was: Open)

> Break up DDLTask - extract Table related operations
> ---
>
> Key: HIVE-21401
> URL: https://issues.apache.org/jira/browse/HIVE-21401
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21401.01.patch, HIVE-21401.02.patch, 
> HIVE-21401.03.patch, HIVE-21401.04.patch, HIVE-21401.05.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #2: extract all the table related operations from the old DDLTask except 
> alter table, and move them under the new package. Also create the new 
> internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788356#comment-16788356
 ] 

Hive QA commented on HIVE-21390:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
21s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 5 new + 355 unchanged - 1 
fixed = 360 total (was 356) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16419/dev-support/hive-personality.sh
 |
| git revision | master / cdd8fa5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16419/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16419/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts the split at block boundaries however there are no 
> block boundaries in blob storage so we end up with 1 split for BI split 
> strategy. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20656) Sensible defaults: Map aggregation memory configs are too aggressive

2019-03-08 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788348#comment-16788348
 ] 

Prasanth Jayachandran commented on HIVE-20656:
--

Tried running both failed tests locally and they don't seem to fail. Not sure if 
this is already fixed in master. Giving it another try. 

> Sensible defaults: Map aggregation memory configs are too aggressive
> 
>
> Key: HIVE-20656
> URL: https://issues.apache.org/jira/browse/HIVE-20656
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20656.1.patch, HIVE-20656.2.patch, 
> HIVE-20656.3.patch
>
>
> The defaults for the following configs seem to be too aggressive. In Java this 
> can easily lead to several full GC pauses in which memory cannot be reclaimed.
> {code:java}
> HIVEMAPAGGRHASHMEMORY("hive.map.aggr.hash.percentmemory", (float) 0.99,
>     "Portion of total memory to be used by map-side group aggregation hash table"),
> HIVEMAPAGGRMEMORYTHRESHOLD("hive.map.aggr.hash.force.flush.memory.threshold", (float) 0.9,
>     "The max memory to be used by map-side group aggregation hash table.\n" +
>     "If the memory usage is higher than this number, force to flush data"),{code}
>  
> We can be a little bit conservative with these configs to avoid getting into GC 
> pauses. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21409) Initial SessionState ClassLoader Reused For Subsequent Sessions

2019-03-08 Thread Shawn Weeks (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788350#comment-16788350
 ] 

Shawn Weeks commented on HIVE-21409:


I've changed registerJars in SessionState to get the classLoader from 
SessionState.getConf().getClassLoader() instead of from the thread context class 
loader, and it seems to have cleared up the class loader pollution. I'm not 100% 
sure what it's doing, or why the thread context at that point doesn't already 
have the session state's class loader.
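
A minimal sketch of the distinction described above (class and method names are hypothetical, not the actual SessionState code): the parent class loader is taken from the session's Configuration rather than from the calling thread's context, so jars added by one session do not leak into the loader inherited by later sessions.

{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.util.List;

import org.apache.hadoop.conf.Configuration;

// Hypothetical illustration of the two parent-loader choices.
public class SessionJarRegistration {

  // Problematic variant: the thread context loader may still be the loader of the
  // very first session, so every URL added here pollutes it for all later sessions.
  static ClassLoader registerWithThreadContext(List<URL> jarUrls) {
    ClassLoader parent = Thread.currentThread().getContextClassLoader();
    return new URLClassLoader(jarUrls.toArray(new URL[0]), parent);
  }

  // Variant matching the change described above: start from the loader already
  // attached to this session's Configuration, keeping each session's additions isolated.
  static ClassLoader registerWithSessionConf(Configuration sessionConf, List<URL> jarUrls) {
    ClassLoader parent = sessionConf.getClassLoader();
    return new URLClassLoader(jarUrls.toArray(new URL[0]), parent);
  }
}
{code}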

> Initial SessionState ClassLoader Reused For Subsequent Sessions
> ---
>
> Key: HIVE-21409
> URL: https://issues.apache.org/jira/browse/HIVE-21409
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Shawn Weeks
>Priority: Minor
> Attachments: create_class.sql, run.sql, setup.sql
>
>
> It appears that the first ClassLoader attached to a SessionState Static 
> Instance is being reused as the parent for all future sessions. This causes 
> any libraries added to the class path on the initial session to be added to 
> future sessions. It also appears that further sessions may be adding jars to 
> this initial ClassLoader as well leading to the class path getting more and 
> more polluted. This occurring on a build including HIVE-11878. I've included 
> some examples that greatly exaggerate the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20656) Sensible defaults: Map aggregation memory configs are too aggressive

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20656:
-
Attachment: HIVE-20656.3.patch

> Sensible defaults: Map aggregation memory configs are too aggressive
> 
>
> Key: HIVE-20656
> URL: https://issues.apache.org/jira/browse/HIVE-20656
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20656.1.patch, HIVE-20656.2.patch, 
> HIVE-20656.3.patch
>
>
> The defaults for the following configs seem to be too aggressive. In Java this 
> can easily lead to several full GC pauses in which memory cannot be reclaimed.
> {code:java}
> HIVEMAPAGGRHASHMEMORY("hive.map.aggr.hash.percentmemory", (float) 0.99,
>     "Portion of total memory to be used by map-side group aggregation hash table"),
> HIVEMAPAGGRMEMORYTHRESHOLD("hive.map.aggr.hash.force.flush.memory.threshold", (float) 0.9,
>     "The max memory to be used by map-side group aggregation hash table.\n" +
>     "If the memory usage is higher than this number, force to flush data"),{code}
>  
> We can be a little bit conservative with these configs to avoid getting into GC 
> pauses. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21415) Parallel build is failing, trying to download incorrect hadoop-hdfs-client version

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-21415:



> Parallel build is failing, trying to download incorrect hadoop-hdfs-client 
> version
> --
>
> Key: HIVE-21415
> URL: https://issues.apache.org/jira/browse/HIVE-21415
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>
> Running the following build command
> {code:java}
> mvn clean install -Pdist -DskipTests -Dpackaging.minimizeJar=false -T 1C 
> -DskipShade -Dremoteresources.skip=true -Dmaven.javadoc.skip=true{code}
> fails with the following exception for 3 modules (hplql, kryo-registrator, 
> packaging)
> {code:java}
> [ERROR] Failed to execute goal on project hive-packaging: Could not resolve 
> dependencies for project org.apache.hive:hive-packaging:pom:4.0.0-SNAPSHOT: 
> Failure to find org.apache.hadoop:hadoop-hdfs-client:jar:2.7.3 in 
> http://www.datanucleus.org/downloads/maven2 was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> datanucleus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hive-packaging{code}
>  
> It is trying to download 2.7.3 version but hadoop.version refers to 3.1.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21415) Parallel build is failing, trying to download incorrect hadoop-hdfs-client version

2019-03-08 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788345#comment-16788345
 ] 

Prasanth Jayachandran commented on HIVE-21415:
--

[~vgarg] could you please review? Explicitly specifying the version seems to be 
working.

> Parallel build is failing, trying to download incorrect hadoop-hdfs-client 
> version
> --
>
> Key: HIVE-21415
> URL: https://issues.apache.org/jira/browse/HIVE-21415
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21415.1.patch
>
>
> Running the following build command
> {code:java}
> mvn clean install -Pdist -DskipTests -Dpackaging.minimizeJar=false -T 1C 
> -DskipShade -Dremoteresources.skip=true -Dmaven.javadoc.skip=true{code}
> fails with the following exception for 3 modules (hplql, kryo-registrator, 
> packaging)
> {code:java}
> [ERROR] Failed to execute goal on project hive-packaging: Could not resolve 
> dependencies for project org.apache.hive:hive-packaging:pom:4.0.0-SNAPSHOT: 
> Failure to find org.apache.hadoop:hadoop-hdfs-client:jar:2.7.3 in 
> http://www.datanucleus.org/downloads/maven2 was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> datanucleus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hive-packaging{code}
>  
> It is trying to download 2.7.3 version but hadoop.version refers to 3.1.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21415) Parallel build is failing, trying to download incorrect hadoop-hdfs-client version

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21415:
-
Attachment: HIVE-21415.1.patch

> Parallel build is failing, trying to download incorrect hadoop-hdfs-client 
> version
> --
>
> Key: HIVE-21415
> URL: https://issues.apache.org/jira/browse/HIVE-21415
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21415.1.patch
>
>
> Running the following build command
> {code:java}
> mvn clean install -Pdist -DskipTests -Dpackaging.minimizeJar=false -T 1C 
> -DskipShade -Dremoteresources.skip=true -Dmaven.javadoc.skip=true{code}
> fails with the following exception for 3 modules (hplql, kryo-registrator, 
> packaging)
> {code:java}
> [ERROR] Failed to execute goal on project hive-packaging: Could not resolve 
> dependencies for project org.apache.hive:hive-packaging:pom:4.0.0-SNAPSHOT: 
> Failure to find org.apache.hadoop:hadoop-hdfs-client:jar:2.7.3 in 
> http://www.datanucleus.org/downloads/maven2 was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> datanucleus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hive-packaging{code}
>  
> It is trying to download 2.7.3 version but hadoop.version refers to 3.1.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21415) Parallel build is failing, trying to download incorrect hadoop-hdfs-client version

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21415:
-
Status: Patch Available  (was: Open)

> Parallel build is failing, trying to download incorrect hadoop-hdfs-client 
> version
> --
>
> Key: HIVE-21415
> URL: https://issues.apache.org/jira/browse/HIVE-21415
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21415.1.patch
>
>
> Running the following build command
> {code:java}
> mvn clean install -Pdist -DskipTests -Dpackaging.minimizeJar=false -T 1C 
> -DskipShade -Dremoteresources.skip=true -Dmaven.javadoc.skip=true{code}
> fails with the following exception for 3 modules (hplql, kryo-registrator, 
> packaging)
> {code:java}
> [ERROR] Failed to execute goal on project hive-packaging: Could not resolve 
> dependencies for project org.apache.hive:hive-packaging:pom:4.0.0-SNAPSHOT: 
> Failure to find org.apache.hadoop:hadoop-hdfs-client:jar:2.7.3 in 
> http://www.datanucleus.org/downloads/maven2 was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> datanucleus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hive-packaging{code}
>  
> It is trying to download version 2.7.3, but hadoop.version refers to 3.1.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21048) Remove needless org.mortbay.jetty from hadoop exclusions

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788339#comment-16788339
 ] 

Hive QA commented on HIVE-21048:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 41 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
12s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 16m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16418/dev-support/hive-personality.sh
 |
| git revision | master / cdd8fa5 |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16418/yetus/whitespace-tabs.txt
 |
| modules | C: storage-api common llap-tez ql service jdbc hcatalog 
hcatalog/core hcatalog/hcatalog-pig-adapter hcatalog/webhcat/svr . 
itests/qtest-druid U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16418/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Remove needless org.mortbay.jetty from hadoop exclusions
> 
>
> Key: HIVE-21048
> URL: https://issues.apache.org/jira/browse/HIVE-21048
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-21048.01.patch, HIVE-21048.02.patch, 
> HIVE-21048.03.patch, HIVE-21048.04.patch, HIVE-21048.05.patch, 
> HIVE-21048.06.patch, HIVE-21048.07.patch, HIVE-21048.08.patch, 
> HIVE-21048.08.patch, HIVE-21048.09.patch, dep.out
>
>
> During HIVE-20638 I found that org.mortbay.jetty exclusions from e.g. hadoop 
> don't take effect, as the actual groupId of jetty is org.eclipse.jetty for 
> most of the current projects. Please see the attachment (an example for the hive 
> commons project).
> https://en.wikipedia.org/wiki/Jetty_(web_server)#History



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21409) Initial SessionState ClassLoader Reused For Subsequent Sessions

2019-03-08 Thread Shawn Weeks (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Weeks updated HIVE-21409:
---
Description: It appears that the first ClassLoader attached to the 
SessionState static instance is being reused as the parent for all future 
sessions. This causes any libraries added to the class path in the initial 
session to be added to future sessions. It also appears that further sessions 
may be adding jars to this initial ClassLoader as well, leading to the class 
path getting more and more polluted. This is occurring on a build including 
HIVE-11878. I've included some examples that greatly exaggerate the problem.  
(was: While trying to reproduce another bug I've run across something 
interesting. It appears that the first session to a HiveServer2 instance after 
startup is able to contaminate the class path for subsequent sessions. I've 
written a small Groovy UDF to dump the current session class path as well as 
its parents, and in the attached example any jar added to the class path in the 
initial session is present in future sessions. I've tried adding other jars as 
well and the behavior is there for all of them.

To demonstrate, run setup.sql, then restart the HiveServer2 instance. Then run 
run.sql and notice the last query after reconnecting. I've only tested this so far 
on the HDP 2.6.5 release of Hive, but it may be present in other versions.)
Summary: Initial SessionState ClassLoader Reused For Subsequent 
Sessions  (was: First Hive Session Class Path Additions Added to All Sessions)

> Initial SessionState ClassLoader Reused For Subsequent Sessions
> ---
>
> Key: HIVE-21409
> URL: https://issues.apache.org/jira/browse/HIVE-21409
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Shawn Weeks
>Priority: Minor
> Attachments: create_class.sql, run.sql, setup.sql
>
>
> It appears that the first ClassLoader attached to the SessionState static 
> instance is being reused as the parent for all future sessions. This causes 
> any libraries added to the class path in the initial session to be added to 
> future sessions. It also appears that further sessions may be adding jars to 
> this initial ClassLoader as well, leading to the class path getting more and 
> more polluted. This is occurring on a build including HIVE-11878. I've included 
> some examples that greatly exaggerate the problem.
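
To make the pattern above concrete, here is a minimal, self-contained sketch; the class names are hypothetical and this is not Hive's SessionState code, only an illustration of how a shared parent class loader leaks jars added by one session into every later session.

{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;
import java.util.Arrays;

// Illustrative only: hypothetical names, not Hive's SessionState implementation.
public class SharedParentLoaderSketch {

  // A URLClassLoader that exposes addURL, mimicking an "ADD JAR" registration.
  static class MutableLoader extends URLClassLoader {
    MutableLoader(ClassLoader parent) {
      super(new URL[0], parent);
    }
    void addJar(URL jar) {
      addURL(jar); // mutates state that is shared once this loader becomes a parent
    }
  }

  public static void main(String[] args) throws Exception {
    // The loader created for the first session, kept in a static-like shared field.
    MutableLoader initial = new MutableLoader(SharedParentLoaderSketch.class.getClassLoader());

    // Session 1 adds a jar; the URL lands on the shared loader.
    initial.addJar(Paths.get("/tmp/session1-udf.jar").toUri().toURL());

    // Session 2 gets a fresh loader, but its parent is the polluted shared loader,
    // so session 1's jar is effectively on session 2's class path as well.
    URLClassLoader session2 = new URLClassLoader(new URL[0], initial);
    System.out.println("parent URLs visible to session 2: "
        + Arrays.toString(((URLClassLoader) session2.getParent()).getURLs()));
  }
}
{code}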



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21048) Remove needless org.mortbay.jetty from hadoop exclusions

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788332#comment-16788332
 ] 

Hive QA commented on HIVE-21048:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961744/HIVE-21048.09.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 15821 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_avro]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_csv]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_delimited]
 (batchId=275)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16418/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16418/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16418/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961744 - PreCommit-HIVE-Build

> Remove needless org.mortbay.jetty from hadoop exclusions
> 
>
> Key: HIVE-21048
> URL: https://issues.apache.org/jira/browse/HIVE-21048
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-21048.01.patch, HIVE-21048.02.patch, 
> HIVE-21048.03.patch, HIVE-21048.04.patch, HIVE-21048.05.patch, 
> HIVE-21048.06.patch, HIVE-21048.07.patch, HIVE-21048.08.patch, 
> HIVE-21048.08.patch, HIVE-21048.09.patch, dep.out
>
>
> During HIVE-20638 I found that org.mortbay.jetty exclusions from e.g. hadoop 
> don't take effect, as the actual groupId of jetty is org.eclipse.jetty for 
> most of the current projects. Please see the attachment (an example for the hive 
> commons project).
> https://en.wikipedia.org/wiki/Jetty_(web_server)#History



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21339) LLAP: Cache hit also initializes an FS object

2019-03-08 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788327#comment-16788327
 ] 

Prasanth Jayachandran commented on HIVE-21339:
--

The failure looks unrelated. Another try.

> LLAP: Cache hit also initializes an FS object 
> --
>
> Key: HIVE-21339
> URL: https://issues.apache.org/jira/browse/HIVE-21339
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21339.1.patch, HIVE-21339.2.patch, 
> HIVE-21339.3.patch, HIVE-21339.4.patch, llap-cache-fs-get.png, 
> llap-query7-cached.svg
>
>
> https://github.com/apache/hive/blob/master/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java#L214
> {code}
> // 1. Get file metadata from cache, or create the reader and read it.
> // Don't cache the filesystem object for now; Tez closes it and FS cache 
> will fix all that
> fs = split.getPath().getFileSystem(jobConf);
> fileKey = determineFileId(fs, split,
> HiveConf.getBoolVar(daemonConf, 
> ConfVars.LLAP_CACHE_ALLOW_SYNTHETIC_FILEID),
> HiveConf.getBoolVar(daemonConf, 
> ConfVars.LLAP_CACHE_DEFAULT_FS_FILE_ID),
> !HiveConf.getBoolVar(daemonConf, ConfVars.LLAP_IO_USE_FILEID_PATH)
> );
> {code}
>  !llap-cache-fs-get.png! 
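
For illustration only, a minimal sketch of one way to defer that cost; the names are hypothetical and this is not the actual HIVE-21339 patch. The filesystem-backed loader is wrapped in a supplier, so a cache hit never triggers it.

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative only: hypothetical names, not the actual HIVE-21339 change.
public class LazyFsSketch {

  private final Map<String, String> metadataCache = new HashMap<>();

  String readMetadata(String path, Supplier<String> fsBackedLoader) {
    // On a cache hit the supplier is never invoked, so no filesystem object is created.
    return metadataCache.computeIfAbsent(path, p -> fsBackedLoader.get());
  }

  public static void main(String[] args) {
    LazyFsSketch reader = new LazyFsSketch();
    Supplier<String> loader = () -> {
      System.out.println("initializing filesystem (the expensive part)");
      return "file metadata";
    };
    reader.readMetadata("/warehouse/t1/file.orc", loader); // miss: prints the message
    reader.readMetadata("/warehouse/t1/file.orc", loader); // hit: supplier is skipped
  }
}
{code}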



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21339) LLAP: Cache hit also initializes an FS object

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21339:
-
Attachment: HIVE-21339.4.patch

> LLAP: Cache hit also initializes an FS object 
> --
>
> Key: HIVE-21339
> URL: https://issues.apache.org/jira/browse/HIVE-21339
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21339.1.patch, HIVE-21339.2.patch, 
> HIVE-21339.3.patch, HIVE-21339.4.patch, llap-cache-fs-get.png, 
> llap-query7-cached.svg
>
>
> https://github.com/apache/hive/blob/master/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java#L214
> {code}
> // 1. Get file metadata from cache, or create the reader and read it.
> // Don't cache the filesystem object for now; Tez closes it and FS cache 
> will fix all that
> fs = split.getPath().getFileSystem(jobConf);
> fileKey = determineFileId(fs, split,
> HiveConf.getBoolVar(daemonConf, 
> ConfVars.LLAP_CACHE_ALLOW_SYNTHETIC_FILEID),
> HiveConf.getBoolVar(daemonConf, 
> ConfVars.LLAP_CACHE_DEFAULT_FS_FILE_ID),
> !HiveConf.getBoolVar(daemonConf, ConfVars.LLAP_IO_USE_FILEID_PATH)
> );
> {code}
>  !llap-cache-fs-get.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788322#comment-16788322
 ] 

Prasanth Jayachandran commented on HIVE-21390:
--

Fixes test failure

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts splits at block boundaries; however, there are no 
> block boundaries in blob storage, so we end up with a single split under the 
> BI split strategy. 
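
As an illustration of the idea only (hypothetical names, not the actual HIVE-21390 patch), a sketch that cuts fixed-size splits when the store reports no block boundaries:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: hypothetical names, not the actual HIVE-21390 change.
public class FixedSizeSplitSketch {

  // Returns offset/length pairs for splits of at most targetSplitSize bytes.
  static List<long[]> makeSplits(long fileLength, long targetSplitSize) {
    List<long[]> splits = new ArrayList<>();
    for (long offset = 0; offset < fileLength; offset += targetSplitSize) {
      splits.add(new long[] {offset, Math.min(targetSplitSize, fileLength - offset)});
    }
    return splits;
  }

  public static void main(String[] args) {
    // A 1 GiB blob-store file with a 256 MiB target yields 4 splits instead of 1.
    System.out.println(makeSplits(1L << 30, 256L << 20).size());
  }
}
{code}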



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21390:
-
Attachment: HIVE-21390.3.patch

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts splits at block boundaries; however, there are no 
> block boundaries in blob storage, so we end up with a single split under the 
> BI split strategy. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21390:
-
Attachment: (was: HIVE-21390.2.patch)

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts splits at block boundaries; however, there are no 
> block boundaries in blob storage, so we end up with a single split under the 
> BI split strategy. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21390) BI split strategy does not work for blob stores

2019-03-08 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21390:
-
Attachment: HIVE-21390.2.patch

> BI split strategy does not work for blob stores
> ---
>
> Key: HIVE-21390
> URL: https://issues.apache.org/jira/browse/HIVE-21390
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21390.1.patch, HIVE-21390.2.patch, 
> HIVE-21390.3.patch
>
>
> BI split strategy cuts splits at block boundaries; however, there are no 
> block boundaries in blob storage, so we end up with a single split under the 
> BI split strategy. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21397) BloomFilter for hive Managed [ACID] table does not work as expected

2019-03-08 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788295#comment-16788295
 ] 

Gopal V commented on HIVE-21397:


Left a comment on the ORC JIRA - ORC is currently unaware of ACID.

I like the "row.col" lookup code in ORC, but rewriting the selected column 
names should happen in ACID (so that we can change ACID row-layouts without 
changing ORC).
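
As a rough sketch of that idea (a hypothetical helper, not the actual fix), the configured bloom filter column list could be rewritten to the ACID "row." struct path before it reaches ORC:

{code:java}
import java.util.Arrays;
import java.util.stream.Collectors;

// Illustrative only: hypothetical helper, not the actual HIVE-21397 fix.
public class AcidColumnRewriteSketch {

  // Prefixes each configured column with the ACID "row" struct path.
  static String rewriteForAcid(String bloomFilterColumns) {
    return Arrays.stream(bloomFilterColumns.split(","))
        .map(String::trim)
        .map(col -> "row." + col)
        .collect(Collectors.joining(","));
  }

  public static void main(String[] args) {
    // Prints: row.msisdn,row.cell_id,row.imsi
    System.out.println(rewriteForAcid("msisdn,cell_id,imsi"));
  }
}
{code}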

> BloomFilter for hive Managed [ACID] table does not work as expected
> ---
>
> Key: HIVE-21397
> URL: https://issues.apache.org/jira/browse/HIVE-21397
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, Transactions
>Affects Versions: 3.1.1
>Reporter: vaibhav
>Assignee: Denys Kuzmenko
>Priority: Blocker
> Attachments: OrcUtils.patch, orc_file_dump.out, orc_file_dump.q
>
>
> Steps to Reproduce this issue : 
> - 
> 1. Create a Hive managed table as below: 
> - 
> {code:java}
> CREATE TABLE `bloomTest`( 
>    `msisdn` string, 
>    `imsi` varchar(20), 
>    `imei` bigint, 
>    `cell_id` bigint) 
>  ROW FORMAT SERDE 
>    'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
>  STORED AS INPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' 
>  OUTPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' 
>  LOCATION 
>    
> 'hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest;
>  
>  TBLPROPERTIES ( 
>    'bucketing_version'='2', 
>    'orc.bloom.filter.columns'='msisdn,cell_id,imsi', 
>    'orc.bloom.filter.fpp'='0.02', 
>    'transactional'='true', 
>    'transactional_properties'='default', 
>    'transient_lastDdlTime'='1551206683') {code}
> - 
> 2. Insert a few rows. 
> - 
> - 
> 3. Check whether the bloom filters are active: [ It does not show bloom filters for 
> Hive managed tables ] 
> - 
> {code:java}
> [hive@c1162-node2 root]$ hive --orcfiledump 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_
>  | grep -i bloom 
> SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. 
> SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory] 
> Processing data file 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_/bucket_0
>  [length: 791] 
> Structure for 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_/bucket_0
>  {code}
> - 
> On the other hand, for Hive external tables it works: 
> - 
> {code:java}
> CREATE external TABLE `ext_bloomTest`( 
>    `msisdn` string, 
>    `imsi` varchar(20), 
>    `imei` bigint, 
>    `cell_id` bigint) 
>  ROW FORMAT SERDE 
>    'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
>  STORED AS INPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' 
>  OUTPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' 
>  TBLPROPERTIES ( 
>    'bucketing_version'='2', 
>    'orc.bloom.filter.columns'='msisdn,cell_id,imsi', 
>    'orc.bloom.filter.fpp'='0.02') {code}
> - 
> {code:java}
> [hive@c1162-node2 root]$ hive --orcfiledump 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  | grep -i bloom 
> SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. 
> SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory] 
> Processing data file 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  [length: 755] 
> Structure for 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  
> Stream: column 1 section 

[jira] [Updated] (HIVE-21408) Disable synthetic join predicates for non-equi joins for unintended cases

2019-03-08 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-21408:
--
Fix Version/s: 4.0.0

> Disable synthetic join predicates for non-equi joins for unintended cases
> -
>
> Key: HIVE-21408
> URL: https://issues.apache.org/jira/browse/HIVE-21408
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21408.1.patch
>
>
> With support for synthetic join predicates on non-equi joins, it is important 
> to make sure those predicates are used only for the intended purpose. Currently, 
> DPP and semi join reduction are not supposed to use them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21286) Hive should support clean-up of previously bootstrapped tables when retry from different dump.

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788280#comment-16788280
 ] 

Hive QA commented on HIVE-21286:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961738/HIVE-21286.05.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15822 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16417/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16417/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16417/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961738 - PreCommit-HIVE-Build

> Hive should support clean-up of previously bootstrapped tables when retry 
> from different dump.
> --
>
> Key: HIVE-21286
> URL: https://issues.apache.org/jira/browse/HIVE-21286
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-21286.01.patch, HIVE-21286.02.patch, 
> HIVE-21286.03.patch, HIVE-21286.04.patch, HIVE-21286.05.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> If external tables are enabled for replication on an existing repl policy, 
> then bootstrapping of external tables is combined with the incremental dump.
> If the incremental bootstrap load fails with a non-retryable error, the user 
> has to manually drop all the external tables before trying another 
> bootstrap dump. For a full bootstrap, to retry with a different dump, we 
> suggested that the user drop the DB, but in this case they would need to manually 
> drop all the external tables, which is not user friendly. So this needs to be 
> handled on the Hive side as follows.
> REPL LOAD takes an additional config (passed by the user in the WITH clause) that 
> says: drop all the tables which were bootstrapped from the previous dump. 
> hive.repl.clean.tables.from.bootstrap=
> Hive will use this config only if the current dump is a combined bootstrap in an 
> incremental dump.
> Caution to be taken by the user: this config should not be passed if the previous 
> REPL LOAD (with bootstrap) was successful or any successful incremental 
> dump+load happened after "previous_bootstrap_dump_dir".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21397) BloomFilter for hive Managed [ACID] table does not work as expected

2019-03-08 Thread Vaibhav Gumashta (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788262#comment-16788262
 ] 

Vaibhav Gumashta commented on HIVE-21397:
-

cc [~gopalv]

> BloomFilter for hive Managed [ACID] table does not work as expected
> ---
>
> Key: HIVE-21397
> URL: https://issues.apache.org/jira/browse/HIVE-21397
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, Transactions
>Affects Versions: 3.1.1
>Reporter: vaibhav
>Assignee: Denys Kuzmenko
>Priority: Blocker
> Attachments: OrcUtils.patch, orc_file_dump.out, orc_file_dump.q
>
>
> Steps to Reproduce this issue : 
> - 
> 1. Create a Hive managed table as below: 
> - 
> {code:java}
> CREATE TABLE `bloomTest`( 
>    `msisdn` string, 
>    `imsi` varchar(20), 
>    `imei` bigint, 
>    `cell_id` bigint) 
>  ROW FORMAT SERDE 
>    'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
>  STORED AS INPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' 
>  OUTPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' 
>  LOCATION 
>    
> 'hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest;
>  
>  TBLPROPERTIES ( 
>    'bucketing_version'='2', 
>    'orc.bloom.filter.columns'='msisdn,cell_id,imsi', 
>    'orc.bloom.filter.fpp'='0.02', 
>    'transactional'='true', 
>    'transactional_properties'='default', 
>    'transient_lastDdlTime'='1551206683') {code}
> - 
> 2. Insert a few rows. 
> - 
> - 
> 3. Check whether the bloom filters are active: [ It does not show bloom filters for 
> Hive managed tables ] 
> - 
> {code:java}
> [hive@c1162-node2 root]$ hive --orcfiledump 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_
>  | grep -i bloom 
> SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. 
> SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory] 
> Processing data file 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_/bucket_0
>  [length: 791] 
> Structure for 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_/bucket_0
>  {code}
> - 
> On the other hand, for Hive external tables it works: 
> - 
> {code:java}
> CREATE external TABLE `ext_bloomTest`( 
>    `msisdn` string, 
>    `imsi` varchar(20), 
>    `imei` bigint, 
>    `cell_id` bigint) 
>  ROW FORMAT SERDE 
>    'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
>  STORED AS INPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' 
>  OUTPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' 
>  TBLPROPERTIES ( 
>    'bucketing_version'='2', 
>    'orc.bloom.filter.columns'='msisdn,cell_id,imsi', 
>    'orc.bloom.filter.fpp'='0.02') {code}
> - 
> {code:java}
> [hive@c1162-node2 root]$ hive --orcfiledump 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  | grep -i bloom 
> SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. 
> SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory] 
> Processing data file 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  [length: 755] 
> Structure for 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  
> Stream: column 1 section BLOOM_FILTER_UTF8 start: 41 length 110 
> Stream: column 2 section BLOOM_FILTER_UTF8 start: 178 length 114 
> Stream: column 4 section BLOOM_FILTER_UTF8 start: 340 length 109 {code}



--
This message was 

[jira] [Commented] (HIVE-21286) Hive should support clean-up of previously bootstrapped tables when retry from different dump.

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788256#comment-16788256
 ] 

Hive QA commented on HIVE-21286:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
27s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 10 new + 18 
unchanged - 0 fixed = 28 total (was 18) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16417/dev-support/hive-personality.sh
 |
| git revision | master / cdd8fa5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16417/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16417/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive should support clean-up of previously bootstrapped tables when retry 
> from different dump.
> --
>
> Key: HIVE-21286
> URL: https://issues.apache.org/jira/browse/HIVE-21286
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-21286.01.patch, HIVE-21286.02.patch, 
> HIVE-21286.03.patch, HIVE-21286.04.patch, HIVE-21286.05.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> If external tables are enabled for replication on an existing repl policy, 
> then bootstrapping of external tables is combined with the incremental dump.
> If the incremental bootstrap load fails with a non-retryable error, the user 
> has to manually drop all the external tables 

[jira] [Commented] (HIVE-21264) Improvements Around CharTypeInfo

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788220#comment-16788220
 ] 

Hive QA commented on HIVE-21264:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961730/HIVE-21264.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15823 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[test_teradatabinaryfile] 
(batchId=2)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16414/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16414/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16414/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961730 - PreCommit-HIVE-Build

> Improvements Around CharTypeInfo
> 
>
> Key: HIVE-21264
> URL: https://issues.apache.org/jira/browse/HIVE-21264
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-21264.1.patch, HIVE-21264.2.patch, 
> HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CharTypeInfo}} stores the type name of the data type (char/varchar) and 
> the length (1-255).  {{CharTypeInfo}} objects are often getting cached once 
> they are created.
> The {{hashcode()}} and {{equals()}} of its sub-classes varchar and char are 
> inconsistent.
> * Make hashcode and equals consistent (and fast)
> * Simplify the {{getQualifiedName}} implementation and reduce the scope to 
> protected
> * Other related nits
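
For illustration, a minimal sketch of a consistent hashCode/equals pair over a type name and a length; the class is hypothetical and is not Hive's CharTypeInfo.

{code:java}
import java.util.Objects;

// Illustrative only: hypothetical class, not Hive's CharTypeInfo.
public final class CharLikeTypeInfo {
  private final String typeName;
  private final int length;

  public CharLikeTypeInfo(String typeName, int length) {
    this.typeName = typeName;
    this.length = length;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof CharLikeTypeInfo)) {
      return false;
    }
    CharLikeTypeInfo other = (CharLikeTypeInfo) o;
    return length == other.length && typeName.equals(other.typeName);
  }

  @Override
  public int hashCode() {
    // Uses exactly the fields compared in equals, so equal objects hash equally.
    return Objects.hash(typeName, length);
  }

  public static void main(String[] args) {
    CharLikeTypeInfo a = new CharLikeTypeInfo("varchar", 20);
    CharLikeTypeInfo b = new CharLikeTypeInfo("varchar", 20);
    System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
  }
}
{code}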



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21408) Disable synthetic join predicates for non-equi joins for unintended cases

2019-03-08 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-21408:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to master.

Thanks [~vgarg] for the review.

> Disable synthetic join predicates for non-equi joins for unintended cases
> -
>
> Key: HIVE-21408
> URL: https://issues.apache.org/jira/browse/HIVE-21408
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-21408.1.patch
>
>
> With support for synthetic join predicates on non-equi joins, it is important 
> to make sure those predicates are used only for the intended purpose. Currently, 
> DPP and semi join reduction are not supposed to use them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21402) Compaction state remains 'working' when major compaction fails

2019-03-08 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788227#comment-16788227
 ] 

Ashutosh Chauhan commented on HIVE-21402:
-

Actually, looking more deeply, the actual compaction has now moved to ql, so 
compactions are run in HS2. HS2 should have Calcite on the classpath. So this is a 
deployment issue. cc: [~vgumashta]

> Compaction state remains 'working' when major compaction fails
> --
>
> Key: HIVE-21402
> URL: https://issues.apache.org/jira/browse/HIVE-21402
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21402.patch
>
>
> When Calcite is not on the HMS classpath and query-based compaction is 
> enabled, the compaction fails with a NoClassDefFoundError. Since the catch 
> block only catches Exception, the following code block is not executed:
> {code:java}
> } catch (Exception e) {
>   LOG.error("Caught exception while trying to compact " + ci +
>   ".  Marking failed to avoid repeated failures, " + 
> StringUtils.stringifyException(e));
>   msc.markFailed(CompactionInfo.compactionInfoToStruct(ci));
>   msc.abortTxns(Collections.singletonList(compactorTxnId));
> }
> {code}
> So the compaction is not marked as failed.
> It would be better to catch Throwable instead of Exception.
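
A minimal sketch of that suggestion, with hypothetical names rather than the actual compactor code: catching Throwable means errors such as NoClassDefFoundError also mark the compaction as failed instead of leaving it in the 'working' state.

{code:java}
// Illustrative only: hypothetical names, not the actual HIVE-21402 patch.
public class CompactionErrorHandlingSketch {

  interface CompactionClient {
    void markFailed(String compactionId);
    void abortTxn(long txnId);
  }

  static void runCompaction(Runnable compaction, CompactionClient client,
                            String compactionId, long txnId) {
    try {
      compaction.run();
    } catch (Throwable t) { // catching Exception alone would miss NoClassDefFoundError
      client.markFailed(compactionId);
      client.abortTxn(txnId);
    }
  }

  public static void main(String[] args) {
    CompactionClient logOnly = new CompactionClient() {
      public void markFailed(String id) { System.out.println("marked failed: " + id); }
      public void abortTxn(long txn) { System.out.println("aborted txn: " + txn); }
    };
    runCompaction(() -> { throw new NoClassDefFoundError("org.apache.calcite missing"); },
        logOnly, "compaction-42", 7L);
  }
}
{code}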



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21410) find out the actual port number when hive.server2.thrift.port=0

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788224#comment-16788224
 ] 

Hive QA commented on HIVE-21410:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961690/2019-03-08_163747.png

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16416/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16416/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16416/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-03-08 19:25:06.188
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16416/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-03-08 19:25:06.191
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 0dd45a2 HIVE-21280 : Null pointer exception on running 
compaction against a MM table. (Aditya Shah via Ashutosh Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 0dd45a2 HIVE-21280 : Null pointer exception on running 
compaction against a MM table. (Aditya Shah via Ashutosh Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-03-08 19:25:07.421
+ rm -rf ../yetus_PreCommit-HIVE-Build-16416
+ mkdir ../yetus_PreCommit-HIVE-Build-16416
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16416
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16416/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
fatal: unrecognized input
fatal: unrecognized input
fatal: unrecognized input
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-16416
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961690 - PreCommit-HIVE-Build

> find out the actual port number when hive.server2.thrift.port=0
> ---
>
> Key: HIVE-21410
> URL: https://issues.apache.org/jira/browse/HIVE-21410
> Project: Hive
>  Issue Type: Improvement
>Reporter: zuotingbing
>Assignee: zuotingbing
>Priority: Minor
> Attachments: 2019-03-08_163705.png, 2019-03-08_163747.png, 
> HIVE-21410.patch
>
>
> Before the fix:
> !2019-03-08_163705.png!
> After the fix:
> !2019-03-08_163747.png!
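
As background, a generic Java sketch (plain Java, not HiveServer2's Thrift server code) of how the actual port is read back after binding to port 0:

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

// Illustrative only: shows the general port-0 pattern, not HiveServer2 code.
public class EphemeralPortSketch {
  public static void main(String[] args) throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) { // 0 = let the OS pick a free port
      System.out.println("actually listening on port " + socket.getLocalPort());
    }
  }
}
{code}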



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788222#comment-16788222
 ] 

Hive QA commented on HIVE-21401:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961736/HIVE-21401.04.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16415/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16415/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16415/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-03-08 19:23:52.551
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16415/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-03-08 19:23:52.554
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 0dd45a2 HIVE-21280 : Null pointer exception on running 
compaction against a MM table. (Aditya Shah via Ashutosh Chauhan)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 0dd45a2 HIVE-21280 : Null pointer exception on running 
compaction against a MM table. (Aditya Shah via Ashutosh Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-03-08 19:23:53.608
+ rm -rf ../yetus_PreCommit-HIVE-Build-16415
+ mkdir ../yetus_PreCommit-HIVE-Build-16415
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16415
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16415/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java:4803
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java' with 
conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/plan/ShowCreateDatabaseDesc.java:1
error: ql/src/java/org/apache/hadoop/hive/ql/plan/ShowCreateDatabaseDesc.java: 
patch does not apply
error: 
core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java:
 does not exist in index
error: 
core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java:
 does not exist in index
error: 
util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook.java:
 does not exist in index
error: 
util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook1.java:
 does not exist in index
error: src/java/org/apache/hadoop/hive/ql/ddl/DDLOperation.java: does not exist 
in index
error: src/java/org/apache/hadoop/hive/ql/ddl/DDLOperationContext.java: does 
not exist in index
error: src/java/org/apache/hadoop/hive/ql/ddl/DDLTask2.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/ql/ddl/DDLWork2.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java: does not exist in 
index
error: 
src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/table/LoadPartitions.java:
 does not exist in index
error: 
src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/table/LoadTable.java:
 does not exist in index
error: src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java: does not 
exist in index
error: src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java: does not 
exist in index
error: 

[jira] [Updated] (HIVE-21411) LEFT JOIN CONVERT TO INNER JOIN LEAD TO WRONG RESULT

2019-03-08 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21411:
---
Description: 
When I have not assigned an alias to the left side table, the left join is 
converted to an inner join. The left side table's alias in the AST tree is called "left".

 For example:

select nvl(ss_wholesale_cost, 10), d_quarter_name from lulu.store_sales left 
join lulu.date_dim on ss_sold_date_sk = d_date_sk limit 10;

{noformat}
| STAGE DEPENDENCIES: |
| Stage-1 is a root stage |
| Stage-0 depends on stages: Stage-1 |
| |
| STAGE PLANS: |
| Stage: Stage-1 |
| Map Reduce |
| Map Operator Tree: |
| TableScan |
| alias: left |
| Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
stats: NONE |
| Filter Operator |
| predicate: ss_sold_date_sk is not null (type: boolean) |
| Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
stats: NONE |
| Select Operator |
| expressions: ss_wholesale_cost (type: decimal(7,2)), ss_sold_date_sk (type: 
bigint) |
| outputColumnNames: _col0, _col1 |
| Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
stats: NONE |
| Reduce Output Operator |
| key expressions: _col1 (type: bigint) |
| sort order: + |
| Map-reduce partition columns: _col1 (type: bigint) |
| Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
stats: NONE |
| value expressions: _col0 (type: decimal(7,2)) |
| TableScan |
| alias: date_dim |
| Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
stats: NONE |
| Filter Operator |
| predicate: d_date_sk is not null (type: boolean) |
| Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
stats: NONE |
| Select Operator |
| expressions: d_date_sk (type: bigint), d_quarter_name (type: string) |
| outputColumnNames: _col0, _col1 |
| Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
stats: NONE |
| Reduce Output Operator |
| key expressions: _col0 (type: bigint) |
| sort order: + |
| Map-reduce partition columns: _col0 (type: bigint) |
| Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
stats: NONE |
| value expressions: _col1 (type: string) |
| Reduce Operator Tree: |
| Join Operator |
| condition map: |
| Inner Join 0 to 1 |
| keys: |
| 0 _col1 (type: bigint) |
| 1 _col0 (type: bigint) |
| outputColumnNames: _col0, _col3 |
| Statistics: Num rows: 220 Data size: 31348 Basic stats: COMPLETE Column 
stats: NONE |
| Select Operator |
| expressions: NVL(_col0,10) (type: decimal(12,2)), _col3 (type: string) |
| outputColumnNames: _col0, _col1 |
| Statistics: Num rows: 220 Data size: 31348 Basic stats: COMPLETE Column 
stats: NONE |
| Limit |
| Number of rows: 100 |
| Statistics: Num rows: 100 Data size: 14200 Basic stats: COMPLETE Column 
stats: NONE |
| File Output Operator |
| compressed: false |
| Statistics: Num rows: 100 Data size: 14200 Basic stats: COMPLETE Column 
stats: NONE |
| table: |
| input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
| output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
| serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
| |
| Stage: Stage-0 |
| Fetch Operator |
| limit: 100 |
| Processor Tree: |
| ListSink |
| |
++--+

{noformat}
 

  was:
When I have not assigned an alias to the left side table, the left join is 
converted to an inner join. The left side table's alias in the AST tree is called "left".

 For example:

select nvl(ss_wholesale_cost, 10), d_quarter_name from lulu.store_sales left 
join lulu.date_dim on ss_sold_date_sk = d_date_sk limit 10;

| STAGE DEPENDENCIES: |
| Stage-1 is a root stage |
| Stage-0 depends on stages: Stage-1 |
| |
| STAGE PLANS: |
| Stage: Stage-1 |
| Map Reduce |
| Map Operator Tree: |
| TableScan |
| alias: left |
| Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
stats: NONE |
| Filter Operator |
| predicate: ss_sold_date_sk is not null (type: boolean) |
| Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
stats: NONE |
| Select Operator |
| expressions: ss_wholesale_cost (type: decimal(7,2)), ss_sold_date_sk (type: 
bigint) |
| outputColumnNames: _col0, _col1 |
| Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
stats: NONE |
| Reduce Output Operator |
| key expressions: _col1 (type: bigint) |
| sort order: + |
| Map-reduce partition columns: _col1 (type: bigint) |
| Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
stats: NONE |
| value expressions: _col0 (type: decimal(7,2)) |
| TableScan |
| alias: date_dim |
| Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
stats: NONE |
| Filter Operator |
| predicate: d_date_sk is not null (type: boolean) |
| Statistics: Num rows: 200 Data size: 25639 Basic stats: 

[jira] [Commented] (HIVE-21264) Improvements Around CharTypeInfo

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788157#comment-16788157
 ] 

Hive QA commented on HIVE-21264:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
48s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} serde: The patch generated 0 new + 51 unchanged - 2 
fixed = 51 total (was 53) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16414/dev-support/hive-personality.sh
 |
| git revision | master / 0dd45a2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: serde U: serde |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16414/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Improvements Around CharTypeInfo
> 
>
> Key: HIVE-21264
> URL: https://issues.apache.org/jira/browse/HIVE-21264
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-21264.1.patch, HIVE-21264.2.patch, 
> HIVE-21264.3.patch, HIVE-21264.3.patch, HIVE-21264.3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CharTypeInfo}} stores the type name of the data type (char/varchar) and 
> the length (1-255).  {{CharTypeInfo}} objects are often cached once they are 
> created.
> The {{hashCode()}} and {{equals()}} implementations of its varchar and char 
> sub-classes are inconsistent.
> * Make {{hashCode}} and {{equals}} consistent (and fast)
> * Simplify the {{getQualifiedName}} implementation and reduce the scope to 
> protected
> * Other related nits
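
As a minimal sketch of the "consistent (and fast)" goal above (illustrative names only, not the actual Hive classes), an equals/hashCode pair derived from the same two fields could look like this:

{code:java}
// Illustrative sketch only; not the actual CharTypeInfo/VarcharTypeInfo code.
// The point: equals() and hashCode() must be derived from the same fields
// (type name + length), so that equal objects always share a hash code.
public final class BaseCharTypeInfoSketch {
  private final String typeName;  // "char" or "varchar"
  private final int length;       // 1-255

  public BaseCharTypeInfoSketch(String typeName, int length) {
    this.typeName = typeName;
    this.length = length;
  }

  @Override
  public boolean equals(Object other) {
    if (this == other) {
      return true;
    }
    if (!(other instanceof BaseCharTypeInfoSketch)) {
      return false;
    }
    BaseCharTypeInfoSketch that = (BaseCharTypeInfoSketch) other;
    return length == that.length && typeName.equals(that.typeName);
  }

  @Override
  public int hashCode() {
    return 31 * typeName.hashCode() + length;  // same fields as equals()
  }
}
{code}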



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21408) Disable synthetic join predicates for non-equi joins for unintended cases

2019-03-08 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788151#comment-16788151
 ] 

Vineet Garg commented on HIVE-21408:


LGTM +1

> Disable synthetic join predicates for non-equi joins for unintended cases
> -
>
> Key: HIVE-21408
> URL: https://issues.apache.org/jira/browse/HIVE-21408
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-21408.1.patch
>
>
> With support for synthetic join predicates on non-equi joins, it is important 
> to make sure those predicates are used only for their intended purpose. 
> Currently, DPP and semi join reduction are not supposed to use them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21406) Add .factorypath files to .gitignore

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788126#comment-16788126
 ] 

Hive QA commented on HIVE-21406:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961685/HIVE-21406.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 15821 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomNonExistent
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead 
(batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime
 (batchId=263)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16413/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16413/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16413/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961685 - PreCommit-HIVE-Build

> Add .factorypath files to .gitignore
> 
>
> Key: HIVE-21406
> URL: https://issues.apache.org/jira/browse/HIVE-21406
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
> Attachments: HIVE-21406.01.patch, Screen Shot 2019-03-07 at 2.02.10 
> PM.png
>
>
> .factorypath files are generated by Eclipse and should be ignored



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21048) Remove needless org.mortbay.jetty from hadoop exclusions

2019-03-08 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21048:

Attachment: (was: HIVE-21048.09.patch)

> Remove needless org.mortbay.jetty from hadoop exclusions
> 
>
> Key: HIVE-21048
> URL: https://issues.apache.org/jira/browse/HIVE-21048
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-21048.01.patch, HIVE-21048.02.patch, 
> HIVE-21048.03.patch, HIVE-21048.04.patch, HIVE-21048.05.patch, 
> HIVE-21048.06.patch, HIVE-21048.07.patch, HIVE-21048.08.patch, 
> HIVE-21048.08.patch, HIVE-21048.09.patch, dep.out
>
>
> During HIVE-20638 I found that org.mortbay.jetty exclusions from e.g. hadoop 
> don't take effect, as the actual groupId of jetty is org.eclipse.jetty for 
> most of the current projects; please find the attachment (an example for the 
> hive commons project).
> https://en.wikipedia.org/wiki/Jetty_(web_server)#History



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21411) LEFT JOIN CONVERT TO INNER JOIN LEAD TO WRONG RESULT

2019-03-08 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788093#comment-16788093
 ] 

Gopal V commented on HIVE-21411:


Are there any FK/PK relationships between ss_sold_date_sk and d_date_sk 
declared?

This rewrite is valid if you have 

{code}
alter table store_sales add constraint tpcds_1000_ss_d foreign key  
(ss_sold_date_sk) references date_dim (d_date_sk) disable novalidate rely;
{code}

> LEFT JOIN CONVERT TO INNER JOIN LEAD TO WRONG RESULT
> 
>
> Key: HIVE-21411
> URL: https://issues.apache.org/jira/browse/HIVE-21411
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1, 2.2.0, 2.3.0
>Reporter: xialu
>Assignee: Ashutosh Chauhan
>Priority: Critical
>
> When no alias is assigned to the left-side table, the left join is converted 
> to an inner join; the left-side table's alias in the AST tree is called "left".
>  For example:
> select nvl(ss_wholesale_cost, 10), d_quarter_name from lulu.store_sales left 
> join lulu.date_dim on ss_sold_date_sk = d_date_sk limit 10;
> | STAGE DEPENDENCIES: |
> | Stage-1 is a root stage |
> | Stage-0 depends on stages: Stage-1 |
> | |
> | STAGE PLANS: |
> | Stage: Stage-1 |
> | Map Reduce |
> | Map Operator Tree: |
> | TableScan |
> | alias: left |
> | Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
> stats: NONE |
> | Filter Operator |
> | predicate: ss_sold_date_sk is not null (type: boolean) |
> | Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
> stats: NONE |
> | Select Operator |
> | expressions: ss_wholesale_cost (type: decimal(7,2)), ss_sold_date_sk (type: 
> bigint) |
> | outputColumnNames: _col0, _col1 |
> | Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
> stats: NONE |
> | Reduce Output Operator |
> | key expressions: _col1 (type: bigint) |
> | sort order: + |
> | Map-reduce partition columns: _col1 (type: bigint) |
> | Statistics: Num rows: 200 Data size: 28499 Basic stats: COMPLETE Column 
> stats: NONE |
> | value expressions: _col0 (type: decimal(7,2)) |
> | TableScan |
> | alias: date_dim |
> | Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
> stats: NONE |
> | Filter Operator |
> | predicate: d_date_sk is not null (type: boolean) |
> | Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
> stats: NONE |
> | Select Operator |
> | expressions: d_date_sk (type: bigint), d_quarter_name (type: string) |
> | outputColumnNames: _col0, _col1 |
> | Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
> stats: NONE |
> | Reduce Output Operator |
> | key expressions: _col0 (type: bigint) |
> | sort order: + |
> | Map-reduce partition columns: _col0 (type: bigint) |
> | Statistics: Num rows: 200 Data size: 25639 Basic stats: COMPLETE Column 
> stats: NONE |
> | value expressions: _col1 (type: string) |
> | Reduce Operator Tree: |
> | Join Operator |
> | condition map: |
> | Inner Join 0 to 1 |
> | keys: |
> | 0 _col1 (type: bigint) |
> | 1 _col0 (type: bigint) |
> | outputColumnNames: _col0, _col3 |
> | Statistics: Num rows: 220 Data size: 31348 Basic stats: COMPLETE Column 
> stats: NONE |
> | Select Operator |
> | expressions: NVL(_col0,10) (type: decimal(12,2)), _col3 (type: string) |
> | outputColumnNames: _col0, _col1 |
> | Statistics: Num rows: 220 Data size: 31348 Basic stats: COMPLETE Column 
> stats: NONE |
> | Limit |
> | Number of rows: 100 |
> | Statistics: Num rows: 100 Data size: 14200 Basic stats: COMPLETE Column 
> stats: NONE |
> | File Output Operator |
> | compressed: false |
> | Statistics: Num rows: 100 Data size: 14200 Basic stats: COMPLETE Column 
> stats: NONE |
> | table: |
> | input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
> | output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
> | serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
> | |
> | Stage: Stage-0 |
> | Fetch Operator |
> | limit: 100 |
> | Processor Tree: |
> | ListSink |
> | |
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21048) Remove needless org.mortbay.jetty from hadoop exclusions

2019-03-08 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21048:

Attachment: HIVE-21048.09.patch

> Remove needless org.mortbay.jetty from hadoop exclusions
> 
>
> Key: HIVE-21048
> URL: https://issues.apache.org/jira/browse/HIVE-21048
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-21048.01.patch, HIVE-21048.02.patch, 
> HIVE-21048.03.patch, HIVE-21048.04.patch, HIVE-21048.05.patch, 
> HIVE-21048.06.patch, HIVE-21048.07.patch, HIVE-21048.08.patch, 
> HIVE-21048.08.patch, HIVE-21048.09.patch, dep.out
>
>
> During HIVE-20638 I found that org.mortbay.jetty exclusions from e.g. hadoop 
> don't take effect, as the actual groupId of jetty is org.eclipse.jetty for 
> most of the current projects; please find the attachment (an example for the 
> hive commons project).
> https://en.wikipedia.org/wiki/Jetty_(web_server)#History



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21368) Vectorization: Unnecessary Decimal64 -> HiveDecimal conversion

2019-03-08 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788084#comment-16788084
 ] 

Gopal V commented on HIVE-21368:


That looks like it is about map-join keys - the problem here is the map-join 
values (all the actual joins are on Long:Long).

> Vectorization: Unnecessary Decimal64 -> HiveDecimal conversion
> --
>
> Key: HIVE-21368
> URL: https://issues.apache.org/jira/browse/HIVE-21368
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Teddy Choi
>Priority: Major
>
> Joins projecting Decimal64 have a suspicious cast in the inner loop
> {code}
> ConvertDecimal64ToDecimal(col 14:decimal(7,2)/DECIMAL_64) -> 24:decimal(7,2)'
> {code}
> {code}
> create temporary table foo(x int , y decimal(7,2));
> create temporary table bar(x int , y decimal(7,2));
> set hive.explain.user=false;
> explain vectorization detail select sum(foo.y) from foo, bar where foo.x = 
> bar.x;
> {code}
> {code}
> '  Map Join Operator'
> 'condition map:'
> ' Inner Join 0 to 1'
> 'keys:'
> '  0 _col0 (type: int)'
> '  1 _col0 (type: int)'
> 'Map Join Vectorization:'
> 'bigTableKeyColumnNums: [0]'
> 'bigTableRetainedColumnNums: [3]'
> 'bigTableValueColumnNums: [3]'
> 'bigTableValueExpressions: 
> ConvertDecimal64ToDecimal(col 1:decimal(7,2)/DECIMAL_64) -> 3:decimal(7,2)'
> 'className: VectorMapJoinInnerBigOnlyLongOperator'
> 'native: true'
> 'nativeConditionsMet: 
> hive.mapjoin.optimized.hashtable IS true, 
> hive.vectorized.execution.mapjoin.native.enabled IS true, 
> hive.execution.engine tez IN [tez, spark] IS true, One MapJoin Condition IS 
> true, No nullsafe IS true, Small table vectorizes IS true, Fast Hash Table 
> and No Hybrid Hash Join IS true'
> 'projectedOutputColumnNums: [3]'
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21048) Remove needless org.mortbay.jetty from hadoop exclusions

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788062#comment-16788062
 ] 

Hive QA commented on HIVE-21048:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 41 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
13s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 18m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16412/dev-support/hive-personality.sh
 |
| git revision | master / d42809e |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16412/yetus/whitespace-tabs.txt
 |
| modules | C: storage-api common llap-tez ql service jdbc hcatalog 
hcatalog/core hcatalog/hcatalog-pig-adapter hcatalog/webhcat/svr . 
itests/qtest-druid U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16412/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Remove needless org.mortbay.jetty from hadoop exclusions
> 
>
> Key: HIVE-21048
> URL: https://issues.apache.org/jira/browse/HIVE-21048
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-21048.01.patch, HIVE-21048.02.patch, 
> HIVE-21048.03.patch, HIVE-21048.04.patch, HIVE-21048.05.patch, 
> HIVE-21048.06.patch, HIVE-21048.07.patch, HIVE-21048.08.patch, 
> HIVE-21048.08.patch, HIVE-21048.09.patch, dep.out
>
>
> During HIVE-20638 I found that org.mortbay.jetty exclusions from e.g. hadoop 
> don't take effect, as the actual groupId of jetty is org.eclipse.jetty for 
> most of the current projects; please find the attachment (an example for the 
> hive commons project).
> https://en.wikipedia.org/wiki/Jetty_(web_server)#History



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21402) Compaction state remains 'working' when major compaction fails

2019-03-08 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788066#comment-16788066
 ] 

Peter Vary commented on HIVE-21402:
---

[~ashutoshc]: Here is the exception I got:
{code:java}
18:04:35.600 [PeterVary-MBP15.local-33] ERROR 
org.apache.hadoop.hive.ql.txn.compactor.Worker - Caught an exception in the 
main loop of compactor worker PeterVary-MBP15.local-33, 
java.lang.NoClassDefFoundError: org/apache/calcite/plan/RelOptRule
at 
org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:1022)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:783)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1905)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1965)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1777)
at org.apache.hadoop.hive.ql.DriverUtils.runOnDriver(DriverUtils.java:54)
at org.apache.hadoop.hive.ql.DriverUtils.runOnDriver(DriverUtils.java:34)
at 
org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runCrudCompaction(CompactorMR.java:407)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:249)
at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Unknown Source)
Caused by: java.lang.ClassNotFoundException: org.apache.calcite.plan.RelOptRule
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 11 more
{code}
 

> Compaction state remains 'working' when major compaction fails
> --
>
> Key: HIVE-21402
> URL: https://issues.apache.org/jira/browse/HIVE-21402
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21402.patch
>
>
> When Calcite is not on the HMS classpath and query-based compaction is 
> enabled, the compaction fails with a NoClassDefFoundError. Since the catch 
> block only catches Exceptions, the following code block is not executed:
> {code:java}
> } catch (Exception e) {
>   LOG.error("Caught exception while trying to compact " + ci +
>   ".  Marking failed to avoid repeated failures, " + 
> StringUtils.stringifyException(e));
>   msc.markFailed(CompactionInfo.compactionInfoToStruct(ci));
>   msc.abortTxns(Collections.singletonList(compactorTxnId));
> }
> {code}
> So the compaction is not set to failed.
> It would be better to catch Throwable instead of Exception.
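
As a minimal, self-contained illustration (not the actual patch) of why widening the catch matters: NoClassDefFoundError is an Error, so it bypasses a catch (Exception ...) block but is still handled by catch (Throwable ...):

{code:java}
// Self-contained illustration: Errors such as NoClassDefFoundError bypass a
// catch (Exception ...) block, but are still caught by catch (Throwable ...),
// so the markFailed/abortTxns cleanup quoted above would get a chance to run.
public class CatchThrowableDemo {
  public static void main(String[] args) {
    try {
      try {
        throw new NoClassDefFoundError("org/apache/calcite/plan/RelOptRule");
      } catch (Exception e) {
        System.out.println("never reached: Exception does not cover Errors");
      }
    } catch (Throwable t) {
      System.out.println("caught by the Throwable handler: " + t);
    }
  }
}
{code}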



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21406) Add .factorypath files to .gitignore

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788060#comment-16788060
 ] 

Hive QA commented on HIVE-21406:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16413/dev-support/hive-personality.sh
 |
| git revision | master / 0dd45a2 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16413/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add .factorypath files to .gitignore
> 
>
> Key: HIVE-21406
> URL: https://issues.apache.org/jira/browse/HIVE-21406
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
> Attachments: HIVE-21406.01.patch, Screen Shot 2019-03-07 at 2.02.10 
> PM.png
>
>
> .factorypath files are generated by Eclipse and should be ignored



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21048) Remove needless org.mortbay.jetty from hadoop exclusions

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788034#comment-16788034
 ] 

Hive QA commented on HIVE-21048:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961684/HIVE-21048.09.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 15820 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[test_teradatabinaryfile] 
(batchId=2)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_avro]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_csv]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_delimited]
 (batchId=275)
org.apache.hive.jdbc.TestActivePassiveHA.testActivePassiveHA (batchId=261)
org.apache.hive.jdbc.TestActivePassiveHA.testConnectionActivePassiveHAServiceDiscovery
 (batchId=261)
org.apache.hive.jdbc.TestActivePassiveHA.testManualFailover (batchId=261)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16412/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16412/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16412/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961684 - PreCommit-HIVE-Build

> Remove needless org.mortbay.jetty from hadoop exclusions
> 
>
> Key: HIVE-21048
> URL: https://issues.apache.org/jira/browse/HIVE-21048
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-21048.01.patch, HIVE-21048.02.patch, 
> HIVE-21048.03.patch, HIVE-21048.04.patch, HIVE-21048.05.patch, 
> HIVE-21048.06.patch, HIVE-21048.07.patch, HIVE-21048.08.patch, 
> HIVE-21048.08.patch, HIVE-21048.09.patch, dep.out
>
>
> During HIVE-20638 I found that org.mortbay.jetty exclusions from e.g. hadoop 
> don't take effect, as the actual groupId of jetty is org.eclipse.jetty for 
> most of the current projects; please find the attachment (an example for the 
> hive commons project).
> https://en.wikipedia.org/wiki/Jetty_(web_server)#History



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21280) Null pointer exception on running compaction against a MM table.

2019-03-08 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788019#comment-16788019
 ] 

Ashutosh Chauhan commented on HIVE-21280:
-

+1
Both the query id and the session path are needed after the switch to 
query-based compaction; they weren't needed earlier.

> Null pointer exception on running compaction against a MM table.
> 
>
> Key: HIVE-21280
> URL: https://issues.apache.org/jira/browse/HIVE-21280
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.1
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21280.patch
>
>
> On running compaction on an MM table, I got a null pointer exception while 
> getting the HDFS session path. The error suggested that the session state was 
> not started for these queries. Even after making it start, the compaction 
> further fails when running a TezTask for the insert overwrite of a temp table 
> with the contents of the original table. The cause is that the Tez session 
> state is not able to initialize, due to an IllegalArgumentException being 
> thrown while setting up the caller context in the Tez task, because the 
> caller id, which uses the query id, is an empty string.
> I do think the session state needs to be started, and each of the queries run 
> for compaction (I'm also doubtful about the stats updater thread's queries) 
> should have a query id. Some details are as follows:
> Steps to reproduce:
> 1) Using beeline with HS2 and HMS
> 2) create an MM table
> 3) Insert a few values in the table
> 4) alter table mm_table compact 'major'; 
> Stack trace on HMS:
> {code:java}
> compactor.Worker: Caught exception while trying to compact 
> id:8,dbname:default,tableName:acid_mm_orc,partName:null,state:^@,type:MAJOR,properties:null,runAs:null,tooManyAborts:false,highestWriteId:0.
>  Marking failed to avoid repeated failures, java.io.IOException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run create 
> temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` int, `b` 
> string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH 
> SERDEPROPERTIES (
> 'serialization.format'='1')STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 
> 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base'
>  TBLPROPERTIES ('transactional'='false')
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:373)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:241)
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run 
> create temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` 
> int, `b` string) ROW FORMAT SERDE 
> 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH SERDEPROPERTIES (
> 'serialization.format'='1')STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 
> 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base'
>  TBLPROPERTIES ('transactional'='false')
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:525)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:365)
> ... 2 more
> Caused by: java.lang.NullPointerException: Non-local session path expected to 
> be non-null
> at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:228)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:815)
> at org.apache.hadoop.hive.ql.Context.(Context.java:309)
> at org.apache.hadoop.hive.ql.Context.(Context.java:295)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:591)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1684)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1807)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1567)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1556)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:522)
> ... 3 more
> {code}
> cc: [~ekoifman] [~vgumashta] [~sershe]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21280) Null pointer exception on running compaction against a MM table.

2019-03-08 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-21280:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Aditya!

> Null pointer exception on running compaction against a MM table.
> 
>
> Key: HIVE-21280
> URL: https://issues.apache.org/jira/browse/HIVE-21280
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.1
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21280.patch
>
>
> On running compaction on an MM table, I got a null pointer exception while 
> getting the HDFS session path. The error suggested that the session state was 
> not started for these queries. Even after making it start, the compaction 
> further fails when running a TezTask for the insert overwrite of a temp table 
> with the contents of the original table. The cause is that the Tez session 
> state is not able to initialize, due to an IllegalArgumentException being 
> thrown while setting up the caller context in the Tez task, because the 
> caller id, which uses the query id, is an empty string.
> I do think the session state needs to be started, and each of the queries run 
> for compaction (I'm also doubtful about the stats updater thread's queries) 
> should have a query id. Some details are as follows:
> Steps to reproduce:
> 1) Using beeline with HS2 and HMS
> 2) create an MM table
> 3) Insert a few values in the table
> 4) alter table mm_table compact 'major'; 
> Stack trace on HMS:
> {code:java}
> compactor.Worker: Caught exception while trying to compact 
> id:8,dbname:default,tableName:acid_mm_orc,partName:null,state:^@,type:MAJOR,properties:null,runAs:null,tooManyAborts:false,highestWriteId:0.
>  Marking failed to avoid repeated failures, java.io.IOException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run create 
> temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` int, `b` 
> string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH 
> SERDEPROPERTIES (
> 'serialization.format'='1')STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 
> 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base'
>  TBLPROPERTIES ('transactional'='false')
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:373)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:241)
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run 
> create temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` 
> int, `b` string) ROW FORMAT SERDE 
> 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH SERDEPROPERTIES (
> 'serialization.format'='1')STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 
> 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base'
>  TBLPROPERTIES ('transactional'='false')
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:525)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:365)
> ... 2 more
> Caused by: java.lang.NullPointerException: Non-local session path expected to 
> be non-null
> at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:228)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:815)
> at org.apache.hadoop.hive.ql.Context.(Context.java:309)
> at org.apache.hadoop.hive.ql.Context.(Context.java:295)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:591)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1684)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1807)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1567)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1556)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:522)
> ... 3 more
> {code}
> cc: [~ekoifman] [~vgumashta] [~sershe]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21286) Hive should support clean-up of previously bootstrapped tables when retry from different dump.

2019-03-08 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21286:

Status: Patch Available  (was: Open)

05.patch rebased on master.

> Hive should support clean-up of previously bootstrapped tables when retry 
> from different dump.
> --
>
> Key: HIVE-21286
> URL: https://issues.apache.org/jira/browse/HIVE-21286
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-21286.01.patch, HIVE-21286.02.patch, 
> HIVE-21286.03.patch, HIVE-21286.04.patch, HIVE-21286.05.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> If external tables are enabled for replication on an existing repl policy, 
> then bootstrapping of external tables is combined with the incremental dump.
> If that incremental bootstrap load fails with a non-retryable error, the user 
> has to manually drop all the external tables before trying with another 
> bootstrap dump. For a full bootstrap, the suggestion for retrying with a 
> different dump is to drop the DB, but in this case the user would need to 
> manually drop all the external tables, which is not user friendly. So this 
> needs to be handled on the Hive side as follows.
> REPL LOAD takes an additional config (passed by the user in the WITH clause) 
> that says: drop all the tables which were bootstrapped from the previous dump. 
> hive.repl.clean.tables.from.bootstrap=
> Hive will use this config only if the current dump is a combined bootstrap in 
> an incremental dump.
> The user must take care not to pass this config if the previous 
> REPL LOAD (with bootstrap) was successful, or if any successful incremental 
> dump+load happened after "previous_bootstrap_dump_dir".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21286) Hive should support clean-up of previously bootstrapped tables when retry from different dump.

2019-03-08 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21286:

Attachment: HIVE-21286.05.patch

> Hive should support clean-up of previously bootstrapped tables when retry 
> from different dump.
> --
>
> Key: HIVE-21286
> URL: https://issues.apache.org/jira/browse/HIVE-21286
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-21286.01.patch, HIVE-21286.02.patch, 
> HIVE-21286.03.patch, HIVE-21286.04.patch, HIVE-21286.05.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> If external tables are enabled for replication on an existing repl policy, 
> then bootstrapping of external tables is combined with the incremental dump.
> If that incremental bootstrap load fails with a non-retryable error, the user 
> has to manually drop all the external tables before trying with another 
> bootstrap dump. For a full bootstrap, the suggestion for retrying with a 
> different dump is to drop the DB, but in this case the user would need to 
> manually drop all the external tables, which is not user friendly. So this 
> needs to be handled on the Hive side as follows.
> REPL LOAD takes an additional config (passed by the user in the WITH clause) 
> that says: drop all the tables which were bootstrapped from the previous dump. 
> hive.repl.clean.tables.from.bootstrap=
> Hive will use this config only if the current dump is a combined bootstrap in 
> an incremental dump.
> The user must take care not to pass this config if the previous 
> REPL LOAD (with bootstrap) was successful, or if any successful incremental 
> dump+load happened after "previous_bootstrap_dump_dir".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21286) Hive should support clean-up of previously bootstrapped tables when retry from different dump.

2019-03-08 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21286:

Status: Open  (was: Patch Available)

> Hive should support clean-up of previously bootstrapped tables when retry 
> from different dump.
> --
>
> Key: HIVE-21286
> URL: https://issues.apache.org/jira/browse/HIVE-21286
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-21286.01.patch, HIVE-21286.02.patch, 
> HIVE-21286.03.patch, HIVE-21286.04.patch, HIVE-21286.05.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> If external tables are enabled for replication on an existing repl policy, 
> then bootstrapping of external tables is combined with the incremental dump.
> If that incremental bootstrap load fails with a non-retryable error, the user 
> has to manually drop all the external tables before trying with another 
> bootstrap dump. For a full bootstrap, the suggestion for retrying with a 
> different dump is to drop the DB, but in this case the user would need to 
> manually drop all the external tables, which is not user friendly. So this 
> needs to be handled on the Hive side as follows.
> REPL LOAD takes an additional config (passed by the user in the WITH clause) 
> that says: drop all the tables which were bootstrapped from the previous dump. 
> hive.repl.clean.tables.from.bootstrap=
> Hive will use this config only if the current dump is a combined bootstrap in 
> an incremental dump.
> The user must take care not to pass this config if the previous 
> REPL LOAD (with bootstrap) was successful, or if any successful incremental 
> dump+load happened after "previous_bootstrap_dump_dir".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21402) Compaction state remains 'working' when major compaction fails

2019-03-08 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788012#comment-16788012
 ] 

Ashutosh Chauhan commented on HIVE-21402:
-

[~pvary] What  exception was thrown instead in that case?

> Compaction state remains 'working' when major compaction fails
> --
>
> Key: HIVE-21402
> URL: https://issues.apache.org/jira/browse/HIVE-21402
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21402.patch
>
>
> When Calcite is not on the HMS classpath and query-based compaction is 
> enabled, the compaction fails with a NoClassDefFoundError. Since the catch 
> block only catches Exceptions, the following code block is not executed:
> {code:java}
> } catch (Exception e) {
>   LOG.error("Caught exception while trying to compact " + ci +
>   ".  Marking failed to avoid repeated failures, " + 
> StringUtils.stringifyException(e));
>   msc.markFailed(CompactionInfo.compactionInfoToStruct(ci));
>   msc.abortTxns(Collections.singletonList(compactorTxnId));
> }
> {code}
> So the compaction is not set to failed.
> It would be better to catch Throwable instead of Exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21403) Incorrect error code returned when retry bootstrap with different dump.

2019-03-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21403?focusedWorklogId=210224=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-210224
 ]

ASF GitHub Bot logged work on HIVE-21403:
-

Author: ASF GitHub Bot
Created on: 08/Mar/19 16:14
Start Date: 08/Mar/19 16:14
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #559: HIVE-21403: 
Incorrect error code returned when retry bootstrap with different dump.
URL: https://github.com/apache/hive/pull/559
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 210224)
Time Spent: 40m  (was: 0.5h)

> Incorrect error code returned when retry bootstrap with different dump.
> ---
>
> Key: HIVE-21403
> URL: https://issues.apache.org/jira/browse/HIVE-21403
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Minor
>  Labels: DR, pull-request-available, replication
> Fix For: 4.0.0
>
> Attachments: HIVE-21403.01.patch, HIVE-21403.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Retrying an incremental bootstrap on a table with a different bootstrap dump 
> throws 4 as the error code instead of 20017.
> {code}
> Error while processing statement: FAILED: Execution Error, return code 4 
> from org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask. 
> InvalidOperationException(message:Load path 
> hdfs://ctr-e139-1542663976389-61669-01-03.hwx.site:8020/apps/hive/repl/3d704b34-bf1a-40c9-b70c-57319e6462f6
>  not valid as target database is bootstrapped from some other path : 
> hdfs://ctr-e139-1542663976389-61669-01-03.hwx.site:8020/apps/hive/repl/c3e5ec9e-d951-48aa-b3f4-9aeaf5e010ea.)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21403) Incorrect error code returned when retry bootstrap with different dump.

2019-03-08 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21403:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

02.patch committed to master.
Thanks [~maheshk114] for the review!

> Incorrect error code returned when retry bootstrap with different dump.
> ---
>
> Key: HIVE-21403
> URL: https://issues.apache.org/jira/browse/HIVE-21403
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Minor
>  Labels: DR, pull-request-available, replication
> Fix For: 4.0.0
>
> Attachments: HIVE-21403.01.patch, HIVE-21403.02.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Retrying an incremental bootstrap on a table with a different bootstrap dump 
> throws 4 as the error code instead of 20017.
> {code}
> Error while processing statement: FAILED: Execution Error, return code 4 
> from org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask. 
> InvalidOperationException(message:Load path 
> hdfs://ctr-e139-1542663976389-61669-01-03.hwx.site:8020/apps/hive/repl/3d704b34-bf1a-40c9-b70c-57319e6462f6
>  not valid as target database is bootstrapped from some other path : 
> hdfs://ctr-e139-1542663976389-61669-01-03.hwx.site:8020/apps/hive/repl/c3e5ec9e-d951-48aa-b3f4-9aeaf5e010ea.)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21410) find out the actual port number when hive.server2.thrift.port=0

2019-03-08 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-21410:

Status: Patch Available  (was: Open)

> find out the actual port number when hive.server2.thrift.port=0
> ---
>
> Key: HIVE-21410
> URL: https://issues.apache.org/jira/browse/HIVE-21410
> Project: Hive
>  Issue Type: Improvement
>Reporter: zuotingbing
>Assignee: zuotingbing
>Priority: Minor
> Attachments: 2019-03-08_163705.png, 2019-03-08_163747.png, 
> HIVE-21410.patch
>
>
> Before the fix:
> !2019-03-08_163705.png!
> After the fix:
> !2019-03-08_163747.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20580) OrcInputFormat.isOriginal() should not rely on hive.acid.key.index

2019-03-08 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788001#comment-16788001
 ] 

Ashutosh Chauhan commented on HIVE-20580:
-

There are some test util methods in {{TestAcidUtils}} which might be useful 
here. Also, {{TestAcidOnTez}}

> OrcInputFormat.isOriginal() should not rely on hive.acid.key.index
> --
>
> Key: HIVE-20580
> URL: https://issues.apache.org/jira/browse/HIVE-20580
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-20580.patch
>
>
> {{org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.isOriginal()}} is checking 
> for the presence of {{hive.acid.key.index}} in the footer.  This is only created 
> when the file is written by {{OrcRecordUpdater}}.  It should instead check 
> for the presence of the Acid metadata columns, so that a file can be produced 
> by something other than {{OrcRecordUpdater}}.
> Also, {{hive.acid.key.index}} counts the number of different types of events, 
> which is not really useful for Acid V2 (as of Hive 3) since each file only has 
> one type of event.
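
As an illustrative sketch only (the column list below is an assumption mirroring the struct that {{OrcRecordUpdater}} writes; this is not the actual implementation), a check based on the Acid metadata columns could look roughly like this:

{code:java}
// Illustration only: detect an Acid file by its top-level column names instead
// of the hive.acid.key.index footer entry. The expected names are an assumption
// here, mirroring the struct written by OrcRecordUpdater.
import java.util.Arrays;
import java.util.List;

public class AcidSchemaCheckSketch {
  private static final List<String> ACID_COLUMNS = Arrays.asList(
      "operation", "originalTransaction", "bucket",
      "rowId", "currentTransaction", "row");

  /** Returns true if the given top-level field names look like the Acid schema. */
  public static boolean looksLikeAcidSchema(List<String> topLevelFieldNames) {
    return topLevelFieldNames.size() == ACID_COLUMNS.size()
        && topLevelFieldNames.containsAll(ACID_COLUMNS);
  }
}
{code}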



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21397) BloomFilter for hive Managed [ACID] table does not work as expected

2019-03-08 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788004#comment-16788004
 ] 

Ashutosh Chauhan commented on HIVE-21397:
-

Yes, submit it to the ORC project. Once it lands there, we can upgrade the ORC 
version in Hive to one which contains this fix.

> BloomFilter for hive Managed [ACID] table does not work as expected
> ---
>
> Key: HIVE-21397
> URL: https://issues.apache.org/jira/browse/HIVE-21397
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, Transactions
>Affects Versions: 3.1.1
>Reporter: vaibhav
>Assignee: Denys Kuzmenko
>Priority: Blocker
> Attachments: OrcUtils.patch, orc_file_dump.out, orc_file_dump.q
>
>
> Steps to Reproduce this issue : 
> - 
> 1. Create a Hive managed table as below: 
> - 
> {code:java}
> CREATE TABLE `bloomTest`( 
>    `msisdn` string, 
>    `imsi` varchar(20), 
>    `imei` bigint, 
>    `cell_id` bigint) 
>  ROW FORMAT SERDE 
>    'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
>  STORED AS INPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' 
>  OUTPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' 
>  LOCATION 
>    
> 'hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest;
>  
>  TBLPROPERTIES ( 
>    'bucketing_version'='2', 
>    'orc.bloom.filter.columns'='msisdn,cell_id,imsi', 
>    'orc.bloom.filter.fpp'='0.02', 
>    'transactional'='true', 
>    'transactional_properties'='default', 
>    'transient_lastDdlTime'='1551206683') {code}
> - 
> 2. Insert a few rows. 
> - 
> - 
> 3. Check if the bloom filter is active: [ it does not show bloom filters for 
> Hive managed tables ] 
> - 
> {code:java}
> [hive@c1162-node2 root]$ hive --orcfiledump 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_
>  | grep -i bloom 
> SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. 
> SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory] 
> Processing data file 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_/bucket_0
>  [length: 791] 
> Structure for 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/managed/hive/bloomTest/delta_001_001_/bucket_0
>  {code}
> - 
> On the other hand, for Hive external tables it works: 
> - 
> {code:java}
> CREATE external TABLE `ext_bloomTest`( 
>    `msisdn` string, 
>    `imsi` varchar(20), 
>    `imei` bigint, 
>    `cell_id` bigint) 
>  ROW FORMAT SERDE 
>    'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
>  STORED AS INPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' 
>  OUTPUTFORMAT 
>    'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' 
>  TBLPROPERTIES ( 
>    'bucketing_version'='2', 
>    'orc.bloom.filter.columns'='msisdn,cell_id,imsi', 
>    'orc.bloom.filter.fpp'='0.02') {code}
> - 
> {code:java}
> [hive@c1162-node2 root]$ hive --orcfiledump 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  | grep -i bloom 
> SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. 
> SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory] 
> Processing data file 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  [length: 755] 
> Structure for 
> hdfs://c1162-node2.squadron-labs.com:8020/warehouse/tablespace/external/hive/ext_bloomTest/00_0
>  
> Stream: column 1 section BLOOM_FILTER_UTF8 start: 41 length 110 
> Stream: column 2 section BLOOM_FILTER_UTF8 start: 178 length 114 
> 

[jira] [Updated] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21401:
--
Status: Open  (was: Patch Available)

> Break up DDLTask - extract Table related operations
> ---
>
> Key: HIVE-21401
> URL: https://issues.apache.org/jira/browse/HIVE-21401
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21401.01.patch, HIVE-21401.02.patch, 
> HIVE-21401.03.patch, HIVE-21401.04.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the number of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now, ignore the issue of some operations being handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the 
> code base, the new ones in the new package are called DDLTask2 and DDLWork2, 
> thus avoiding the use of fully qualified class names where both the old and 
> the new classes are in use.
> Step #2: extract all the table related operations from the old DDLTask except 
> alter table, and move them under the new package. Also create the new 
> internal framework.
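
Purely as a hypothetical sketch of the per-operation shape described above (all names below are illustrative, not the classes introduced by the patch):

{code:java}
// Hypothetical sketch of the refactoring idea: one small class per DDL
// operation, driven by an immutable desc object, so the dispatching task can
// stay agnostic to what the operations actually do. All names are made up.
abstract class DdlOperationSketch {
  /** Executes the operation; returns 0 on success, mirroring task return codes. */
  abstract int execute() throws Exception;
}

final class DropTableDescSketch {
  final String tableName;  // immutable request data

  DropTableDescSketch(String tableName) {
    this.tableName = tableName;
  }
}

final class DropTableOperationSketch extends DdlOperationSketch {
  private final DropTableDescSketch desc;

  DropTableOperationSketch(DropTableDescSketch desc) {
    this.desc = desc;
  }

  @Override
  int execute() {
    // A real operation would call into the metastore here; this only illustrates the shape.
    System.out.println("would drop table " + desc.tableName);
    return 0;
  }
}
{code}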



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21401:
--
Attachment: HIVE-21401.04.patch

> Break up DDLTask - extract Table related operations
> ---
>
> Key: HIVE-21401
> URL: https://issues.apache.org/jira/browse/HIVE-21401
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21401.01.patch, HIVE-21401.02.patch, 
> HIVE-21401.03.patch, HIVE-21401.04.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the number of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now, ignore the issue of some operations being handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the 
> code base, the new ones in the new package are called DDLTask2 and DDLWork2, 
> thus avoiding the use of fully qualified class names where both the old and 
> the new classes are in use.
> Step #2: extract all the table related operations from the old DDLTask except 
> alter table, and move them under the new package. Also create the new 
> internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787982#comment-16787982
 ] 

Hive QA commented on HIVE-21401:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961682/HIVE-21401.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16410/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16410/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16410/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-03-08 15:37:47.791
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16410/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-03-08 15:37:47.794
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at d42809e HIVE-19968: UDF exception is not throw out (Laszlo Bodor 
via Zoltan Haindrich)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at d42809e HIVE-19968: UDF exception is not throw out (Laszlo Bodor 
via Zoltan Haindrich)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-03-08 15:37:48.878
+ rm -rf ../yetus_PreCommit-HIVE-Build-16410
+ mkdir ../yetus_PreCommit-HIVE-Build-16410
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16410
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16410/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java:4803
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java' with 
conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/plan/ShowCreateDatabaseDesc.java:1
error: ql/src/java/org/apache/hadoop/hive/ql/plan/ShowCreateDatabaseDesc.java: 
patch does not apply
error: 
core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java:
 does not exist in index
error: 
core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java:
 does not exist in index
error: 
util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook.java:
 does not exist in index
error: 
util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook1.java:
 does not exist in index
error: src/java/org/apache/hadoop/hive/ql/ddl/DDLOperation.java: does not exist 
in index
error: src/java/org/apache/hadoop/hive/ql/ddl/DDLOperationContext.java: does 
not exist in index
error: src/java/org/apache/hadoop/hive/ql/ddl/DDLTask2.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/ql/ddl/DDLWork2.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java: does not exist in 
index
error: 
src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/table/LoadPartitions.java:
 does not exist in index
error: 
src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/table/LoadTable.java:
 does not exist in index
error: src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java: does not 
exist in index
error: src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java: does not 
exist in index
error: src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java: does 
not exist in index
error: 

[jira] [Updated] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21401:
--
Status: Patch Available  (was: Open)

> Break up DDLTask - extract Table related operations
> ---
>
> Key: HIVE-21401
> URL: https://issues.apache.org/jira/browse/HIVE-21401
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21401.01.patch, HIVE-21401.02.patch, 
> HIVE-21401.03.patch, HIVE-21401.04.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these so that everything is split into smaller, more 
> maintainable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while both the old and the new DDLTask and DDLWork classes 
> exist in the code base, the new ones in the new package are called DDLTask2 
> and DDLWork2, so that fully qualified class names are not needed where both 
> the old and the new classes are in use.
> Step #2: extract all the table-related operations from the old DDLTask except 
> alter table, and move them under the new package. Also create the new 
> internal framework (a sketch of such a dispatch framework follows this 
> description).
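
The "DDLTask should be agnostic to the actual operations" goal can be pictured as a small registry that maps each request type to the operation handling it, so the task only looks operations up and runs them. This is a hedged sketch under assumed names (Desc, Operation, DdlDispatchSketch), not the actual DDLTask2/DDLWork2 wiring from the patch.

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Hypothetical dispatch sketch; not the actual DDLTask2 implementation. */
class DdlDispatchSketch {
  /** Marker for immutable request objects (the DDLDesc role). */
  interface Desc {
  }

  /** Every operation exposes the same entry point. */
  interface Operation {
    int execute() throws Exception;
  }

  /** Maps a request type to the factory that builds its operation. */
  private final Map<Class<? extends Desc>, Function<Desc, Operation>> registry =
      new HashMap<>();

  <T extends Desc> void register(Class<T> descClass, Function<T, Operation> factory) {
    registry.put(descClass, desc -> factory.apply(descClass.cast(desc)));
  }

  /** The task only looks up and runs an operation; it contains no DDL logic. */
  int execute(Desc desc) throws Exception {
    Function<Desc, Operation> factory = registry.get(desc.getClass());
    if (factory == null) {
      throw new IllegalStateException("No operation registered for " + desc.getClass());
    }
    return factory.apply(desc).execute();
  }
}
{code}

Adding a new DDL statement then means one new desc class, one new operation class and one register() call; the dispatcher itself never changes.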



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21403) Incorrect error code returned when retry bootstrap with different dump.

2019-03-08 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788000#comment-16788000
 ] 

mahesh kumar behera commented on HIVE-21403:


[^HIVE-21403.02.patch] looks fine to me.

+1

> Incorrect error code returned when retry bootstrap with different dump.
> ---
>
> Key: HIVE-21403
> URL: https://issues.apache.org/jira/browse/HIVE-21403
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Minor
>  Labels: DR, pull-request-available, replication
> Attachments: HIVE-21403.01.patch, HIVE-21403.02.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When an incremental bootstrap is retried on a table with a different bootstrap 
> dump, error code 4 is returned instead of 20017.
> {code}
> Error while processing statement: FAILED: Execution Error, return code 4 
> from org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask. 
> InvalidOperationException(message:Load path 
> hdfs://ctr-e139-1542663976389-61669-01-03.hwx.site:8020/apps/hive/repl/3d704b34-bf1a-40c9-b70c-57319e6462f6
>  not valid as target database is bootstrapped from some other path : 
> hdfs://ctr-e139-1542663976389-61669-01-03.hwx.site:8020/apps/hive/repl/c3e5ec9e-d951-48aa-b3f4-9aeaf5e010ea.)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21401) Break up DDLTask - extract Table related operations

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787983#comment-16787983
 ] 

Hive QA commented on HIVE-21401:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961682/HIVE-21401.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16411/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16411/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16411/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12961682/HIVE-21401.03.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961682 - PreCommit-HIVE-Build

> Break up DDLTask - extract Table related operations
> ---
>
> Key: HIVE-21401
> URL: https://issues.apache.org/jira/browse/HIVE-21401
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21401.01.patch, HIVE-21401.02.patch, 
> HIVE-21401.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these so that everything is split into smaller, more 
> maintainable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable (see the sketch after 
> this description)
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while both the old and the new DDLTask and DDLWork classes 
> exist in the code base, the new ones in the new package are called DDLTask2 
> and DDLWork2, so that fully qualified class names are not needed where both 
> the old and the new classes are in use.
> Step #2: extract all the table-related operations from the old DDLTask except 
> alter table, and move them under the new package. Also create the new 
> internal framework.
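
For the "make all the requests (DDLDesc subclasses) immutable" item, a minimal illustration of what an immutable desc could look like; the class and fields below are invented for the example and are not taken from the patch.

{code}
import java.io.Serializable;

/** Hypothetical immutable request object; not an actual DDLDesc subclass. */
final class TruncateTableDescSketch implements Serializable {
  private static final long serialVersionUID = 1L;

  private final String tableName;
  private final String partitionSpec; // null when the whole table is truncated

  TruncateTableDescSketch(String tableName, String partitionSpec) {
    this.tableName = tableName;
    this.partitionSpec = partitionSpec;
  }

  String getTableName() {
    return tableName;
  }

  String getPartitionSpec() {
    return partitionSpec;
  }

  // No setters: once the analyzer has built the request, execution cannot
  // change it, which keeps plan objects safe to share and to cache.
}
{code}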



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21325) Hive external table replication failed with Permission denied issue.

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787979#comment-16787979
 ] 

Hive QA commented on HIVE-21325:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961669/HIVE-21325.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15820 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[test_teradatabinaryfile] 
(batchId=2)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16409/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16409/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16409/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961669 - PreCommit-HIVE-Build

> Hive external table replication failed with Permission denied issue.
> 
>
> Key: HIVE-21325
> URL: https://issues.apache.org/jira/browse/HIVE-21325
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21325.01.patch, HIVE-21325.02.patch, 
> HIVE-21325.03.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> During external table replication the file copy is done in parallel to the 
> metadata replication. If the file copy task creates the directory with doAs 
> set to true, it will create the directory with its permission set to the user 
> running the repl command. In that case the metadata task may fail while 
> creating the table, as the hive user might not have access to the created 
> directory.
> The fix should be
>  # While creating the directory, if SQL-based authentication is enabled, then 
> disable storage-based authentication for the hive user.
>  # Currently the created directory gets the login user's access; it should 
> retain the source cluster's owner, group and permission (see the sketch below).
>  # For external table replication, don't create the directory during create 
> table and add partition.
>  
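
Fix item 2 above (retaining the source cluster's owner, group and permission on the created directory) can be sketched with the standard Hadoop FileSystem API; the helper below is illustrative only and is not the code from the HIVE-21325 patches.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative helper: create the target directory with the source directory's
 *  owner, group and permission instead of the login (doAs) user's defaults. */
final class PreserveOwnershipSketch {

  static void mkdirLikeSource(FileSystem srcFs, Path srcDir,
                              FileSystem dstFs, Path dstDir) throws IOException {
    FileStatus srcStatus = srcFs.getFileStatus(srcDir);

    if (!dstFs.exists(dstDir)) {
      dstFs.mkdirs(dstDir);
    }

    // Without these calls the directory belongs to the user running the repl
    // command, and the metadata task running as the hive user may not access it.
    dstFs.setPermission(dstDir, srcStatus.getPermission());
    dstFs.setOwner(dstDir, srcStatus.getOwner(), srcStatus.getGroup());
  }

  private PreserveOwnershipSketch() {
  }
}
{code}

Note that FileSystem#setOwner generally requires superuser privileges on HDFS, so in practice this step has to run as a suitably privileged user.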



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21325) Hive external table replication failed with Permission denied issue.

2019-03-08 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787958#comment-16787958
 ] 

Hive QA commented on HIVE-21325:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
16s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 1 new + 18 unchanged - 1 fixed 
= 19 total (was 19) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16409/dev-support/hive-personality.sh
 |
| git revision | master / d42809e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16409/yetus/diff-checkstyle-ql.txt
 |
| modules | C: standalone-metastore/metastore-server ql itests/hive-unit 
itests/hive-unit-hadoop2 U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16409/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Hive external table replication failed with Permission denied issue.
> 
>
> Key: HIVE-21325
> URL: https://issues.apache.org/jira/browse/HIVE-21325
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21325.01.patch, HIVE-21325.02.patch, 
> HIVE-21325.03.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> During external table replication the file copy is done in parallel to the 
> meta data replication. If the file copy task creates the directory with do as 
> 
