[jira] [Updated] (HIVE-20295) Remove !isNumber check after failed constant interpretation

2019-02-04 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller updated HIVE-20295:
---
Attachment: HIVE-20295.09.patch

> Remove !isNumber check after failed constant interpretation
> ---
>
> Key: HIVE-20295
> URL: https://issues.apache.org/jira/browse/HIVE-20295
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-20295.01.patch, HIVE-20295.02.patch, 
> HIVE-20295.03.patch, HIVE-20295.04.patch, HIVE-20295.05.patch, 
> HIVE-20295.06.patch, HIVE-20295.07.patch, HIVE-20295.08.patch, 
> HIVE-20295.09.patch
>
>
> During constant interpretation, if the number can't be parsed it might be 
> that the comparison is out of range for the type in question, in 
> which case it could be removed.
> https://github.com/apache/hive/blob/2cabb8da150b8fb980223fbd6c2c93b842ca3ee5/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java#L1163
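>
> A minimal sketch of the idea (hypothetical, not the actual TypeCheckProcFactory 
> logic): when a literal cannot be represented in the column's type, the 
> comparison can never succeed, so it can be folded to a constant instead of 
> relying on the !isNumber branch.
> {code:java}
> public class OutOfRangeFoldSketch {
>   // Hypothetical illustration only -- not the actual TypeCheckProcFactory code.
>   // If a literal cannot be represented in the column's type (here TINYINT),
>   // an equality comparison against it can never be true, so the predicate can
>   // be folded to a constant instead of falling back to the !isNumber branch.
>   static Boolean tryFoldTinyintEquality(long literal) {
>     if (literal < Byte.MIN_VALUE || literal > Byte.MAX_VALUE) {
>       return Boolean.FALSE;          // e.g. WHERE tinyint_col = 300
>     }
>     return null;                     // in range: keep the original predicate
>   }
>
>   public static void main(String[] args) {
>     System.out.println(tryFoldTinyintEquality(300));  // false
>     System.out.println(tryFoldTinyintEquality(5));    // null (not foldable)
>   }
> }
> {code}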



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21199) Replace all occurrences of new Byte with Byte.valueOf

2019-02-04 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller updated HIVE-21199:
---
Attachment: HIVE-21199.04.patch

> Replace all occurrences of new Byte with Byte.valueOf
> 
>
> Key: HIVE-21199
> URL: https://issues.apache.org/jira/browse/HIVE-21199
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Trivial
> Attachments: HIVE-21199.01.patch, HIVE-21199.02.patch, 
> HIVE-21199.03.patch, HIVE-21199.04.patch
>
>
> Creating Byte objects with new Byte(...) always allocates a new object, while 
> Byte.valueOf(...) returns a cached instance (all byte values are cached per 
> the Byte.valueOf javadoc), thus reducing GC overhead.
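>
> A minimal sketch of the difference (illustrative only, not taken from the 
> patch):
> {code:java}
> public class ByteValueOfDemo {
>   public static void main(String[] args) {
>     Byte a = Byte.valueOf((byte) 100);  // returns a cached instance
>     Byte b = Byte.valueOf((byte) 100);  // same cached instance
>     Byte c = new Byte((byte) 100);      // always allocates a new object
>     System.out.println(a == b);         // true  -- both refer to the cached value
>     System.out.println(a == c);         // false -- c is a distinct heap object
>   }
> }
> {code}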



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760532#comment-16760532
 ] 

Hive QA commented on HIVE-21009:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957582/HIVE-21009.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15735 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15939/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15939/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15939/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957582 - PreCommit-HIVE-Build

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: features, newbie, security
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.03.patch, HIVE-21009.04.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760505#comment-16760505
 ] 

Hive QA commented on HIVE-21009:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
20s{color} | {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 20s{color} 
| {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} common: The patch generated 2 new + 428 unchanged - 0 
fixed = 430 total (was 428) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} service: The patch generated 5 new + 24 unchanged - 0 
fixed = 29 total (was 24) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15939/dev-support/hive-personality.sh
 |
| git revision | master / 313e49f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus/patch-mvninstall-service.txt
 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus/patch-compile-service.txt
 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus/patch-compile-service.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus/diff-checkstyle-service.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus/patch-findbugs-service.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus/patch-asflicense-problems.txt
 |
| modules | C: common service U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15939/yetus.txt |
| Powered by | Apache 

[jira] [Commented] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760495#comment-16760495
 ] 

Hive QA commented on HIVE-21211:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957581/HIVE-21211.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15938/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15938/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15938/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-05 06:14:04.019
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15938/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-05 06:14:04.021
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 313e49f HIVE-20699: Query based compactor for full CRUD Acid 
tables (Vaibhav Gumashta reviewed by Eugene Koifman)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 313e49f HIVE-20699: Query based compactor for full CRUD Acid 
tables (Vaibhav Gumashta reviewed by Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-05 06:14:05.458
+ rm -rf ../yetus_PreCommit-HIVE-Build-15938
+ mkdir ../yetus_PreCommit-HIVE-Build-15938
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15938
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15938/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc1390342796151067496.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc1390342796151067496.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) on project hive-pre-upgrade: Execution 
process-resource-bundles of goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed. 
ConcurrentModificationException -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hive-pre-upgrade
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-15938
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957581 - PreCommit-HIVE-Build

> Upgrade jetty version to 9.4.x
> --
>
> Key: 

[jira] [Commented] (HIVE-21063) Support statistics in cachedStore for transactional table

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760493#comment-16760493
 ] 

Hive QA commented on HIVE-21063:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957580/HIVE-21063.04.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15736 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15937/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15937/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15937/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957580 - PreCommit-HIVE-Build

> Support statistics in cachedStore for transactional table
> -
>
> Key: HIVE-21063
> URL: https://issues.apache.org/jira/browse/HIVE-21063
> Project: Hive
>  Issue Type: Task
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21063.01.patch, HIVE-21063.02.patch, 
> HIVE-21063.03.patch, HIVE-21063.04.patch
>
>
> Currently, statistics for transactional tables are not stored in the cached 
> store because of consistency issues. Validation needs to be added for valid 
> write ids, along with generation of aggregate stats based on valid partitions. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-685) add UDFquote

2019-02-04 Thread Mani M (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760484#comment-16760484
 ] 

Mani M commented on HIVE-685:
-

Hi @pvary
I have created the test case for the GenericUDFQuote function in the same format 
as used for the lpad function, but 885 test cases are failing as given above, 
along with the other test cases. Can you shed some light on how to sort out this 
issue?

> add UDFquote
> 
>
> Key: HIVE-685
> URL: https://issues.apache.org/jira/browse/HIVE-685
> Project: Hive
>  Issue Type: New Feature
>Reporter: Namit Jain
>Assignee: Mani M
>Priority: Major
>  Labels: todoc4.0, udf
> Fix For: 4.0.0
>
> Attachments: HIVE.685.02.PATCH, HIVE.685.03.PATCH, HIVE.685.04.PATCH, 
> HIVE.685.05.PATCH, HIVE.685.06.PATCH, HIVE.685.07.PATCH, HIVE.685.PATCH
>
>
> add UDFquote
> look at
> http://dev.mysql.com/doc/refman/5.0/en/func-op-summary-ref.html
> for details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21063) Support statistics in cachedStore for transactional table

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760477#comment-16760477
 ] 

Hive QA commented on HIVE-21063:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
6s{color} | {color:blue} standalone-metastore/metastore-server in master has 
184 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
37s{color} | {color:blue} ql in master has 2307 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 6s{color} | {color:green} The patch metastore-server passed checkstyle {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} ql: The patch generated 0 new + 15 unchanged - 1 
fixed = 15 total (was 16) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} The patch server-extensions passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 15 
unchanged - 5 fixed = 15 total (was 20) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} standalone-metastore/metastore-server generated 0 new 
+ 183 unchanged - 1 fixed = 183 total (was 184) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} server-extensions in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} standalone-metastore_metastore-server generated 0 
new + 48 unchanged - 1 fixed = 48 total (was 49) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} server-extensions in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} hive-unit in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} 

[jira] [Updated] (HIVE-21213) Acid table bootstrap replication needs to handle directory created by compaction with txn id

2019-02-04 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21213:
---
Description: The current implementation of compaction uses the txn id in 
the directory name. This is used to isolate the queries from reading the 
directory until compaction has finished and to avoid the compactor marking used 
earlier. In case of replication, the directory can not be copied as the txn 
list at target may be different from source. So conversion logic is required to 
create a new directory with valid txn at target and dump the data to the newly 
created directory.  (was: The current implementation of compaction makes use of 
compaction to use the txn id in the directory name. This is used to isolate the 
queries from reading the directory until compaction has finished. In case of 
replication, the directory can not be copied as the txn list at target may be 
different from source. So conversion logic is required to create a new 
directory with valid txn at target and dump the data to the newly created 
directory.)

> Acid table bootstrap replication needs to handle directory created by 
> compaction with txn id
> 
>
> Key: HIVE-21213
> URL: https://issues.apache.org/jira/browse/HIVE-21213
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2, repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>
> The current implementation of compaction uses the txn id in the directory 
> name. This is used to isolate the queries from reading the directory until 
> compaction has finished and to avoid the compactor marking used earlier. In 
> case of replication, the directory can not be copied as the txn list at 
> target may be different from source. So conversion logic is required to 
> create a new directory with valid txn at target and dump the data to the 
> newly created directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21213) Acid table bootstrap replication needs to handle directory created by compaction with txn id

2019-02-04 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera reassigned HIVE-21213:
--


> Acid table bootstrap replication needs to handle directory created by 
> compaction with txn id
> 
>
> Key: HIVE-21213
> URL: https://issues.apache.org/jira/browse/HIVE-21213
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive, HiveServer2, repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>
> The current implementation of compaction makes use of compaction to use the 
> txn id in the directory name. This is used to isolate the queries from 
> reading the directory until compaction has finished. In case of replication, 
> the directory can not be copied as the txn list at target may be different 
> from source. So conversion logic is required to create a new directory with 
> valid txn at target and dump the data to the newly created directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760467#comment-16760467
 ] 

Hive QA commented on HIVE-21210:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957577/HIVE-21210.2.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 15731 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynamic_partition_insert]
 (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_1] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_1] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_3] 
(batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=65)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_cttas] (batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonmr_fetch] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[partition_wise_fileformat15]
 (batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[partition_wise_fileformat16]
 (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup3] (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_vc] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppr_pushdown3] 
(batchId=30)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_groupby_reduce] 
(batchId=61)
org.apache.hadoop.hive.cli.TestLocalSparkCliDriver.testCliDriver[spark_local_queries]
 (batchId=277)
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15936/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15936/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15936/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 31 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957577 - PreCommit-HIVE-Build

> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch, HIVE-21210.2.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places and each implementation is 
> a little different and requires different configurations. I think that Hive 
> needs to rein in and standardize the way that threadpools are used, and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads running in HS2 as the 
> number of simultaneous connections increases, and they surely cause contention 
> with one another.
> Here is an example:
> {code:java|title=CombineHiveInputFormat.java}
>   // max number of threads we can use to check 

[jira] [Updated] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated HIVE-21211:
---
Attachment: HIVE-21211.3.patch
Status: Patch Available  (was: Open)

> Upgrade jetty version to 9.4.x
> --
>
> Key: HIVE-21211
> URL: https://issues.apache.org/jira/browse/HIVE-21211
> Project: Hive
>  Issue Type: Task
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-21211.1.patch, HIVE-21211.2.patch, 
> HIVE-21211.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
Status: In Progress  (was: Patch Available)

Need to make patch using --binary for the jceks file.

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 2.3.1, 2.3.0, 2.2.0, 2.1.1, 2.1.0
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: features, newbie, security
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.03.patch, HIVE-21009.04.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
Attachment: HIVE-21009.04.patch

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: features, newbie, security
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.03.patch, HIVE-21009.04.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
Status: Patch Available  (was: In Progress)

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 2.3.1, 2.3.0, 2.2.0, 2.1.1, 2.1.0
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: features, newbie, security
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.03.patch, HIVE-21009.04.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760440#comment-16760440
 ] 

Hive QA commented on HIVE-21210:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
34s{color} | {color:blue} ql in master has 2307 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} common: The patch generated 5 new + 0 unchanged - 0 
fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
34s{color} | {color:red} ql: The patch generated 4 new + 10 unchanged - 46 
fixed = 14 total (was 56) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} common generated 18 new + 27 unchanged - 0 fixed = 45 
total (was 27) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15936/dev-support/hive-personality.sh
 |
| git revision | master / 313e49f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15936/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15936/yetus/diff-checkstyle-ql.txt
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15936/yetus/diff-javadoc-javadoc-common.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15936/yetus/patch-asflicense-problems.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15936/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch, HIVE-21210.2.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places and each implementation is 
> a little 

[jira] [Updated] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated HIVE-21211:
---
Status: Open  (was: Patch Available)

> Upgrade jetty version to 9.4.x
> --
>
> Key: HIVE-21211
> URL: https://issues.apache.org/jira/browse/HIVE-21211
> Project: Hive
>  Issue Type: Task
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-21211.1.patch, HIVE-21211.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21063) Support statistics in cachedStore for transactional table

2019-02-04 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21063:
---
Status: Patch Available  (was: Open)

> Support statistics in cachedStore for transactional table
> -
>
> Key: HIVE-21063
> URL: https://issues.apache.org/jira/browse/HIVE-21063
> Project: Hive
>  Issue Type: Task
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21063.01.patch, HIVE-21063.02.patch, 
> HIVE-21063.03.patch, HIVE-21063.04.patch
>
>
> Currently, statistics for transactional tables are not stored in the cached 
> store because of consistency issues. Validation needs to be added for valid 
> write ids, along with generation of aggregate stats based on valid partitions. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760427#comment-16760427
 ] 

Hive QA commented on HIVE-21009:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957571/HIVE-21009.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15935/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15935/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15935/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-05 03:52:00.728
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15935/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-05 03:52:00.731
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   4a4b9ca..313e49f  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 313e49f HIVE-20699: Query based compactor for full CRUD Acid 
tables (Vaibhav Gumashta reviewed by Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-05 03:52:02.658
+ rm -rf ../yetus_PreCommit-HIVE-Build-15935
+ mkdir ../yetus_PreCommit-HIVE-Build-15935
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15935
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15935/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
fatal: git diff header lacks filename information when removing 0 leading 
pathname components (line 229)
error: cannot apply binary patch to 
'service/src/test/resources/creds/test.jceks' without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 
'service/src/test/resources/creds/test.jceks' without full index line
error: service/src/test/resources/creds/test.jceks: patch does not apply
error: src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in 
index
error: patch failed: pom.xml:323
Falling back to three-way merge...
Applied patch to 'pom.xml' with conflicts.
error: 
src/java/org/apache/hive/service/auth/LdapAuthenticationProviderImpl.java: does 
not exist in index
error: 
src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java: 
does not exist in index
error: cannot apply binary patch to 'src/test/resources/creds/test.jceks' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'src/test/resources/creds/test.jceks' 
without full index line
error: src/test/resources/creds/test.jceks: patch does not apply
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-15935
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957571 - PreCommit-HIVE-Build

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>  

[jira] [Updated] (HIVE-21063) Support statistics in cachedStore for transactional table

2019-02-04 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21063:
---
Attachment: HIVE-21063.04.patch

> Support statistics in cachedStore for transactional table
> -
>
> Key: HIVE-21063
> URL: https://issues.apache.org/jira/browse/HIVE-21063
> Project: Hive
>  Issue Type: Task
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21063.01.patch, HIVE-21063.02.patch, 
> HIVE-21063.03.patch, HIVE-21063.04.patch
>
>
> Currently, statistics for transactional tables are not stored in the cached 
> store because of consistency issues. Validation needs to be added for valid 
> write ids, along with generation of aggregate stats based on valid partitions. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21063) Support statistics in cachedStore for transactional table

2019-02-04 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21063:
---
Status: Open  (was: Patch Available)

> Support statistics in cachedStore for transactional table
> -
>
> Key: HIVE-21063
> URL: https://issues.apache.org/jira/browse/HIVE-21063
> Project: Hive
>  Issue Type: Task
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21063.01.patch, HIVE-21063.02.patch, 
> HIVE-21063.03.patch
>
>
> Currently, statistics for transactional tables are not stored in the cached 
> store because of consistency issues. Validation needs to be added for valid 
> write ids, along with generation of aggregate stats based on valid partitions. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21063) Support statistics in cachedStore for transactional table

2019-02-04 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21063:
---
Attachment: (was: HIVE-21063.04.patch)

> Support statistics in cachedStore for transactional table
> -
>
> Key: HIVE-21063
> URL: https://issues.apache.org/jira/browse/HIVE-21063
> Project: Hive
>  Issue Type: Task
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21063.01.patch, HIVE-21063.02.patch, 
> HIVE-21063.03.patch
>
>
> Currently, statistics for transactional tables are not stored in the cached 
> store because of consistency issues. Validation needs to be added for valid 
> write ids, along with generation of aggregate stats based on valid partitions. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21207) Use 0.12.0 libthrift version in Hive

2019-02-04 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760419#comment-16760419
 ] 

Vihang Karajgaonkar commented on HIVE-21207:


Hi [~osayankin] I think you will also have to regenerate the thrift files.

> Use 0.12.0 libthrift version in Hive
> 
>
> Key: HIVE-21207
> URL: https://issues.apache.org/jira/browse/HIVE-21207
> Project: Hive
>  Issue Type: Improvement
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
>Priority: Major
> Attachments: HIVE-21207.1.patch
>
>
> Use 0.12.0 libthrift version in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21212) LLAP: shuffle port config uses internal configuration

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760417#comment-16760417
 ] 

Hive QA commented on HIVE-21212:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957570/HIVE-21212.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15724 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15934/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15934/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15934/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957570 - PreCommit-HIVE-Build

> LLAP: shuffle port config uses internal configuration
> -
>
> Key: HIVE-21212
> URL: https://issues.apache.org/jira/browse/HIVE-21212
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21212.1.patch
>
>
> LlapDaemon main() reads daemon configuration but for shuffle port it reads 
> internal config instead of hive.llap.daemon.yarn.shuffle.port
> [https://github.com/apache/hive/blob/c8eb03affa2533f4827cf6497e7c9873bc9520a7/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java#L535]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-02-04 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760414#comment-16760414
 ] 

BELUGA BEHR commented on HIVE-20849:


[~pvary] [~ngangam] Please review. Thanks. :)

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, 
> HIVE-20849.5.patch, HIVE-20849.6.patch
>
>
> I was looking at this class because it blasts a lot of useless (to an admin) 
> information to the logs.  Especially if the table has a lot of columns, I see 
> big blocks of logging that are meaningless to me.  I request that the logging 
> be toned down to debug, along with some other improvements to the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-02-04 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760413#comment-16760413
 ] 

BELUGA BEHR commented on HIVE-20849:


The patch here includes the following change in addition to the logging:

{code:java}
- new Long(((Integer) convObj).intValue());
+ Long.valueOf(((Integer) convObj).longValue());
{code}

This is because with the current code, a new {{Long}} object is created every 
time, whereas the {{Long#valueOf}} code benefits from caching:

{quote}
If a new Long instance is not required, this method should generally be used in 
preference to the constructor Long(long), as this method is likely to yield 
significantly better space and time performance by caching frequently requested 
values
{quote}

https://docs.oracle.com/javase/7/docs/api/java/lang/Long.html#valueOf(long)
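
The cache effect is easy to observe with a small, self-contained example 
(illustrative only, not part of the patch):

{code:java}
public class LongValueOfDemo {
  public static void main(String[] args) {
    Integer convObj = 42;
    Long cached1 = Long.valueOf(convObj.longValue()); // served from the Long cache
    Long cached2 = Long.valueOf(convObj.longValue());
    Long fresh = new Long(convObj.longValue());       // always a new allocation
    System.out.println(cached1 == cached2); // true  -- same cached instance (-128..127)
    System.out.println(cached1 == fresh);   // false -- new Long() bypasses the cache
  }
}
{code}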

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, 
> HIVE-20849.5.patch, HIVE-20849.6.patch
>
>
> I was looking at this class because it blasts a lot of useless (to an admin) 
> information to the logs.  Especially if the table has a lot of columns, I see 
> big blocks of logging that are meaningless to me.  I request that the logging 
> be toned down to debug, along with some other improvements to the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21210:
---
Status: Patch Available  (was: Open)

> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch, HIVE-21210.2.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places and each implementation is 
> a little different and requires different configurations. I think that Hive 
> needs to rein in and standardize the way that threadpools are used, and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads running in HS2 as the 
> number of simultaneous connections increases, and they surely cause contention 
> with one another.
> Here is an example:
> {code:java|title=CombineHiveInputFormat.java}
>   // max number of threads we can use to check non-combinable paths
>   private static final int MAX_CHECK_NONCOMBINABLE_THREAD_NUM = 50;
>   private static final int DEFAULT_NUM_PATH_PER_THREAD = 100;
> {code}
> When building the splits for an MR job, there are up to 50 threads running per 
> query and there is not much scaling here; it's simply a 1 thread : 100 files 
> ratio.  This implies that to process 5000 files there are 50 threads, and beyond 
> that, 50 threads are still used. Many Hive jobs these days involve more than 
> 5000 files, so this does not scale well at larger sizes.
> This is not configurable (even manually), it doesn't change when the hardware 
> specs increase, and 50 threads seems like a lot when a service must support 
> up to 80 connections:
> [https://www.cloudera.com/documentation/enterprise/5/latest/topics/admin_hive_tuning.html]
> Not to mention, I have never seen a scenario where HS2 is running on a host 
> all by itself and has the entire system dedicated to it. Therefore it should 
> be more friendly and spin up fewer threads.
> I am attaching a patch here that provides a few features:
>  * Common module that produces an {{ExecutorService}} which caps the number of 
> threads it spins up at the number of processors the host has. Keep in mind that 
> a class may submit as many work units ({{Callables}}) as it would like, but 
> the number of threads in the pool is capped.
>  * Common module for partitioning work. That is, allow for a generic 
> framework for dividing work into partitions (i.e. batches).
>  * Modify {{CombineHiveInputFormat}} to take advantage of both modules, 
> performing its same duties in a more Java OO way than is currently implemented.
>  * Add a partitioning (batching) implementation that enforces partitioning of 
> a {{Collection}} based on the natural log of the {{Collection}} size so that 
> it scales more slowly than a simple 1:100 ratio.
>  * Simplify unit test code for {{CombineHiveInputFormat}}.
> My hope is to introduce these tools to {{CombineHiveInputFormat}} and then to 
> drop them into other places.  One of the things I will introduce here is a 
> "direct thread" {{ExecutorService}} so that even if a thread pool is disabled 
> by configuration, the code will still use an {{ExecutorService}}. That way the 
> project can avoid logic like "if this function is serviced by a thread pool, 
> use an {{ExecutorService}} (and remember to close it later!), otherwise create 
> a single thread", so that things like [HIVE-16949] can be avoided in the 
> future.  Everything will just use an {{ExecutorService}}.
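
To make the first and fourth bullets concrete, here is a minimal, hypothetical 
sketch of a processor-capped pool and natural-log batching (the names 
{{PoolSketch}}, {{newBoundedPool}} and {{logBatches}} are made up; this is one 
possible reading of the proposal, not the attached patch):

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSketch {

  // Cap the pool at the host's processor count instead of a fixed 50 threads.
  static ExecutorService newBoundedPool() {
    int threads = Math.max(1, Runtime.getRuntime().availableProcessors());
    return Executors.newFixedThreadPool(threads);
  }

  // Partition a collection into batches sized n / ln(n), so the number of
  // batches grows roughly like ln(n) rather than a flat 1:100 ratio.
  static <T> List<List<T>> logBatches(Collection<T> items) {
    int n = items.size();
    int batchSize = Math.max(1, (int) Math.ceil(n / Math.max(1.0, Math.log(n))));
    List<List<T>> batches = new ArrayList<>();
    List<T> current = new ArrayList<>(batchSize);
    for (T item : items) {
      current.add(item);
      if (current.size() == batchSize) {
        batches.add(current);
        current = new ArrayList<>(batchSize);
      }
    }
    if (!current.isEmpty()) {
      batches.add(current);
    }
    return batches;
  }
}
{code}

Under this reading, 5000 paths would be split into roughly 9 batches served by 
at most {{availableProcessors()}} threads, instead of 50 batches of 100 files 
on 50 threads.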



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21210:
---
Attachment: HIVE-21210.2.patch

> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch, HIVE-21210.2.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places and each implementation is 
> a little different and requires different configurations. I think that Hive 
> needs to rein in and standardize the way that threadpools are used, and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads running in HS2 as the 
> number of simultaneous connections increases, and they surely cause contention 
> with one another.
> Here is an example:
> {code:java|title=CombineHiveInputFormat.java}
>   // max number of threads we can use to check non-combinable paths
>   private static final int MAX_CHECK_NONCOMBINABLE_THREAD_NUM = 50;
>   private static final int DEFAULT_NUM_PATH_PER_THREAD = 100;
> {code}
> When building the splits for an MR job, there are up to 50 threads running per 
> query, and there is not much scaling here; it's simply a 1 thread : 100 files 
> ratio.  This implies that to process 5000 files there are 50 threads, and 
> beyond that, 50 threads are still used. Many Hive jobs these days involve more 
> than 5000 files, so it does not scale well at larger sizes.
> This is not configurable (even manually); it doesn't change when the hardware 
> specs increase, and 50 threads seems like a lot when a service must support 
> up to 80 connections:
> [https://www.cloudera.com/documentation/enterprise/5/latest/topics/admin_hive_tuning.html]
> Not to mention, I have never seen a scenario where HS2 is running on a host 
> all by itself with the entire system dedicated to it. Therefore it should 
> be more friendly and spin up fewer threads.
> I am attaching a patch here that provides a few features:
>  * Common module that produces an {{ExecutorService}} which caps the number of 
> threads it spins up at the number of processors the host has. Keep in mind that 
> a class may submit as many work units ({{Callables}}) as it would like, but 
> the number of threads in the pool is capped.
>  * Common module for partitioning work. That is, allow for a generic 
> framework for dividing work into partitions (i.e. batches).
>  * Modify {{CombineHiveInputFormat}} to take advantage of both modules, 
> performing its same duties in a more Java OO way than is currently implemented.
>  * Add a partitioning (batching) implementation that enforces partitioning of 
> a {{Collection}} based on the natural log of the {{Collection}} size so that 
> it scales more slowly than a simple 1:100 ratio.
>  * Simplify unit test code for {{CombineHiveInputFormat}}.
> My hope is to introduce these tools to {{CombineHiveInputFormat}} and then to 
> drop them into other places.  One of the things I will introduce here is a 
> "direct thread" {{ExecutorService}} so that even if a thread pool is disabled 
> by configuration, the code will still use an {{ExecutorService}}. That way the 
> project can avoid logic like "if this function is serviced by a thread pool, 
> use an {{ExecutorService}} (and remember to close it later!), otherwise create 
> a single thread", so that things like [HIVE-16949] can be avoided in the 
> future.  Everything will just use an {{ExecutorService}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21210:
---
Status: Open  (was: Patch Available)

> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places and each implementation is 
> a little different and requires different configurations. I think that Hive 
> needs to rein in and standardize the way that threadpools are used, and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads running in HS2 as the 
> number of simultaneous connections increases, and they surely cause contention 
> with one another.
> Here is an example:
> {code:java|title=CombineHiveInputFormat.java}
>   // max number of threads we can use to check non-combinable paths
>   private static final int MAX_CHECK_NONCOMBINABLE_THREAD_NUM = 50;
>   private static final int DEFAULT_NUM_PATH_PER_THREAD = 100;
> {code}
> When building the splits for an MR job, there are up to 50 threads running per 
> query, and there is not much scaling here; it's simply a 1 thread : 100 files 
> ratio.  This implies that to process 5000 files there are 50 threads, and 
> beyond that, 50 threads are still used. Many Hive jobs these days involve more 
> than 5000 files, so it does not scale well at larger sizes.
> This is not configurable (even manually); it doesn't change when the hardware 
> specs increase, and 50 threads seems like a lot when a service must support 
> up to 80 connections:
> [https://www.cloudera.com/documentation/enterprise/5/latest/topics/admin_hive_tuning.html]
> Not to mention, I have never seen a scenario where HS2 is running on a host 
> all by itself with the entire system dedicated to it. Therefore it should 
> be more friendly and spin up fewer threads.
> I am attaching a patch here that provides a few features:
>  * Common module that produces an {{ExecutorService}} which caps the number of 
> threads it spins up at the number of processors the host has. Keep in mind that 
> a class may submit as many work units ({{Callables}}) as it would like, but 
> the number of threads in the pool is capped.
>  * Common module for partitioning work. That is, allow for a generic 
> framework for dividing work into partitions (i.e. batches).
>  * Modify {{CombineHiveInputFormat}} to take advantage of both modules, 
> performing its same duties in a more Java OO way than is currently implemented.
>  * Add a partitioning (batching) implementation that enforces partitioning of 
> a {{Collection}} based on the natural log of the {{Collection}} size so that 
> it scales more slowly than a simple 1:100 ratio.
>  * Simplify unit test code for {{CombineHiveInputFormat}}.
> My hope is to introduce these tools to {{CombineHiveInputFormat}} and then to 
> drop them into other places.  One of the things I will introduce here is a 
> "direct thread" {{ExecutorService}} so that even if a thread pool is disabled 
> by configuration, the code will still use an {{ExecutorService}}. That way the 
> project can avoid logic like "if this function is serviced by a thread pool, 
> use an {{ExecutorService}} (and remember to close it later!), otherwise create 
> a single thread", so that things like [HIVE-16949] can be avoided in the 
> future.  Everything will just use an {{ExecutorService}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21193) Support LZO Compression with CombineHiveInputFormat

2019-02-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21193:
---
Status: Open  (was: Patch Available)

> Support LZO Compression with CombineHiveInputFormat
> ---
>
> Key: HIVE-21193
> URL: https://issues.apache.org/jira/browse/HIVE-21193
> Project: Hive
>  Issue Type: Improvement
>  Components: Compression
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21193.1.patch
>
>
> Regarding LZO compression with Hive...
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LZO
> It does not work out of the box if there are {{.lzo.index}} files present.  
> As I understand it, this is because the default Hive input format 
> {{CombineHiveInputFormat}} does not handle this case correctly.  It does not 
> expect a mix of data files and index files, so it lumps them all together 
> when making the combined splits, and Mappers fail when they try to process 
> the {{.lzo.index}} files as data.  When using the original 
> {{HiveInputFormat}}, it correctly identifies the {{.lzo.index}} files because 
> it considers each file individually.
> Allow {{CombineHiveInputFormat}} to short-circuit LZO files and to not 
> combine them.
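
For illustration, a minimal sketch of the kind of short-circuit check described 
above (the helper {{LzoSplitUtils}} is hypothetical and not the attached patch, 
which may hook the check into the existing non-combinable path logic instead):

{code:java}
import org.apache.hadoop.fs.Path;

public final class LzoSplitUtils {

  private LzoSplitUtils() {
  }

  // LZO data files and their .lzo.index companions should not be lumped into
  // combined splits; callers can route such paths to per-file splits instead.
  static boolean isLzoRelated(Path path) {
    String name = path.getName().toLowerCase();
    return name.endsWith(".lzo") || name.endsWith(".lzo.index");
  }
}
{code}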



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21193) Support LZO Compression with CombineHiveInputFormat

2019-02-04 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760393#comment-16760393
 ] 

BELUGA BEHR commented on HIVE-21193:


Will write a unit test... waiting on [HIVE-21210] to be considered for inclusion 
in the project.

> Support LZO Compression with CombineHiveInputFormat
> ---
>
> Key: HIVE-21193
> URL: https://issues.apache.org/jira/browse/HIVE-21193
> Project: Hive
>  Issue Type: Improvement
>  Components: Compression
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21193.1.patch
>
>
> Regarding LZO compression with Hive...
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LZO
> It does not work out of the box if there are {{.lzo.index}} files present.  
> As I understand it, this is because the default Hive input format 
> {{CombineHiveInputFormat}} does not handle this case correctly.  It does not 
> expect a mix of data files and index files, so it lumps them all together 
> when making the combined splits, and Mappers fail when they try to process 
> the {{.lzo.index}} files as data.  When using the original 
> {{HiveInputFormat}}, it correctly identifies the {{.lzo.index}} files because 
> it considers each file individually.
> Allow {{CombineHiveInputFormat}} to short-circuit LZO files and to not 
> combine them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
Status: Patch Available  (was: In Progress)

The patch is now relative to the root folder instead of the service/ folder.

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 2.3.1, 2.3.0, 2.2.0, 2.1.1, 2.1.0
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: features, newbie, security
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.03.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
Attachment: HIVE-21009.03.patch

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: features, newbie, security
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.03.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
Status: In Progress  (was: Patch Available)

Accidentally gave patch from service/ folder instead of from root. Trying this 
again.

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 2.3.1, 2.3.0, 2.2.0, 2.1.1, 2.1.0
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: features, newbie, security
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21212) LLAP: shuffle port config uses internal configuration

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760388#comment-16760388
 ] 

Hive QA commented on HIVE-21212:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} llap-server in master has 81 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15934/dev-support/hive-personality.sh
 |
| git revision | master / 4a4b9ca |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15934/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LLAP: shuffle port config uses internal configuration
> -
>
> Key: HIVE-21212
> URL: https://issues.apache.org/jira/browse/HIVE-21212
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21212.1.patch
>
>
> LlapDaemon main() reads the daemon configuration, but for the shuffle port it 
> reads an internal config key instead of hive.llap.daemon.yarn.shuffle.port
> [https://github.com/apache/hive/blob/c8eb03affa2533f4827cf6497e7c9873bc9520a7/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java#L535]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20699:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to master. Thanks [~ekoifman] for patiently reviewing.

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.11.patch, HIVE-20699.11.patch, 
> HIVE-20699.2.patch, HIVE-20699.3.patch, HIVE-20699.4.patch, 
> HIVE-20699.5.patch, HIVE-20699.6.patch, HIVE-20699.7.patch, 
> HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest approach would be to just use Insert 
> Overwrite, but that would change all ROW__IDs, which we don't want.
> We need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760377#comment-16760377
 ] 

Hive QA commented on HIVE-20699:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957558/HIVE-20699.11.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15729 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15932/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15932/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15932/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957558 - PreCommit-HIVE-Build

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.11.patch, HIVE-20699.11.patch, 
> HIVE-20699.2.patch, HIVE-20699.3.patch, HIVE-20699.4.patch, 
> HIVE-20699.5.patch, HIVE-20699.6.patch, HIVE-20699.7.patch, 
> HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest approach would be to just use Insert 
> Overwrite, but that would change all ROW__IDs, which we don't want.
> We need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760380#comment-16760380
 ] 

Hive QA commented on HIVE-21009:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957564/HIVE-21009.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15933/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15933/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15933/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-05 01:36:14.804
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15933/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-05 01:36:14.807
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git clean -f -d
Removing ${project.basedir}/
Removing itests/${project.basedir}/
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-05 01:36:15.514
+ rm -rf ../yetus_PreCommit-HIVE-Build-15933
+ mkdir ../yetus_PreCommit-HIVE-Build-15933
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15933
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15933/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
fatal: git diff header lacks filename information when removing 0 leading 
pathname components (line 229)
error: cannot apply binary patch to 
'service/src/test/resources/creds/test.jceks' without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 
'service/src/test/resources/creds/test.jceks' without full index line
error: service/src/test/resources/creds/test.jceks: patch does not apply
error: src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in 
index
error: patch failed: pom.xml:323
Falling back to three-way merge...
Applied patch to 'pom.xml' with conflicts.
error: 
src/java/org/apache/hive/service/auth/LdapAuthenticationProviderImpl.java: does 
not exist in index
error: 
src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java: 
does not exist in index
error: cannot apply binary patch to 'src/test/resources/creds/test.jceks' 
without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 'src/test/resources/creds/test.jceks' 
without full index line
error: src/test/resources/creds/test.jceks: patch does not apply
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-15933
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957564 - PreCommit-HIVE-Build

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: 

[jira] [Updated] (HIVE-21212) LLAP: shuffle port config uses internal configuration

2019-02-04 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21212:
-
Affects Version/s: (was: 3.2.0)

> LLAP: shuffle port config uses internal configuration
> -
>
> Key: HIVE-21212
> URL: https://issues.apache.org/jira/browse/HIVE-21212
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21212.1.patch
>
>
> LlapDaemon main() reads the daemon configuration, but for the shuffle port it 
> reads an internal config key instead of hive.llap.daemon.yarn.shuffle.port
> [https://github.com/apache/hive/blob/c8eb03affa2533f4827cf6497e7c9873bc9520a7/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java#L535]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760381#comment-16760381
 ] 

Eugene Koifman commented on HIVE-20699:
---

+1 patch 11

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.11.patch, HIVE-20699.11.patch, 
> HIVE-20699.2.patch, HIVE-20699.3.patch, HIVE-20699.4.patch, 
> HIVE-20699.5.patch, HIVE-20699.6.patch, HIVE-20699.7.patch, 
> HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest approach would be to just use Insert 
> Overwrite, but that would change all ROW__IDs, which we don't want.
> We need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21212) LLAP: shuffle port config uses internal configuration

2019-02-04 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21212:
-
Status: Patch Available  (was: Open)

> LLAP: shuffle port config uses internal configuration
> -
>
> Key: HIVE-21212
> URL: https://issues.apache.org/jira/browse/HIVE-21212
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21212.1.patch
>
>
> LlapDaemon main() reads the daemon configuration, but for the shuffle port it 
> reads an internal config key instead of hive.llap.daemon.yarn.shuffle.port
> [https://github.com/apache/hive/blob/c8eb03affa2533f4827cf6497e7c9873bc9520a7/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java#L535]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21212) LLAP: shuffle port config uses internal configuration

2019-02-04 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760376#comment-16760376
 ] 

Prasanth Jayachandran commented on HIVE-21212:
--

[~gopalv] can you please review this one liner?

> LLAP: shuffle port config uses internal configuration
> -
>
> Key: HIVE-21212
> URL: https://issues.apache.org/jira/browse/HIVE-21212
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21212.1.patch
>
>
> LlapDaemon main() reads the daemon configuration, but for the shuffle port it 
> reads an internal config key instead of hive.llap.daemon.yarn.shuffle.port
> [https://github.com/apache/hive/blob/c8eb03affa2533f4827cf6497e7c9873bc9520a7/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java#L535]
>  
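
For context, a minimal sketch of reading the documented key from the daemon 
configuration (assuming the daemon conf is a Hadoop {{Configuration}}; the 
actual one-line fix may instead use the {{HiveConf}} ConfVars accessor for 
hive.llap.daemon.yarn.shuffle.port):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ShufflePortSketch {

  // Illustration only: resolve the shuffle port from the public, documented
  // key on the daemon configuration rather than an internal config object.
  static int shufflePort(Configuration daemonConf, int defaultPort) {
    return daemonConf.getInt("hive.llap.daemon.yarn.shuffle.port", defaultPort);
  }
}
{code}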



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21212) LLAP: shuffle port config uses internal configuration

2019-02-04 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21212:
-
Attachment: HIVE-21212.1.patch

> LLAP: shuffle port config uses internal configuration
> -
>
> Key: HIVE-21212
> URL: https://issues.apache.org/jira/browse/HIVE-21212
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21212.1.patch
>
>
> LlapDaemon main() reads the daemon configuration, but for the shuffle port it 
> reads an internal config key instead of hive.llap.daemon.yarn.shuffle.port
> [https://github.com/apache/hive/blob/c8eb03affa2533f4827cf6497e7c9873bc9520a7/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java#L535]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21212) LLAP: shuffle port config uses internal configuration

2019-02-04 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21212:
-
Target Version/s: 4.0.0  (was: 4.0.0, 3.2.0)

> LLAP: shuffle port config uses internal configuration
> -
>
> Key: HIVE-21212
> URL: https://issues.apache.org/jira/browse/HIVE-21212
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21212.1.patch
>
>
> LlapDaemon main() reads the daemon configuration, but for the shuffle port it 
> reads an internal config key instead of hive.llap.daemon.yarn.shuffle.port
> [https://github.com/apache/hive/blob/c8eb03affa2533f4827cf6497e7c9873bc9520a7/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java#L535]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760363#comment-16760363
 ] 

Hive QA commented on HIVE-20699:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
43s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 18 new + 628 unchanged - 3 
fixed = 646 total (was 631) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} itests/hive-unit: The patch generated 20 new + 180 
unchanged - 4 fixed = 200 total (was 184) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 29 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
45s{color} | {color:red} ql generated 2 new + 2305 unchanged - 0 fixed = 2307 
total (was 2305) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
53s{color} | {color:red} ql generated 3 new + 97 unchanged - 3 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  
org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.buildCrudMajorCompactionQuery(HiveConf,
 Table, Partition, String) concatenates strings using + in a loop  At 
CompactorMR.java:strings using + in a loop  At CompactorMR.java:[line 534] |
|  |  
org.apache.hadoop.hive.ql.udf.generic.GenericUDFValidateAcidSortOrder$WriteIdRowId
 defines compareTo(GenericUDFValidateAcidSortOrder$WriteIdRowId) and uses 
Object.equals()  At GenericUDFValidateAcidSortOrder.java:Object.equals()  At 
GenericUDFValidateAcidSortOrder.java:[lines 88-97] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15932/dev-support/hive-personality.sh
 |
| git revision | master / 4a4b9ca |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15932/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15932/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 

[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760342#comment-16760342
 ] 

Hive QA commented on HIVE-21001:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957555/HIVE-21001.21.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15724 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_ts]
 (batchId=195)
org.apache.hive.jdbc.TestSSL.testMetastoreWithSSL (batchId=260)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15931/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15931/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15931/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957555 - PreCommit-HIVE-Build

> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch, HIVE-21001.18.patch, 
> HIVE-21001.18.patch, HIVE-21001.19.patch, HIVE-21001.20.patch, 
> HIVE-21001.21.patch
>
>
> XLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
  Labels: features newbie security  (was: )
Release Note: 
Adds the following configuration properties for plain LDAP authentication to 
use a specified bind user to connect to the server:

hive.server2.authentication.ldap.binddn - Fully qualified distinguished name (DN) 
of the bind user to use.
hive.server2.authentication.ldap.bindpw - The password for the bind user 
specified in the parameter above. This may be set directly in the configuration, 
or stored in a jceks credential file.

Target Version/s: 4.0.0
  Status: Patch Available  (was: In Progress)

This change adds the ability for users to specify a single bind user, which is 
used to connect to LDAP and look up the full user name before authenticating 
the actual user.

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 2.3.1, 2.3.0, 2.2.0, 2.1.1, 2.1.0
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
>  Labels: newbie, security, features
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760340#comment-16760340
 ] 

Prasanth Jayachandran commented on HIVE-21009:
--

Thanks for the updated patch, [~mcginnda]. +1 still. Will get it committed 
after the pre-commit test results.

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760338#comment-16760338
 ] 

Hive QA commented on HIVE-21001:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
31s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} accumulo-handler in master has 21 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} ql: The patch generated 0 new + 16 unchanged - 2 
fixed = 16 total (was 18) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch accumulo-handler passed checkstyle {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch hbase-handler passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch . passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15931/dev-support/hive-personality.sh
 |
| git revision | master / 4a4b9ca |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql accumulo-handler hbase-handler . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15931/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: 

[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
Attachment: HIVE-21009.02.patch

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
> Attachments: HIVE-21009.01.patch, HIVE-21009.02.patch, 
> HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760329#comment-16760329
 ] 

David McGinnis commented on HIVE-21009:
---

[~prasanth_j]: Thanks for the catch! Not surprised this already exists, 
should've looked before. I've got a change made and am running tests currently. 
Will upload patch once the tests finish successfully.

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
> Attachments: HIVE-21009.01.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760321#comment-16760321
 ] 

Prasanth Jayachandran commented on HIVE-21009:
--

nit: Hadoop Configuration's getPassword() seems to iterate over the credential 
providers and fall back to the config, which seems similar to what you are 
doing, isn't it? 
[https://hadoop.apache.org/docs/r2.6.4/api/org/apache/hadoop/conf/Configuration.html#getPassword(java.lang.String)]
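
For reference, a minimal sketch of the {{Configuration#getPassword}} pattern 
being suggested (illustration only; the property name matches the bindpw key 
described in this issue):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

public class BindPasswordSketch {

  // getPassword() consults any configured credential providers (e.g. a jceks
  // keystore) first and, by default, falls back to the plain config value.
  static String bindPassword(Configuration conf) throws IOException {
    char[] pw = conf.getPassword("hive.server2.authentication.ldap.bindpw");
    return pw == null ? null : new String(pw);
  }
}
{code}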

 

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
> Attachments: HIVE-21009.01.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760314#comment-16760314
 ] 

Prasanth Jayachandran commented on HIVE-21009:
--

lgtm, +1

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
> Attachments: HIVE-21009.01.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread David McGinnis (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David McGinnis updated HIVE-21009:
--
Attachment: HIVE-21009.01.patch

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
> Attachments: HIVE-21009.01.patch, HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20699:

Attachment: HIVE-20699.11.patch

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.11.patch, HIVE-20699.11.patch, 
> HIVE-20699.2.patch, HIVE-20699.3.patch, HIVE-20699.4.patch, 
> HIVE-20699.5.patch, HIVE-20699.6.patch, HIVE-20699.7.patch, 
> HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest would be to just use Insert 
> Overwrite but that will change all ROW__IDs which we don't want.
> Need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760303#comment-16760303
 ] 

Vaibhav Gumashta edited comment on HIVE-20699 at 2/4/19 11:20 PM:
--

Patch compiles fine for me. Trying again


was (Author: vgumashta):
Patch compiles fine. Trying again

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.11.patch, HIVE-20699.11.patch, 
> HIVE-20699.2.patch, HIVE-20699.3.patch, HIVE-20699.4.patch, 
> HIVE-20699.5.patch, HIVE-20699.6.patch, HIVE-20699.7.patch, 
> HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest would be to just use Insert 
> Overwrite but that will change all ROW__IDs which we don't want.
> Need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760303#comment-16760303
 ] 

Vaibhav Gumashta commented on HIVE-20699:
-

Patch compiles fine. Trying again

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.11.patch, HIVE-20699.2.patch, 
> HIVE-20699.3.patch, HIVE-20699.4.patch, HIVE-20699.5.patch, 
> HIVE-20699.6.patch, HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest would be to just use Insert 
> Overwrite but that will change all ROW__IDs which we don't want.
> Need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21009) LDAP - Specify binddn for ldap-search

2019-02-04 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760300#comment-16760300
 ] 

Prasanth Jayachandran commented on HIVE-21009:
--

To get the password, using conf.getPassword() is more secure, as it reads through 
Hadoop's credential provider (which could be a JCEKS file). 

> LDAP - Specify binddn for ldap-search
> -
>
> Key: HIVE-21009
> URL: https://issues.apache.org/jira/browse/HIVE-21009
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.1.0, 2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2
>Reporter: Thomas Uhren
>Assignee: David McGinnis
>Priority: Major
> Attachments: HIVE-21009.patch
>
>
> When user accounts cannot do an LDAP search, there is currently no way of 
> specifying a custom binddn to use for the ldap-search.
> So I'm missing something like this:
> {code}
> hive.server2.authentication.ldap.bindn=cn=ldapuser,ou=user,dc=example
> hive.server2.authentication.ldap.bindnpw=password
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760289#comment-16760289
 ] 

Vaibhav Gumashta commented on HIVE-20699:
-

[~ekoifman] Thanks, fixed the memory settings for the test case, so I don't see 
the memory estimation returning negative values now. Also removed redundant imports.

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.11.patch, HIVE-20699.2.patch, 
> HIVE-20699.3.patch, HIVE-20699.4.patch, HIVE-20699.5.patch, 
> HIVE-20699.6.patch, HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest would be to just use Insert 
> Overwrite but that will change all ROW__IDs which we don't want.
> Need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760292#comment-16760292
 ] 

Hive QA commented on HIVE-20699:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957554/HIVE-20699.11.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15930/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15930/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15930/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-04 23:01:12.956
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15930/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 23:01:12.959
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 23:01:14.009
+ rm -rf ../yetus_PreCommit-HIVE-Build-15930
+ mkdir ../yetus_PreCommit-HIVE-Build-15930
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15930
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15930/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not 
exist in index
error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestAcidOnTez.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java: does 
not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HiveSplitGenerator.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/SplitGrouper.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRawRecordMerger.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRecordUpdater.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSplit.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Initiator.java: 
does not exist in index
error: a/ql/src/test/results/clientpositive/show_functions.q.out: does not 
exist in index
Going to apply patch with: git apply -p1
/data/hiveptest/working/scratch/build.patch:12: trailing whitespace.
SPLIT_GROUPING_MODE("hive.split.grouping.mode", "query", new 
StringSet("query", "compactor"), 
/data/hiveptest/working/scratch/build.patch:47: trailing whitespace.
 
/data/hiveptest/working/scratch/build.patch:98: trailing whitespace.
conf.setBoolean("hive.merge.tezfiles", false); 
/data/hiveptest/working/scratch/build.patch:223: trailing whitespace.
conf.setBoolean("hive.merge.tezfiles", false); 
/data/hiveptest/working/scratch/build.patch:559: trailing whitespace.

warning: squelched 24 whitespace errors
warning: 29 lines add 

[jira] [Updated] (HIVE-21001) Upgrade to calcite-1.18

2019-02-04 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21001:

Attachment: HIVE-21001.21.patch

> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch, HIVE-21001.18.patch, 
> HIVE-21001.18.patch, HIVE-21001.19.patch, HIVE-21001.20.patch, 
> HIVE-21001.21.patch
>
>
> XLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20699:

Attachment: HIVE-20699.11.patch

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.11.patch, HIVE-20699.2.patch, 
> HIVE-20699.3.patch, HIVE-20699.4.patch, HIVE-20699.5.patch, 
> HIVE-20699.6.patch, HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest would be to just use Insert 
> Overwrite but that will change all ROW__IDs which we don't want.
> Need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760288#comment-16760288
 ] 

Hive QA commented on HIVE-21211:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957547/HIVE-21211.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15929/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15929/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15929/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-04 22:58:08.588
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15929/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 22:58:08.591
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 22:58:09.952
+ rm -rf ../yetus_PreCommit-HIVE-Build-15929
+ mkdir ../yetus_PreCommit-HIVE-Build-15929
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15929
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15929/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc7352817020909347337.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc7352817020909347337.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
[ERROR] Failed to execute goal on project hive-shims-0.23: Could not resolve 
dependencies for project 
org.apache.hive.shims:hive-shims-0.23:jar:4.0.0-SNAPSHOT: The following 
artifacts could not be resolved: 
org.eclipse.jetty:jetty-server:jar:9.4.14.v20181114, 
org.eclipse.jetty:jetty-http:jar:9.4.14.v20181114, 
org.eclipse.jetty:jetty-io:jar:9.4.14.v20181114, 
org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.1.0, 
org.apache.hadoop:hadoop-yarn-server-common:jar:3.1.0, 
org.apache.hadoop:hadoop-yarn-registry:jar:3.1.0, dnsjava:dnsjava:jar:2.1.7, 
org.apache.geronimo.specs:geronimo-jcache_1.0_spec:jar:1.0-alpha-1, 
org.ehcache:ehcache:jar:3.3.1, com.zaxxer:HikariCP-java7:jar:2.4.12, 
com.microsoft.sqlserver:mssql-jdbc:jar:6.2.1.jre7, 
org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.1.0, 
de.ruedigermoeller:fst:jar:2.50, com.cedarsoftware:java-util:jar:1.9.0, 
org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.1.0, 
org.apache.hadoop:hadoop-yarn-server-tests:jar:tests:3.1.0, 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.1.0, 

[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760276#comment-16760276
 ] 

Hive QA commented on HIVE-20699:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957535/HIVE-20699.10.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15729 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15927/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15927/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15927/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957535 - PreCommit-HIVE-Build

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.2.patch, HIVE-20699.3.patch, 
> HIVE-20699.4.patch, HIVE-20699.5.patch, HIVE-20699.6.patch, 
> HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest would be to just use Insert 
> Overwrite but that will change all ROW__IDs which we don't want.
> Need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760283#comment-16760283
 ] 

Hive QA commented on HIVE-21001:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957536/HIVE-21001.20.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15928/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15928/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15928/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-04 22:56:14.081
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15928/source-prep.txt
+ [[ true == \t\r\u\e ]]
+ rm -rf ivy maven
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 22:56:14.809
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 22:56:15.472
+ rm -rf ../yetus_PreCommit-HIVE-Build-15928
+ mkdir ../yetus_PreCommit-HIVE-Build-15928
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15928
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15928/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
/data/hiveptest/working/scratch/build.patch:627: trailing whitespace.
Map 1 
/data/hiveptest/working/scratch/build.patch:648: trailing whitespace.
Reducer 2 
/data/hiveptest/working/scratch/build.patch:707: trailing whitespace.
Map 1 
/data/hiveptest/working/scratch/build.patch:728: trailing whitespace.
Reducer 2 
/data/hiveptest/working/scratch/build.patch:2000: trailing whitespace.
  null sort order: 
warning: squelched 79 whitespace errors
warning: 84 lines add whitespace errors.
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc2340378592414924822.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc2340378592414924822.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
[ERROR] Failed to execute goal on project hive-shims-common: Could not resolve 
dependencies for project 
org.apache.hive.shims:hive-shims-common:jar:4.0.0-SNAPSHOT: The following 
artifacts could not be resolved: 
org.codehaus.jackson:jackson-core-asl:jar:1.9.13, 
org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13: Could not find artifact 
org.codehaus.jackson:jackson-core-asl:jar:1.9.13 in datanucleus 
(http://www.datanucleus.org/downloads/maven2) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the 

[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760265#comment-16760265
 ] 

Hive QA commented on HIVE-20699:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
31s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 25 new + 628 unchanged - 3 
fixed = 653 total (was 631) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} itests/hive-unit: The patch generated 20 new + 180 
unchanged - 4 fixed = 200 total (was 184) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 27 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
50s{color} | {color:red} ql generated 2 new + 2305 unchanged - 0 fixed = 2307 
total (was 2305) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
54s{color} | {color:red} ql generated 3 new + 97 unchanged - 3 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  
org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.buildCrudMajorCompactionQuery(HiveConf,
 Table, Partition, String) concatenates strings using + in a loop  At 
CompactorMR.java:strings using + in a loop  At CompactorMR.java:[line 534] |
|  |  
org.apache.hadoop.hive.ql.udf.generic.GenericUDFValidateAcidSortOrder$WriteIdRowId
 defines compareTo(GenericUDFValidateAcidSortOrder$WriteIdRowId) and uses 
Object.equals()  At GenericUDFValidateAcidSortOrder.java:Object.equals()  At 
GenericUDFValidateAcidSortOrder.java:[lines 88-97] |
\\
\\
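(For context on the two FindBugs items above, a minimal sketch of the conventional fixes; the class and method names are stand-ins, not the actual classes touched by the patch: accumulate loop-built strings with a StringBuilder, and give a Comparable type an equals()/hashCode() pair consistent with its compareTo().)

{code:java}
import java.util.List;
import java.util.Objects;

// Stand-in type; not the actual GenericUDFValidateAcidSortOrder.WriteIdRowId class.
final class SortKey implements Comparable<SortKey> {
  final long writeId;
  final long rowId;

  SortKey(long writeId, long rowId) {
    this.writeId = writeId;
    this.rowId = rowId;
  }

  @Override
  public int compareTo(SortKey other) {
    int c = Long.compare(writeId, other.writeId);
    return c != 0 ? c : Long.compare(rowId, other.rowId);
  }

  // FindBugs flags compareTo() without a matching equals(); keep them consistent.
  @Override
  public boolean equals(Object o) {
    return o instanceof SortKey && compareTo((SortKey) o) == 0;
  }

  @Override
  public int hashCode() {
    return Objects.hash(writeId, rowId);
  }

  // Instead of '+' concatenation inside a loop, accumulate with a StringBuilder.
  static String commaSeparated(List<String> columns) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < columns.size(); i++) {
      if (i > 0) {
        sb.append(", ");
      }
      sb.append(columns.get(i));
    }
    return sb.toString();
  }
}
{code}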
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15927/dev-support/hive-personality.sh
 |
| git revision | master / 4a4b9ca |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15927/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15927/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 

[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760259#comment-16760259
 ] 

Eugene Koifman commented on HIVE-20699:
---

There are a few unused imports in SplitGrouper.
HiveSplitGenerator also has unused imports, and
{noformat}
 if (HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_IN_TEZ_TEST)) {
taskResource = Math.max(taskResource, 1);
  }
{noformat}
What does this do?


> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.2.patch, HIVE-20699.3.patch, 
> HIVE-20699.4.patch, HIVE-20699.5.patch, HIVE-20699.6.patch, 
> HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes data back to the same partition.  This will merge the deltas and 
> 'apply' the delete events.  The simplest would be to just use Insert 
> Overwrite but that will change all ROW__IDs which we don't want.
> Need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated HIVE-21211:
---
Attachment: HIVE-21211.2.patch
Status: Patch Available  (was: Open)

> Upgrade jetty version to 9.4.x
> --
>
> Key: HIVE-21211
> URL: https://issues.apache.org/jira/browse/HIVE-21211
> Project: Hive
>  Issue Type: Task
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-21211.1.patch, HIVE-21211.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated HIVE-21211:
---
Status: Open  (was: Patch Available)

> Upgrade jetty version to 9.4.x
> --
>
> Key: HIVE-21211
> URL: https://issues.apache.org/jira/browse/HIVE-21211
> Project: Hive
>  Issue Type: Task
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-21211.1.patch, HIVE-21211.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760248#comment-16760248
 ] 

Hive QA commented on HIVE-21211:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957528/HIVE-21211.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15926/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15926/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15926/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-04 21:35:34.626
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15926/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 21:35:34.629
+ cd apache-github-source-source
+ git fetch origin
>From https://github.com/apache/hive
   c8eb03a..4a4b9ca  master -> origin/master
+ git reset --hard HEAD
HEAD is now at c8eb03a HIVE-21143: Add rewrite rules to open/close Between 
operators (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 4a4b9ca HIVE-21159 Modify Merge statement logic to perform 
Update split early (Eugene Koifman, reviewed by Vaibhav Gumashta)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 21:35:35.515
+ rm -rf ../yetus_PreCommit-HIVE-Build-15926
+ mkdir ../yetus_PreCommit-HIVE-Build-15926
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15926
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15926/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc4492020511348169387.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc4492020511348169387.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc8986512365280706742.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 41 classes.
ANTLR Parser Generator  Version 3.5.2
Output file 

[jira] [Commented] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760245#comment-16760245
 ] 

Hive QA commented on HIVE-21210:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957520/HIVE-21210.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 15726 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input42] (batchId=81)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_1] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_3] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_1] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_3] 
(batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonmr_fetch] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_int_type_promotion] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[partition_wise_fileformat12]
 (batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[partition_wise_fileformat16]
 (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup3] (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_vc] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppr_pushdown3] 
(batchId=30)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[rand_partitionpruner2] 
(batchId=59)
org.apache.hadoop.hive.cli.TestLocalSparkCliDriver.testCliDriver[spark_local_queries]
 (batchId=277)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15925/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15925/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15925/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957520 - PreCommit-HIVE-Build

> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places and each implementation is 
> a little different and requires different configurations. I think that Hive 
> needs to rein in and standardize the way that threadpools are used and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads running in the HS2 as the 
> number of simultaneous connections increases and they surely cause contention 
> with one another.
> Here is an example:
> {code:java|title=CombineHiveInputFormat.java}
>   // max number of threads we can use to check non-combinable paths
>   private static final int MAX_CHECK_NONCOMBINABLE_THREAD_NUM = 50;
>   private static final int DEFAULT_NUM_PATH_PER_THREAD = 100;
> {code}
> When building the splits for a MR job, there are up to 50 threads running per 
> query and there is not much scaling here, it's simply 1 thread : 100 files 
> ratio.  This implies that to process 5000 files, there are 50 threads, after 
> that, 50 threads are still used. Many Hive jobs these days involve more than 
> 5000 files so it's not scaling well on bigger sizes.
> This is not configurable (even manually), it doesn't change when the hardware 
> specs increase, and 50 threads seems like a lot when a service must support 
> up to 80 connections:
> [https://www.cloudera.com/documentation/enterprise/5/latest/topics/admin_hive_tuning.html]
> Not to mention, I have never seen a scenario where HS2 is running on a host 
> all by itself and has the entire system dedicated to it. Therefore it should 
> be more friendly and spin up fewer threads.
> I am attaching a patch here that provides a few features:
>  * Common module that produces {{ExecutorService}} which caps the number of 
> threads it spins up at the number of processors a host has. Keep in 

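A rough sketch of the sizing idea described in this ticket (capping a pool at the host's processor count); the class and method names here are illustrative, not the common module the patch actually adds:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class BoundedPools {
  private BoundedPools() {
  }

  /** A fixed pool whose size never exceeds the number of available processors. */
  public static ExecutorService newCappedPool(int requestedThreads) {
    int processors = Runtime.getRuntime().availableProcessors();
    int threads = Math.max(1, Math.min(requestedThreads, processors));
    return Executors.newFixedThreadPool(threads);
  }
}
{code}

Under the current 1-thread-per-100-files rule, 5000 files already hit the 50-thread maximum; with a cap like the above, the pool would additionally never grow past the host's core count.
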
[jira] [Commented] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760214#comment-16760214
 ] 

Hive QA commented on HIVE-21210:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} common: The patch generated 6 new + 0 unchanged - 0 
fixed = 6 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
34s{color} | {color:red} ql: The patch generated 4 new + 10 unchanged - 46 
fixed = 14 total (was 56) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15925/dev-support/hive-personality.sh
 |
| git revision | master / 4a4b9ca |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15925/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15925/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15925/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places and each implementation is 
> a little different and requires different configurations. I think that Hive 
> needs to rein in and standardize the way that threadpools are used and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads 

[jira] [Commented] (HIVE-21172) DEFAULT keyword handling in MERGE UPDATE clause issues

2019-02-04 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760191#comment-16760191
 ] 

Eugene Koifman commented on HIVE-21172:
---

[~vgarg], HIVE-21159 is in. thank you

> DEFAULT keyword handling in MERGE UPDATE clause issues
> --
>
> Key: HIVE-21172
> URL: https://issues.apache.org/jira/browse/HIVE-21172
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL, Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Priority: Major
>
> once HIVE-21159 lands, enable {{HiveConf.MERGE_SPLIT_UPDATE}} and run these 
> tests.
> TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge_stats]
>  mvn test -Dtest=TestMiniLlapLocalCliDriver 
> -Dqfile=insert_into_default_keyword.q
> Merge is rewritten as a multi-insert. When the Update clause has DEFAULT, it's 
> not properly replaced with a value in the multi-insert - it's treated as a 
> literal
> {noformat}
> INSERT INTO `default`.`acidTable`-- update clause(insert part)
>  SELECT `t`.`key`, `DEFAULT`, `t`.`value`
>WHERE `t`.`key` = `s`.`key` AND `s`.`key` > 3 AND NOT(`s`.`key` < 3)
> {noformat}
> See {{LOG.info("Going to reparse <" + originalQuery + "> as \n<" + 
> rewrittenQueryStr.toString() + ">");}} in hive.log
> {{MergeSemanticAnalyzer.replaceDefaultKeywordForMerge()}} is only called in 
> {{handleInsert}} but not {{handleUpdate()}}. Why does the issue only show up with 
> {{MERGE_SPLIT_UPDATE}}?
> Once this is fixed, HiveConf.MERGE_SPLIT_UPDATE should be true by default



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21159) Modify Merge statement logic to perform Update split early

2019-02-04 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-21159:
--
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

committed to master
thanks Vaibhav for the review

> Modify Merge statement logic to perform Update split early
> --
>
> Key: HIVE-21159
> URL: https://issues.apache.org/jira/browse/HIVE-21159
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21159.01.patch, HIVE-21159.02.patch, 
> HIVE-21159.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16815) Clean up javadoc from error for the rest of modules

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760183#comment-16760183
 ] 

Hive QA commented on HIVE-16815:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12873267/HIVE-16815.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15924/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15924/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15924/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-04 20:21:49.842
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15924/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 20:21:49.845
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at c8eb03a HIVE-21143: Add rewrite rules to open/close Between 
operators (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at c8eb03a HIVE-21143: Add rewrite rules to open/close Between 
operators (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-04 20:21:50.540
+ rm -rf ../yetus_PreCommit-HIVE-Build-15924
+ mkdir ../yetus_PreCommit-HIVE-Build-15924
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15924
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15924/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: beeline/src/java/org/apache/hive/beeline/HiveSchemaTool.java: does not 
exist in index
error: patch failed: 
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableOutputFormat.java:46
Falling back to three-way merge...
Applied patch to 
'hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableOutputFormat.java'
 with conflicts.
error: patch failed: 
hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/DelimitedInputWriter.java:45
Falling back to three-way merge...
Applied patch to 
'hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/DelimitedInputWriter.java'
 cleanly.
error: patch failed: 
hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/worker/MutatorCoordinator.java:46
Falling back to three-way merge...
Applied patch to 
'hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/worker/MutatorCoordinator.java'
 cleanly.
error: 
src/java/org/apache/hadoop/hive/accumulo/predicate/AccumuloPredicateHandler.java:
 does not exist in index
error: 
src/java/org/apache/hadoop/hive/accumulo/serde/AccumuloCompositeRowId.java: 
does not exist in index
error: src/java/org/apache/hive/beeline/HiveSchemaTool.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/contrib/mr/GenericMR.java: does not 
exist in index
error: src/java/org/apache/hadoop/hive/contrib/serde2/RegexSerDe.java: does not 
exist in index
error: 
src/java/org/apache/hadoop/hive/contrib/udaf/example/UDAFExampleAvg.java: does 
not exist in index
error: src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java: does not 
exist in index
error: src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java: does not exist in 
index
error: src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableOutputFormat.java: 
does not exist in index
error: src/java/org/apache/hadoop/hive/hbase/struct/HBaseStructValue.java: does 
not exist in index

[jira] [Commented] (HIVE-21063) Support statistics in cachedStore for transactional table

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760182#comment-16760182
 ] 

Hive QA commented on HIVE-21063:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957508/HIVE-21063.04.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 15731 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15922/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15922/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15922/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957508 - PreCommit-HIVE-Build

> Support statistics in cachedStore for transactional table
> -
>
> Key: HIVE-21063
> URL: https://issues.apache.org/jira/browse/HIVE-21063
> Project: Hive
>  Issue Type: Task
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21063.01.patch, HIVE-21063.02.patch, 
> HIVE-21063.03.patch, HIVE-21063.04.patch
>
>
> Currently statistics for transactional tables are not stored in the cached store 
> because of consistency issues. We need to add validation for valid write IDs and 
> generation of aggregate stats based on valid partitions. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21001) Upgrade to calcite-1.18

2019-02-04 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21001:

Attachment: HIVE-21001.20.patch

> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch, HIVE-21001.18.patch, 
> HIVE-21001.18.patch, HIVE-21001.19.patch, HIVE-21001.20.patch
>
>
> XLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21001) Upgrade to calcite-1.18

2019-02-04 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21001:

Description: XLEAR LIBRARY CACHE   (was: CLEAR LIBRARY CACHE )

> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch, HIVE-21001.18.patch, 
> HIVE-21001.18.patch, HIVE-21001.19.patch, HIVE-21001.20.patch
>
>
> XLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20699:

Attachment: (was: HIVE-20699.10.patch)

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.2.patch, HIVE-20699.3.patch, 
> HIVE-20699.4.patch, HIVE-20699.5.patch, HIVE-20699.6.patch, 
> HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes the data back to the same partition.  This would merge the deltas and 
> 'apply' the delete events.  The simplest approach would be to just use Insert 
> Overwrite, but that would change all ROW__IDs, which we don't want.
> We need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20699:

Attachment: HIVE-20699.10.patch

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.2.patch, HIVE-20699.3.patch, 
> HIVE-20699.4.patch, HIVE-20699.5.patch, HIVE-20699.6.patch, 
> HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes the data back to the same partition.  This would merge the deltas and 
> 'apply' the delete events.  The simplest approach would be to just use Insert 
> Overwrite, but that would change all ROW__IDs, which we don't want.
> We need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760159#comment-16760159
 ] 

Vaibhav Gumashta commented on HIVE-20699:
-

Test failure is unrelated, but uploading v10 again to get another run.

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.2.patch, HIVE-20699.3.patch, 
> HIVE-20699.4.patch, HIVE-20699.5.patch, HIVE-20699.6.patch, 
> HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes the data back to the same partition.  This would merge the deltas and 
> 'apply' the delete events.  The simplest approach would be to just use Insert 
> Overwrite, but that would change all ROW__IDs, which we don't want.
> We need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20699) Query based compactor for full CRUD Acid tables

2019-02-04 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20699:

Attachment: (was: HIVE-20699.10.patch)

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20699.1.patch, HIVE-20699.1.patch, 
> HIVE-20699.10.patch, HIVE-20699.2.patch, HIVE-20699.3.patch, 
> HIVE-20699.4.patch, HIVE-20699.5.patch, HIVE-20699.6.patch, 
> HIVE-20699.7.patch, HIVE-20699.8.patch, HIVE-20699.9.patch
>
>
> Currently the Acid compactor is implemented as a generated MR job 
> ({{CompactorMR.java}}).
> It could also be expressed as a Hive query that reads from a given partition 
> and writes the data back to the same partition.  This would merge the deltas and 
> 'apply' the delete events.  The simplest approach would be to just use Insert 
> Overwrite, but that would change all ROW__IDs, which we don't want.
> We need to implement this in a way that preserves ROW__IDs and creates a new 
> {{base_x}} directory to handle Major compaction.
> Minor compaction will be investigated separately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21063) Support statistics in cachedStore for transactional table

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760158#comment-16760158
 ] 

Hive QA commented on HIVE-21063:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
6s{color} | {color:blue} standalone-metastore/metastore-server in master has 
184 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
38s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} The patch metastore-server passed checkstyle {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} ql: The patch generated 0 new + 15 unchanged - 1 
fixed = 15 total (was 16) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch server-extensions passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 15 
unchanged - 5 fixed = 15 total (was 20) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} standalone-metastore/metastore-server generated 0 
new + 183 unchanged - 1 fixed = 183 total (was 184) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} server-extensions in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} standalone-metastore_metastore-server generated 0 
new + 48 unchanged - 1 fixed = 48 total (was 49) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} server-extensions in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} hive-unit in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} 

[jira] [Commented] (HIVE-21159) Modify Merge statement logic to perform Update split early

2019-02-04 Thread Vaibhav Gumashta (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760155#comment-16760155
 ] 

Vaibhav Gumashta commented on HIVE-21159:
-

+1

> Modify Merge statement logic to perform Update split early
> --
>
> Key: HIVE-21159
> URL: https://issues.apache.org/jira/browse/HIVE-21159
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-21159.01.patch, HIVE-21159.02.patch, 
> HIVE-21159.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21212) LLAP: shuffle port config uses internal configuration

2019-02-04 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-21212:



> LLAP: shuffle port config uses internal configuration
> -
>
> Key: HIVE-21212
> URL: https://issues.apache.org/jira/browse/HIVE-21212
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>
> LlapDaemon main() reads the daemon configuration, but for the shuffle port it 
> reads the internal config instead of hive.llap.daemon.yarn.shuffle.port
> [https://github.com/apache/hive/blob/c8eb03affa2533f4827cf6497e7c9873bc9520a7/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java#L535]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21207) Use 0.12.0 libthrift version in Hive

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760116#comment-16760116
 ] 

Hive QA commented on HIVE-21207:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957499/HIVE-21207.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15724 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestSSL.testMetastoreWithSSL (batchId=260)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15920/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15920/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15920/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957499 - PreCommit-HIVE-Build

> Use 0.12.0 libthrift version in Hive
> 
>
> Key: HIVE-21207
> URL: https://issues.apache.org/jira/browse/HIVE-21207
> Project: Hive
>  Issue Type: Improvement
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
>Priority: Major
> Attachments: HIVE-21207.1.patch
>
>
> Use 0.12.0 libthrift version in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21207) Use 0.12.0 libthrift version in Hive

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760105#comment-16760105
 ] 

Hive QA commented on HIVE-21207:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15920/dev-support/hive-personality.sh
 |
| git revision | master / 02a688d |
| Default Java | 1.8.0_111 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15920/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Use 0.12.0 libthrift version in Hive
> 
>
> Key: HIVE-21207
> URL: https://issues.apache.org/jira/browse/HIVE-21207
> Project: Hive
>  Issue Type: Improvement
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
>Priority: Major
> Attachments: HIVE-21207.1.patch
>
>
> Use 0.12.0 libthrift version in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-02-04 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21143:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

pushed to master. Thank you [~jcamachorodriguez] for reviewing the changes!

> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch, 
> HIVE-21143.10.patch, HIVE-21143.11.patch, HIVE-21143.12.patch, 
> HIVE-21143.12.patch, HIVE-21143.12.patch, HIVE-21143.13.patch, 
> HIVE-21143.13.patch
>
>
> During query compilation it's better to have BETWEEN statements in open form, 
> as Calcite currently does not consider them during simplification.
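
For illustration only, here is a minimal sketch of what "opening" and "closing" a BETWEEN predicate means. This is an assumed example over plain SQL text, not the actual Calcite rules added by the patch, which operate on RexNode expression trees.

{code:java}
// Hypothetical sketch: open/close rewrite expressed over strings rather than
// Calcite RexNode trees, just to show the shape of the transformation.
public class BetweenRewriteSketch {

  /** Open form: col BETWEEN lo AND hi  ->  (col >= lo AND col <= hi) */
  static String open(String col, String lo, String hi) {
    return "(" + col + " >= " + lo + " AND " + col + " <= " + hi + ")";
  }

  /** Close form: col >= lo AND col <= hi  ->  col BETWEEN lo AND hi */
  static String close(String col, String lo, String hi) {
    return col + " BETWEEN " + lo + " AND " + hi;
  }

  public static void main(String[] args) {
    System.out.println(open("col1", "10", "20"));   // (col1 >= 10 AND col1 <= 20)
    System.out.println(close("col1", "10", "20"));  // col1 BETWEEN 10 AND 20
  }
}
{code}

Keeping predicates in the open form during compilation lets generic AND/OR simplification see and merge the individual comparisons; the close rule can restore BETWEEN afterwards.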



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M reassigned HIVE-21211:
--

Assignee: Jaume M

> Upgrade jetty version to 9.4.x
> --
>
> Key: HIVE-21211
> URL: https://issues.apache.org/jira/browse/HIVE-21211
> Project: Hive
>  Issue Type: Task
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-21211.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated HIVE-21211:
---
Attachment: HIVE-21211.1.patch
Status: Patch Available  (was: Open)

> Upgrade jetty version to 9.4.x
> --
>
> Key: HIVE-21211
> URL: https://issues.apache.org/jira/browse/HIVE-21211
> Project: Hive
>  Issue Type: Task
>Reporter: Jaume M
>Priority: Major
> Attachments: HIVE-21211.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21211) Upgrade jetty version to 9.4.x

2019-02-04 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated HIVE-21211:
---
Summary: Upgrade jetty version to 9.4.x  (was: Upgrade jetty version to 9.4)

> Upgrade jetty version to 9.4.x
> --
>
> Key: HIVE-21211
> URL: https://issues.apache.org/jira/browse/HIVE-21211
> Project: Hive
>  Issue Type: Task
>Reporter: Jaume M
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21199) Replace all occurences of new Byte with Byte.valueOf

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760069#comment-16760069
 ] 

Hive QA commented on HIVE-21199:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957497/HIVE-21199.03.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15722 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.mapreduce.TestHCatMutableNonPartitioned.testHCatNonPartitionedTable[2]
 (batchId=214)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15919/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15919/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15919/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957497 - PreCommit-HIVE-Build

> Replace all occurences of new Byte with Byte.valueOf
> 
>
> Key: HIVE-21199
> URL: https://issues.apache.org/jira/browse/HIVE-21199
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Trivial
> Attachments: HIVE-21199.01.patch, HIVE-21199.02.patch, 
> HIVE-21199.03.patch
>
>
> Creating Byte objects with new Byte(...) always allocates a new object, while 
> Byte.valueOf(...) can return a cached instance (and actually does in most if not 
> all JVMs), thus reducing GC overhead.
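
For illustration, a small self-contained snippet (not part of the patch) showing the difference; Byte.valueOf is documented to return cached instances for all byte values, while new Byte always allocates and has been deprecated since Java 9.

{code:java}
public class ByteCacheDemo {
  public static void main(String[] args) {
    Byte cachedA = Byte.valueOf((byte) 42);
    Byte cachedB = Byte.valueOf((byte) 42);
    System.out.println(cachedA == cachedB);     // true: same cached instance

    Byte freshA = new Byte((byte) 42);          // deprecated; always allocates
    Byte freshB = new Byte((byte) 42);
    System.out.println(freshA == freshB);       // false: two distinct objects
    System.out.println(freshA.equals(freshB));  // true: equal values
  }
}
{code}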



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21199) Replace all occurences of new Byte with Byte.valueOf

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760039#comment-16760039
 ] 

Hive QA commented on HIVE-21199:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} serde in master has 198 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
26s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} beeline in master has 53 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} hcatalog/core in master has 30 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} serde: The patch generated 0 new + 37 unchanged - 3 
fixed = 37 total (was 40) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} ql: The patch generated 0 new + 141 unchanged - 13 
fixed = 141 total (was 154) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch beeline passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch core passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} serde generated 0 new + 197 unchanged - 1 fixed = 
197 total (was 198) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} ql generated 0 new + 2299 unchanged - 5 fixed = 2299 
total (was 2304) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} beeline generated 0 new + 52 unchanged - 1 fixed = 
52 total (was 53) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15919/dev-support/hive-personality.sh
 |
| git revision | master / ebd4152 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: serde ql 

[jira] [Commented] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-02-04 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760021#comment-16760021
 ] 

Jesus Camacho Rodriguez commented on HIVE-21143:


+1

> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch, 
> HIVE-21143.10.patch, HIVE-21143.11.patch, HIVE-21143.12.patch, 
> HIVE-21143.12.patch, HIVE-21143.12.patch, HIVE-21143.13.patch, 
> HIVE-21143.13.patch
>
>
> During query compilation it's better to have BETWEEN statements in open form, 
> as Calcite currently does not consider them during simplification.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21184) Add explain and explain formatted CBO plan with cost information

2019-02-04 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21184:
---
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks for reviewing [~ashutoshc]

> Add explain and explain formatted CBO plan with cost information
> 
>
> Key: HIVE-21184
> URL: https://issues.apache.org/jira/browse/HIVE-21184
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21184.01.patch, HIVE-21184.03.patch, 
> HIVE-21184.04.patch, HIVE-21184.05.patch
>
>
> The CBO plan is more readable than the full DAG. Explain formatted/extended will 
> print the plan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20295) Remove !isNumber check after failed constant interpretation

2019-02-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16760013#comment-16760013
 ] 

Hive QA commented on HIVE-20295:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957493/HIVE-20295.08.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15764 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz_2] 
(batchId=86)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15918/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15918/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15918/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957493 - PreCommit-HIVE-Build

> Remove !isNumber check after failed constant interpretation
> ---
>
> Key: HIVE-20295
> URL: https://issues.apache.org/jira/browse/HIVE-20295
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-20295.01.patch, HIVE-20295.02.patch, 
> HIVE-20295.03.patch, HIVE-20295.04.patch, HIVE-20295.05.patch, 
> HIVE-20295.06.patch, HIVE-20295.07.patch, HIVE-20295.08.patch
>
>
> During constant interpretation, if the number can't be parsed it might be 
> that the comparison is out of range for the type in question, in which case 
> the comparison could be removed.
> https://github.com/apache/hive/blob/2cabb8da150b8fb980223fbd6c2c93b842ca3ee5/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java#L1163
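
As a rough illustration of the idea (an assumed sketch, not the actual logic in {{TypeCheckProcFactory}}), a comparison against a literal that lies outside the range of the column's type can sometimes be decided from the type's range alone, so the predicate folds to a constant instead of falling through to the !isNumber branch.

{code:java}
import java.math.BigDecimal;

public class OutOfRangeFoldSketch {
  /**
   * Returns Boolean.TRUE/FALSE if "tinyintCol < literal" is decidable from the
   * TINYINT range alone, or null if the literal is in range and the comparison
   * must be kept.
   */
  static Boolean foldTinyintLessThan(BigDecimal literal) {
    BigDecimal min = BigDecimal.valueOf(Byte.MIN_VALUE);  // -128
    BigDecimal max = BigDecimal.valueOf(Byte.MAX_VALUE);  //  127
    if (literal.compareTo(max) > 0) {
      return Boolean.TRUE;   // every TINYINT value is smaller than the literal
    }
    if (literal.compareTo(min) < 0) {
      return Boolean.FALSE;  // no TINYINT value is smaller than the literal
    }
    return null;             // literal is representable: keep the comparison
  }

  public static void main(String[] args) {
    System.out.println(foldTinyintLessThan(new BigDecimal("300")));   // true
    System.out.println(foldTinyintLessThan(new BigDecimal("-300")));  // false
    System.out.println(foldTinyintLessThan(new BigDecimal("5")));     // null
  }
}
{code}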



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21210:
---
Status: Patch Available  (was: Open)

> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places; each implementation is 
> a little different and requires different configuration. I think that Hive 
> needs to rein in and standardize the way that threadpools are used, and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads running in HS2 as the 
> number of simultaneous connections increases, and they surely cause contention 
> with one another.
> Here is an example:
> {code:java|title=CombineHiveInputFormat.java}
>   // max number of threads we can use to check non-combinable paths
>   private static final int MAX_CHECK_NONCOMBINABLE_THREAD_NUM = 50;
>   private static final int DEFAULT_NUM_PATH_PER_THREAD = 100;
> {code}
> When building the splits for an MR job, there are up to 50 threads running per 
> query and there is not much scaling here; it's simply a 1 thread : 100 files 
> ratio.  This implies that to process 5000 files, there are 50 threads; after 
> that, 50 threads are still used. Many Hive jobs these days involve more than 
> 5000 files, so it does not scale well at larger sizes.
> This is not configurable (even manually), it doesn't change when the hardware 
> specs increase, and 50 threads seems like a lot when a service must support 
> up to 80 connections:
> [https://www.cloudera.com/documentation/enterprise/5/latest/topics/admin_hive_tuning.html]
> Not to mention, I have never seen a scenario where HS2 is running on a host 
> all by itself and has the entire system dedicated to it. Therefore it should 
> be more friendly and spin up fewer threads.
> I am attaching a patch here that provides a few features:
>  * Common module that produces an {{ExecutorService}} which caps the number of 
> threads it spins up at the number of processors a host has. Keep in mind that 
> a class may submit as many work units ({{Callables}}) as it would like, but 
> the number of threads in the pool is capped.
>  * Common module for partitioning work. That is, allow for a generic 
> framework for dividing work into partitions (i.e. batches)
>  * Modify {{CombineHiveInputFormat}} to take advantage of both modules, 
> performing the same duties in a more object-oriented Java way than is currently implemented
>  * Add a partitioning (batching) implementation that enforces partitioning of 
> a {{Collection}} based on the natural log of the {{Collection}} size so that 
> it scales more slowly than a simple 1:100 ratio.
>  * Simplify unit test code for {{CombineHiveInputFormat}}
> My hope is to introduce these tools to {{CombineHiveInputFormat}} and then to 
> drop it into other places.  One of the things I will introduce here is a 
> "direct thread" {{ExecutorService}} so that even if there is a configuration 
> for a thread pool to be disabled, it will still use an {{ExecutorService}} so 
> that the project can avoid logic like "if this function is services by a 
> thread pool, use a {{ExecutorService}} (and remember to close it later!) 
> otherwise, create a single thread" so that things like [HIVE-16949] can be 
> avoided in the future.  Everything will just use an {{ExecutorService}}.
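
As a rough sketch of the two ideas in the description above, a processor-capped pool and log-scaled batching; this is an assumed illustration, not the code in the attached patch.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ScalingPoolSketch {
  /** Pool whose size is bounded by the host's processor count, not a fixed 50. */
  static ExecutorService newBoundedPool() {
    int cores = Runtime.getRuntime().availableProcessors();
    return Executors.newFixedThreadPool(cores);
  }

  /** Number of batches for n items, growing with ln(n) instead of n / 100. */
  static int batchCount(int n) {
    if (n <= 1) {
      return 1;
    }
    return Math.max(1, (int) Math.ceil(Math.log(n)));
  }

  public static void main(String[] args) {
    System.out.println(batchCount(100));    // 5
    System.out.println(batchCount(5000));   // 9
    System.out.println(batchCount(50000));  // 11
    newBoundedPool().shutdown();
  }
}
{code}

The natural-log growth keeps the number of batches (and therefore concurrently submitted work units) nearly flat as the file count climbs, while the fixed pool size keeps thread creation proportional to the hardware rather than to the query.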



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HIVE-21210:
--

Assignee: BELUGA BEHR

> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places; each implementation is 
> a little different and requires different configuration. I think that Hive 
> needs to rein in and standardize the way that threadpools are used, and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads running in HS2 as the 
> number of simultaneous connections increases, and they surely cause contention 
> with one another.
> Here is an example:
> {code:java|title=CombineHiveInputFormat.java}
>   // max number of threads we can use to check non-combinable paths
>   private static final int MAX_CHECK_NONCOMBINABLE_THREAD_NUM = 50;
>   private static final int DEFAULT_NUM_PATH_PER_THREAD = 100;
> {code}
> When building the splits for an MR job, there are up to 50 threads running per 
> query and there is not much scaling here; it's simply a 1 thread : 100 files 
> ratio.  This implies that to process 5000 files, there are 50 threads; after 
> that, 50 threads are still used. Many Hive jobs these days involve more than 
> 5000 files, so it does not scale well at larger sizes.
> This is not configurable (even manually), it doesn't change when the hardware 
> specs increase, and 50 threads seems like a lot when a service must support 
> up to 80 connections:
> [https://www.cloudera.com/documentation/enterprise/5/latest/topics/admin_hive_tuning.html]
> Not to mention, I have never seen a scenario where HS2 is running on a host 
> all by itself and has the entire system dedicated to it. Therefore it should 
> be more friendly and spin up fewer threads.
> I am attaching a patch here that provides a few features:
>  * Common module that produces an {{ExecutorService}} which caps the number of 
> threads it spins up at the number of processors a host has. Keep in mind that 
> a class may submit as many work units ({{Callables}}) as it would like, but 
> the number of threads in the pool is capped.
>  * Common module for partitioning work. That is, allow for a generic 
> framework for dividing work into partitions (i.e. batches)
>  * Modify {{CombineHiveInputFormat}} to take advantage of both modules, 
> performing the same duties in a more object-oriented Java way than is currently implemented
>  * Add a partitioning (batching) implementation that enforces partitioning of 
> a {{Collection}} based on the natural log of the {{Collection}} size so that 
> it scales more slowly than a simple 1:100 ratio.
>  * Simplify unit test code for {{CombineHiveInputFormat}}
> My hope is to introduce these tools to {{CombineHiveInputFormat}} and then to 
> drop it into other places.  One of the things I will introduce here is a 
> "direct thread" {{ExecutorService}} so that even if there is a configuration 
> for a thread pool to be disabled, it will still use an {{ExecutorService}} so 
> that the project can avoid logic like "if this function is services by a 
> thread pool, use a {{ExecutorService}} (and remember to close it later!) 
> otherwise, create a single thread" so that things like [HIVE-16949] can be 
> avoided in the future.  Everything will just use an {{ExecutorService}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21210) CombineHiveInputFormat Thread Pool Sizing

2019-02-04 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21210:
---
Attachment: HIVE-21210.1.patch

> CombineHiveInputFormat Thread Pool Sizing
> -
>
> Key: HIVE-21210
> URL: https://issues.apache.org/jira/browse/HIVE-21210
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21210.1.patch
>
>
> Threadpools.
> Hive uses threadpools in several different places; each implementation is 
> a little different and requires different configuration. I think that Hive 
> needs to rein in and standardize the way that threadpools are used, and 
> threadpools should scale automatically without manual configuration. At any 
> given time, there are many hundreds of threads running in HS2 as the 
> number of simultaneous connections increases, and they surely cause contention 
> with one another.
> Here is an example:
> {code:java|title=CombineHiveInputFormat.java}
>   // max number of threads we can use to check non-combinable paths
>   private static final int MAX_CHECK_NONCOMBINABLE_THREAD_NUM = 50;
>   private static final int DEFAULT_NUM_PATH_PER_THREAD = 100;
> {code}
> When building the splits for an MR job, there are up to 50 threads running per 
> query and there is not much scaling here; it's simply a 1 thread : 100 files 
> ratio.  This implies that to process 5000 files, there are 50 threads; after 
> that, 50 threads are still used. Many Hive jobs these days involve more than 
> 5000 files, so it does not scale well at larger sizes.
> This is not configurable (even manually), it doesn't change when the hardware 
> specs increase, and 50 threads seems like a lot when a service must support 
> up to 80 connections:
> [https://www.cloudera.com/documentation/enterprise/5/latest/topics/admin_hive_tuning.html]
> Not to mention, I have never seen a scenario where HS2 is running on a host 
> all by itself and has the entire system dedicated to it. Therefore it should 
> be more friendly and spin up fewer threads.
> I am attaching a patch here that provides a few features:
>  * Common module that produces an {{ExecutorService}} which caps the number of 
> threads it spins up at the number of processors a host has. Keep in mind that 
> a class may submit as many work units ({{Callables}}) as it would like, but 
> the number of threads in the pool is capped.
>  * Common module for partitioning work. That is, allow for a generic 
> framework for dividing work into partitions (i.e. batches)
>  * Modify {{CombineHiveInputFormat}} to take advantage of both modules, 
> performing the same duties in a more object-oriented Java way than is currently implemented
>  * Add a partitioning (batching) implementation that enforces partitioning of 
> a {{Collection}} based on the natural log of the {{Collection}} size so that 
> it scales more slowly than a simple 1:100 ratio.
>  * Simplify unit test code for {{CombineHiveInputFormat}}
> My hope is to introduce these tools to {{CombineHiveInputFormat}} and then to 
> drop it into other places.  One of the things I will introduce here is a 
> "direct thread" {{ExecutorService}} so that even if there is a configuration 
> for a thread pool to be disabled, it will still use an {{ExecutorService}} so 
> that the project can avoid logic like "if this function is services by a 
> thread pool, use a {{ExecutorService}} (and remember to close it later!) 
> otherwise, create a single thread" so that things like [HIVE-16949] can be 
> avoided in the future.  Everything will just use an {{ExecutorService}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21044) Add SLF4J reporter to the metastore metrics system

2019-02-04 Thread Karthik Manamcheri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Manamcheri updated HIVE-21044:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add SLF4J reporter to the metastore metrics system
> --
>
> Key: HIVE-21044
> URL: https://issues.apache.org/jira/browse/HIVE-21044
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
>  Labels: metrics
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21044.1.patch, HIVE-21044.2.branch-3.patch, 
> HIVE-21044.2.patch, HIVE-21044.3.patch, HIVE-21044.4.patch, 
> HIVE-21044.branch-3.patch
>
>
> Let's add an SLF4J reporter as an option in the metrics reporting system. Currently 
> we support JMX, JSON and Console reporting.
> We will add a new option to {{hive.service.metrics.reporter}} called SLF4J. 
> We can use the 
> {{[Slf4jReporter|https://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/Slf4jReporter.html]}}
>  class.
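
For reference, a minimal standalone sketch of wiring Dropwizard's {{Slf4jReporter}} to a {{MetricRegistry}}; the actual metastore wiring and handling of {{hive.service.metrics.reporter}} in the patch may differ.

{code:java}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Slf4jReporter;
import org.slf4j.LoggerFactory;

import java.util.concurrent.TimeUnit;

public class Slf4jMetricsDemo {
  public static void main(String[] args) throws InterruptedException {
    MetricRegistry registry = new MetricRegistry();
    registry.counter("open_connections").inc();  // sample metric for the demo

    Slf4jReporter reporter = Slf4jReporter.forRegistry(registry)
        .outputTo(LoggerFactory.getLogger("metastore.metrics"))
        .convertRatesTo(TimeUnit.SECONDS)
        .convertDurationsTo(TimeUnit.MILLISECONDS)
        .build();
    reporter.start(10, TimeUnit.SECONDS);  // log a metrics snapshot every 10s

    Thread.sleep(11_000);                  // let at least one report fire
    reporter.stop();
  }
}
{code}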



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

