[jira] [Commented] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061425#comment-17061425
 ] 

Hive QA commented on HIVE-22990:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 806 
unchanged - 3 fixed = 806 total (was 809) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21153/dev-support/hive-personality.sh
 |
| git revision | master / 26cc315 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21153/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.24.patch, HIVE-22990.25.patch, HIVE-22990.patch
>
>

[jira] [Updated] (HIVE-23034) Arrow serializer should not keep the reference of arrow offset and validity buffers

2020-03-17 Thread Shubham Chaurasia (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubham Chaurasia updated HIVE-23034:
-
Attachment: HIVE-23034.01.patch
Status: Patch Available  (was: Open)

> Arrow serializer should not keep the reference of arrow offset and validity 
> buffers
> ---
>
> Key: HIVE-23034
> URL: https://issues.apache.org/jira/browse/HIVE-23034
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Serializers/Deserializers
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23034.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, part of the writeList() method in the Arrow serializer is 
> implemented like this - 
> {code:java}
> final ArrowBuf offsetBuffer = arrowVector.getOffsetBuffer();
> int nextOffset = 0;
> for (int rowIndex = 0; rowIndex < size; rowIndex++) {
>   int selectedIndex = rowIndex;
>   if (vectorizedRowBatch.selectedInUse) {
> selectedIndex = vectorizedRowBatch.selected[rowIndex];
>   }
>   if (hiveVector.isNull[selectedIndex]) {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
>   } else {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
> nextOffset += (int) hiveVector.lengths[selectedIndex];
> arrowVector.setNotNull(rowIndex);
>   }
> }
> offsetBuffer.setInt(size * OFFSET_WIDTH, nextOffset);
> {code}
> 1) Here we obtain a reference to {{final ArrowBuf offsetBuffer = 
> arrowVector.getOffsetBuffer();}} and keep updating the arrow vector and 
> offset vector. 
> Problem - 
> {{arrowVector.setNotNull(rowIndex)}} keeps checking the index and reallocates 
> the offset and validity buffers when a threshold is crossed, updates the 
> references internally and also releases the old buffers (which decrements the 
> buffer reference count). Now the reference obtained in 1) becomes obsolete. 
> Furthermore, if we try to read or write through the old buffer, we see - 
> {code:java}
> Caused by: io.netty.util.IllegalReferenceCountException: refCnt: 0
>   at 
> io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1413)
>   at io.netty.buffer.ArrowBuf.checkIndexD(ArrowBuf.java:131)
>   at io.netty.buffer.ArrowBuf.chk(ArrowBuf.java:162)
>   at io.netty.buffer.ArrowBuf.setInt(ArrowBuf.java:656)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:432)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:352)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:288)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:419)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:205)
> {code}
>  
> Solution - 
> This can be fixed by fetching the buffers ({{arrowVector.getOffsetBuffer()}}) 
> each time we want to update them. 
> In our internal tests this is seen very frequently on Arrow 0.8.0 and not on 
> 0.10.0, but it should be handled the same way for 0.10.0 too, since it does 
> the same thing internally.
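
As a rough illustration of the fix described above (an editorial sketch that reuses 
the variable names from the quoted snippet, not the code in the attached patch), the 
loop would re-read the offset buffer from the Arrow vector on every write instead of 
caching it across calls to setNotNull():

{code:java}
// Sketch only: never hold an ArrowBuf reference across setNotNull(), because
// setNotNull() may reallocate and release the old buffer.
int nextOffset = 0;
for (int rowIndex = 0; rowIndex < size; rowIndex++) {
  int selectedIndex = rowIndex;
  if (vectorizedRowBatch.selectedInUse) {
    selectedIndex = vectorizedRowBatch.selected[rowIndex];
  }
  // Fetch the current offset buffer each time; it may have been replaced.
  arrowVector.getOffsetBuffer().setInt(rowIndex * OFFSET_WIDTH, nextOffset);
  if (!hiveVector.isNull[selectedIndex]) {
    nextOffset += (int) hiveVector.lengths[selectedIndex];
    arrowVector.setNotNull(rowIndex);
  }
}
arrowVector.getOffsetBuffer().setInt(size * OFFSET_WIDTH, nextOffset);
{code}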



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23034) Arrow serializer should not keep the reference of arrow offset and validity buffers

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-23034:
--
Labels: pull-request-available  (was: )

> Arrow serializer should not keep the reference of arrow offset and validity 
> buffers
> ---
>
> Key: HIVE-23034
> URL: https://issues.apache.org/jira/browse/HIVE-23034
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Serializers/Deserializers
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
>
> Currently, part of the writeList() method in the Arrow serializer is 
> implemented like this - 
> {code:java}
> final ArrowBuf offsetBuffer = arrowVector.getOffsetBuffer();
> int nextOffset = 0;
> for (int rowIndex = 0; rowIndex < size; rowIndex++) {
>   int selectedIndex = rowIndex;
>   if (vectorizedRowBatch.selectedInUse) {
> selectedIndex = vectorizedRowBatch.selected[rowIndex];
>   }
>   if (hiveVector.isNull[selectedIndex]) {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
>   } else {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
> nextOffset += (int) hiveVector.lengths[selectedIndex];
> arrowVector.setNotNull(rowIndex);
>   }
> }
> offsetBuffer.setInt(size * OFFSET_WIDTH, nextOffset);
> {code}
> 1) Here we obtain a reference to {{final ArrowBuf offsetBuffer = 
> arrowVector.getOffsetBuffer();}} and keep updating the arrow vector and 
> offset vector. 
> Problem - 
> {{arrowVector.setNotNull(rowIndex)}} keeps checking the index and reallocates 
> the offset and validity buffers when a threshold is crossed, updates the 
> references internally and also releases the old buffers (which decrements the 
> buffer reference count). Now the reference obtained in 1) becomes obsolete. 
> Furthermore, if we try to read or write through the old buffer, we see - 
> {code:java}
> Caused by: io.netty.util.IllegalReferenceCountException: refCnt: 0
>   at 
> io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1413)
>   at io.netty.buffer.ArrowBuf.checkIndexD(ArrowBuf.java:131)
>   at io.netty.buffer.ArrowBuf.chk(ArrowBuf.java:162)
>   at io.netty.buffer.ArrowBuf.setInt(ArrowBuf.java:656)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:432)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:352)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:288)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:419)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:205)
> {code}
>  
> Solution - 
> This can be fixed by fetching the buffers ({{arrowVector.getOffsetBuffer()}}) 
> each time we want to update them. 
> In our internal tests this is seen very frequently on Arrow 0.8.0 and not on 
> 0.10.0, but it should be handled the same way for 0.10.0 too, since it does 
> the same thing internally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23034) Arrow serializer should not keep the reference of arrow offset and validity buffers

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23034?focusedWorklogId=405235&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405235
 ]

ASF GitHub Bot logged work on HIVE-23034:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 05:45
Start Date: 18/Mar/20 05:45
Worklog Time Spent: 10m 
  Work Description: ShubhamChaurasia commented on pull request #957: 
HIVE-23034: Arrow serializer should not keep the reference of arrow offset and 
validity buffers
URL: https://github.com/apache/hive/pull/957
 
 
   …ffset and validity buffers
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405235)
Remaining Estimate: 0h
Time Spent: 10m

> Arrow serializer should not keep the reference of arrow offset and validity 
> buffers
> ---
>
> Key: HIVE-23034
> URL: https://issues.apache.org/jira/browse/HIVE-23034
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Serializers/Deserializers
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, part of the writeList() method in the Arrow serializer is 
> implemented like this - 
> {code:java}
> final ArrowBuf offsetBuffer = arrowVector.getOffsetBuffer();
> int nextOffset = 0;
> for (int rowIndex = 0; rowIndex < size; rowIndex++) {
>   int selectedIndex = rowIndex;
>   if (vectorizedRowBatch.selectedInUse) {
> selectedIndex = vectorizedRowBatch.selected[rowIndex];
>   }
>   if (hiveVector.isNull[selectedIndex]) {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
>   } else {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
> nextOffset += (int) hiveVector.lengths[selectedIndex];
> arrowVector.setNotNull(rowIndex);
>   }
> }
> offsetBuffer.setInt(size * OFFSET_WIDTH, nextOffset);
> {code}
> 1) Here we obtain a reference to {{final ArrowBuf offsetBuffer = 
> arrowVector.getOffsetBuffer();}} and keep updating the arrow vector and 
> offset vector. 
> Problem - 
> {{arrowVector.setNotNull(rowIndex)}} keeps checking the index and reallocates 
> the offset and validity buffers when a threshold is crossed, updates the 
> references internally and also releases the old buffers (which decrements the 
> buffer reference count). Now the reference obtained in 1) becomes obsolete. 
> Furthermore, if we try to read or write through the old buffer, we see - 
> {code:java}
> Caused by: io.netty.util.IllegalReferenceCountException: refCnt: 0
>   at 
> io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1413)
>   at io.netty.buffer.ArrowBuf.checkIndexD(ArrowBuf.java:131)
>   at io.netty.buffer.ArrowBuf.chk(ArrowBuf.java:162)
>   at io.netty.buffer.ArrowBuf.setInt(ArrowBuf.java:656)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:432)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:352)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:288)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:419)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:205)
> {code}
>  
> Solution - 
> This can be fixed by fetching the buffers ({{arrowVector.getOffsetBuffer()}}) 
> each time we want to update them. 
> In our internal tests this is seen very frequently on Arrow 0.8.0 and not on 
> 0.10.0, but it should be handled the same way for 0.10.0 too, since it does 
> the same thing internally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-18983) Add support for table properties inheritance in Create table like

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-18983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061399#comment-17061399
 ] 

Hive QA commented on HIVE-18983:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919862/HIVE-18983.12.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21152/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21152/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21152/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2020-03-18 05:14:40.204
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-21152/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2020-03-18 05:14:40.207
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 26cc315 HIVE-23011: Shared work optimizer should check residual 
predicates when comparing joins (Jesus Camacho Rodriguez, reviewed by Vineet 
Garg)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 26cc315 HIVE-23011: Shared work optimizer should check residual 
predicates when comparing joins (Jesus Camacho Rodriguez, reviewed by Vineet 
Garg)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-03-18 05:14:40.958
+ rm -rf ../yetus_PreCommit-HIVE-Build-21152
+ mkdir ../yetus_PreCommit-HIVE-Build-21152
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-21152
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-21152/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: 
does not exist in index
error: 
a/ql/src/test/results/clientpositive/create_alter_list_bucketing_table1.q.out: 
does not exist in index
error: a/ql/src/test/results/clientpositive/create_like.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/create_like2.q.out: does not exist 
in index
error: a/ql/src/test/results/clientpositive/explain_ddl.q.out: does not exist 
in index
error: 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/common/StatsSetupConst.java:
 does not exist in index
Trying to apply the patch with -p1
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java:13084
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/create_alter_list_bucketing_table1.q.out:309
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/create_alter_list_bucketing_table1.q.out' 
with conflicts.
error: patch failed: ql/src/test/results/clientpositive/create_like.q.out:441
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/create_like.q.out' with 
conflicts.
error: patch failed: ql/src/test/results/clientpositive/create_like2.q.out:41
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/create_like2.q.out' with 
conflicts.
error: patch failed: ql/src/test/results/clientpositive/explain_ddl.q.out:448
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/explain_ddl.q.out' with 
conflicts.
error: 
standalone-metastore/src/main/java/org/apache/hadoop/hive/common/StatsSetupConst.java:
 does not 

[jira] [Commented] (HIVE-23037) Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061398#comment-17061398
 ] 

Hive QA commented on HIVE-23037:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996944/HIVE-23037.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18108 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence
 (batchId=252)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21151/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21151/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21151/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996944 - PreCommit-HIVE-Build

> Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus
> -
>
> Key: HIVE-23037
> URL: https://issues.apache.org/jira/browse/HIVE-23037
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23037.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Attachment: HIVE-23004.10.patch
Status: Patch Available  (was: Open)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.10.patch, 
> HIVE-23004.2.patch, HIVE-23004.4.patch, HIVE-23004.6.patch, 
> HIVE-23004.7.patch, HIVE-23004.8.patch, HIVE-23004.9.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Status: Open  (was: Patch Available)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, 
> HIVE-23004.8.patch, HIVE-23004.9.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-22932) Unable to kill Beeline with Ctrl+C

2020-03-17 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061381#comment-17061381
 ] 

bianqi edited comment on HIVE-22932 at 3/18/20, 4:41 AM:
-

{quote}Unknown HS2 problem when communicating with Thrift server.
 Error: org.apache.thrift.transport.TTransportException: 
java.net.SocketException: Broken pipe (state=08S01,code=0){quote}

h6. Your Thrift service is not running, or your Thrift service's port cannot 
communicate with Beeline.


was (Author: bianqi):
{quote}Unknown HS2 problem when communicating with Thrift server.
Error: org.apache.thrift.transport.TTransportException: 
java.net.SocketException: Broken pipe (state=08S01,code=0)

your thrift service is not running or your thrift service's port cannot 
communicate with beeline.
{quote}

> Unable to kill Beeline with Ctrl+C
> --
>
> Key: HIVE-22932
> URL: https://issues.apache.org/jira/browse/HIVE-22932
> Project: Hive
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Priority: Blocker
>
> Stopped the server and tried to stop the Beeline console with "Ctrl+C", but 
> it is unable to kill the process and the process hangs. 
> The read call is blocked. 
> The thread dump is attached.
> 0: jdbc:hive2://localhost:1> show tables;
> Unknown HS2 problem when communicating with Thrift server.
> Error: org.apache.thrift.transport.TTransportException: 
> java.net.SocketException: Broken pipe (state=08S01,code=0)
> 0: jdbc:hive2://localhost:1> Interrupting... Please be patient this may 
> take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> 2020-02-26 17:40:42
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.72-b15 mixed mode):
> "NonBlockingInputStreamThread" #16 daemon prio=5 os_prio=0 
> tid=0x7f0318c10800 nid=0x258c in Object.wait() [0x7f031c193000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xfe9113c0> (a 
> jline.internal.NonBlockingInputStream)
> at 
> jline.internal.NonBlockingInputStream.run(NonBlockingInputStream.java:278)
> - locked <0xfe9113c0> (a 
> jline.internal.NonBlockingInputStream)
> at java.lang.Thread.run(Thread.java:745)
> "Service Thread" #11 daemon prio=9 os_prio=0 tid=0x7f032006c000 
> nid=0x257b runnable [0x]
>java.lang.Thread.State: RUNNABLE
> "C1 CompilerThread3" #10 daemon prio=9 os_prio=0 tid=0x7f0320060800 
> nid=0x257a waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread2" #9 daemon prio=9 os_prio=0 tid=0x7f0320056000 
> nid=0x2579 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread1" #8 daemon prio=9 os_prio=0 tid=0x7f0320054000 
> nid=0x2578 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread0" #7 daemon prio=9 os_prio=0 tid=0x7f0320051000 
> nid=0x2577 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "JDWP Event Helper Thread" #6 daemon prio=10 os_prio=0 tid=0x7f032004f000 
> nid=0x2576 runnable [0x]
>java.lang.Thread.State: RUNNABLE
> "JDWP Transport Listener: dt_socket" #5 daemon prio=10 os_prio=0 
> tid=0x7f032004b800 nid=0x2575 runnable [0x]
>java.lang.Thread.State: RUNNABLE
> "Signal Dispatcher" #4 daemon prio=9 os_prio=0 tid=0x7f0320035800 
> nid=0x2574 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Finalizer" #3 daemon prio=8 os_prio=0 tid=0x7f0320003800 nid=0x2572 in 
> Object.wait() [0x7f0324b1c000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on 

[jira] [Commented] (HIVE-22932) Unable to kill Beeline with Ctrl+C

2020-03-17 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061381#comment-17061381
 ] 

bianqi commented on HIVE-22932:
---

{quote}Unknown HS2 problem when communicating with Thrift server.
Error: org.apache.thrift.transport.TTransportException: 
java.net.SocketException: Broken pipe (state=08S01,code=0)

your thrift service is not running or your thrift service's port cannot 
communicate with beeline.
{quote}

> Unable to kill Beeline with Ctrl+C
> --
>
> Key: HIVE-22932
> URL: https://issues.apache.org/jira/browse/HIVE-22932
> Project: Hive
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Priority: Blocker
>
> Stopped the server and tried to stop the Beeline console with "Ctrl+C", but 
> it is unable to kill the process and the process hangs. 
> The read call is blocked. 
> The thread dump is attached.
> 0: jdbc:hive2://localhost:1> show tables;
> Unknown HS2 problem when communicating with Thrift server.
> Error: org.apache.thrift.transport.TTransportException: 
> java.net.SocketException: Broken pipe (state=08S01,code=0)
> 0: jdbc:hive2://localhost:1> Interrupting... Please be patient this may 
> take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> Interrupting... Please be patient this may take some time.
> 2020-02-26 17:40:42
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.72-b15 mixed mode):
> "NonBlockingInputStreamThread" #16 daemon prio=5 os_prio=0 
> tid=0x7f0318c10800 nid=0x258c in Object.wait() [0x7f031c193000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xfe9113c0> (a 
> jline.internal.NonBlockingInputStream)
> at 
> jline.internal.NonBlockingInputStream.run(NonBlockingInputStream.java:278)
> - locked <0xfe9113c0> (a 
> jline.internal.NonBlockingInputStream)
> at java.lang.Thread.run(Thread.java:745)
> "Service Thread" #11 daemon prio=9 os_prio=0 tid=0x7f032006c000 
> nid=0x257b runnable [0x]
>java.lang.Thread.State: RUNNABLE
> "C1 CompilerThread3" #10 daemon prio=9 os_prio=0 tid=0x7f0320060800 
> nid=0x257a waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread2" #9 daemon prio=9 os_prio=0 tid=0x7f0320056000 
> nid=0x2579 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread1" #8 daemon prio=9 os_prio=0 tid=0x7f0320054000 
> nid=0x2578 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "C2 CompilerThread0" #7 daemon prio=9 os_prio=0 tid=0x7f0320051000 
> nid=0x2577 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "JDWP Event Helper Thread" #6 daemon prio=10 os_prio=0 tid=0x7f032004f000 
> nid=0x2576 runnable [0x]
>java.lang.Thread.State: RUNNABLE
> "JDWP Transport Listener: dt_socket" #5 daemon prio=10 os_prio=0 
> tid=0x7f032004b800 nid=0x2575 runnable [0x]
>java.lang.Thread.State: RUNNABLE
> "Signal Dispatcher" #4 daemon prio=9 os_prio=0 tid=0x7f0320035800 
> nid=0x2574 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Finalizer" #3 daemon prio=8 os_prio=0 tid=0x7f0320003800 nid=0x2572 in 
> Object.wait() [0x7f0324b1c000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xfe930770> (a 
> java.lang.ref.ReferenceQueue$Lock)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
> - locked <0xfe930770> (a java.lang.ref.ReferenceQueue$Lock)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164)
> at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
> "Reference Handler" #2 

[jira] [Commented] (HIVE-23037) Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061378#comment-17061378
 ] 

Hive QA commented on HIVE-23037:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
47s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21151/dev-support/hive-personality.sh
 |
| git revision | master / 26cc315 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21151/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus
> -
>
> Key: HIVE-23037
> URL: https://issues.apache.org/jira/browse/HIVE-23037
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23037.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061368#comment-17061368
 ] 

Hive QA commented on HIVE-23004:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996971/HIVE-23004.9.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 18109 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_groupingset_bug]
 (batchId=192)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_tmp_table]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin_decimal_vectorized]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_time_window]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_time_window_2]
 (batchId=184)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal64_case_when_nvl]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal64_case_when_nvl_cbo]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal64_multi_vertex]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_aggregate]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_round]
 (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_trailing]
 (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_udf]
 (batchId=194)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_outer_reference_windowed]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_semijoin_reduction2]
 (batchId=179)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_data_types] 
(batchId=153)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_decimal_aggregate]
 (batchId=126)
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence
 (batchId=252)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21150/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21150/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21150/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996971 - PreCommit-HIVE-Build

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, 
> HIVE-23004.8.patch, HIVE-23004.9.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=405190&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405190
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 03:36
Start Date: 18/Mar/20 03:36
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r394089629
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/llap/ProactiveEviction.java
 ##
 @@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.llap;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import javax.net.SocketFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.io.CacheTag;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos;
+import org.apache.hadoop.hive.llap.impl.LlapManagementProtocolClientImpl;
+import org.apache.hadoop.hive.llap.registry.LlapServiceInstance;
+import org.apache.hadoop.hive.llap.registry.impl.LlapRegistryService;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.net.NetUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Through this class the caller (typically HS2) can request eviction of 
buffers from LLAP cache by specifying a DB,
+ * table or partition name/(value). Request sending is implemented here.
+ */
+public final class ProactiveEviction {
+
+  private ProactiveEviction() {
+// Not to be used;
+  }
+
+  /**
+   * Trigger LLAP cache eviction of buffers related to entities residing in 
request parameter.
+   * @param conf
+   * @param request
+   */
+  public static void evict(Configuration conf, Request request) {
+if (!HiveConf.getBoolVar(conf, 
HiveConf.ConfVars.LLAP_IO_PROACTIVE_EVICTION_ENABLED)) {
+  return;
+}
+
+try {
+  LlapRegistryService llapRegistryService = 
LlapRegistryService.getClient(conf);
+  Collection<LlapServiceInstance> instances = 
llapRegistryService.getInstances().getAll();
+  if (instances.size() == 0) {
+// Not in LLAP mode.
+return;
+  }
+  ExecutorService executorService = Executors.newCachedThreadPool();
 
 Review comment:
  I hope you are convinced that asynchronous stuff can be tricky to get right 
and most of the benefit can turn into a debugging nightmare. Let me know if you 
have more questions.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405190)
Time Spent: 4h  (was: 3h 50m)

> Add necessary endpoints for proactive cache eviction
> 
>
> Key: HIVE-22821
> URL: https://issues.apache.org/jira/browse/HIVE-22821
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22821.0.patch, HIVE-22821.1.patch, 
> HIVE-22821.2.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Implement the parts required for iHS2 -> LLAP daemons communication:
>  * protobuf message schema and endpoints
>  * Hive configuration
>  * for use cases:
>  ** dropping db
>  ** dropping table
>  

[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=405188&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405188
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 03:35
Start Date: 18/Mar/20 03:35
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r394089275
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/llap/ProactiveEviction.java
 ##
 @@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.llap;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import javax.net.SocketFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.io.CacheTag;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos;
+import org.apache.hadoop.hive.llap.impl.LlapManagementProtocolClientImpl;
+import org.apache.hadoop.hive.llap.registry.LlapServiceInstance;
+import org.apache.hadoop.hive.llap.registry.impl.LlapRegistryService;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.net.NetUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Through this class the caller (typically HS2) can request eviction of 
buffers from LLAP cache by specifying a DB,
+ * table or partition name/(value). Request sending is implemented here.
+ */
+public final class ProactiveEviction {
+
+  private ProactiveEviction() {
+// Not to be used;
+  }
+
+  /**
+   * Trigger LLAP cache eviction of buffers related to entities residing in 
request parameter.
+   * @param conf
+   * @param request
+   */
+  public static void evict(Configuration conf, Request request) {
+if (!HiveConf.getBoolVar(conf, 
HiveConf.ConfVars.LLAP_IO_PROACTIVE_EVICTION_ENABLED)) {
+  return;
+}
+
+try {
+  LlapRegistryService llapRegistryService = 
LlapRegistryService.getClient(conf);
+  Collection<LlapServiceInstance> instances = 
llapRegistryService.getInstances().getAll();
+  if (instances.size() == 0) {
+// Not in LLAP mode.
+return;
+  }
+  ExecutorService executorService = Executors.newCachedThreadPool();
 
 Review comment:
   For number 4 you can refer to this spec 
https://wiki.sei.cmu.edu/confluence/display/java/TPS02-J.+Ensure+that+tasks+submitted+to+a+thread+pool+are+interruptible
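
As a generic illustration of the linked rule (TPS02-J) only, and not code from this 
patch or review: a task submitted to a pool should respond to interruption so the 
caller can bound how long it waits and cancel a hung call.

{code:java}
import java.util.concurrent.*;

public class InterruptibleTaskSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    Future<?> f = pool.submit(() -> {
      try {
        // Placeholder for blocking work; real tasks should use calls with timeouts.
        Thread.sleep(Long.MAX_VALUE);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // restore the flag and return promptly
      }
    });
    try {
      f.get(2, TimeUnit.SECONDS);            // bound how long the caller waits
    } catch (TimeoutException e) {
      f.cancel(true);                        // delivers the interrupt to the task
    } finally {
      pool.shutdown();
      pool.awaitTermination(5, TimeUnit.SECONDS);
    }
  }
}
{code}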
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405188)
Time Spent: 3h 50m  (was: 3h 40m)

> Add necessary endpoints for proactive cache eviction
> 
>
> Key: HIVE-22821
> URL: https://issues.apache.org/jira/browse/HIVE-22821
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22821.0.patch, HIVE-22821.1.patch, 
> HIVE-22821.2.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Implement the parts required for iHS2 -> LLAP daemons communication:
>  * protobuf message schema and endpoints
>  * Hive configuration
>  * for use cases:
>  ** dropping db
>  ** dropping table
>  

[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=405187&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405187
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 03:34
Start Date: 18/Mar/20 03:34
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r394089108
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/llap/ProactiveEviction.java
 ##
 @@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.llap;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import javax.net.SocketFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.io.CacheTag;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos;
+import org.apache.hadoop.hive.llap.impl.LlapManagementProtocolClientImpl;
+import org.apache.hadoop.hive.llap.registry.LlapServiceInstance;
+import org.apache.hadoop.hive.llap.registry.impl.LlapRegistryService;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.net.NetUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Through this class the caller (typically HS2) can request eviction of 
buffers from LLAP cache by specifying a DB,
+ * table or partition name/(value). Request sending is implemented here.
+ */
+public final class ProactiveEviction {
+
+  private ProactiveEviction() {
+// Not to be used;
+  }
+
+  /**
+   * Trigger LLAP cache eviction of buffers related to entities residing in 
request parameter.
+   * @param conf
+   * @param request
+   */
+  public static void evict(Configuration conf, Request request) {
+if (!HiveConf.getBoolVar(conf, 
HiveConf.ConfVars.LLAP_IO_PROACTIVE_EVICTION_ENABLED)) {
+  return;
+}
+
+try {
+  LlapRegistryService llapRegistryService = 
LlapRegistryService.getClient(conf);
+  Collection<LlapServiceInstance> instances = 
llapRegistryService.getInstances().getAll();
+  if (instances.size() == 0) {
+// Not in LLAP mode.
+return;
+  }
+  ExecutorService executorService = Executors.newCachedThreadPool();
 
 Review comment:
  To be clearer about point one, you can read the [java 
spec](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html); 
it is always a very good point to start: **An unused ExecutorService should be 
shut down to allow reclamation of its resources.**
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405187)
Time Spent: 3h 40m  (was: 3.5h)

> Add necessary endpoints for proactive cache eviction
> 
>
> Key: HIVE-22821
> URL: https://issues.apache.org/jira/browse/HIVE-22821
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22821.0.patch, HIVE-22821.1.patch, 
> HIVE-22821.2.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Implement the parts required for iHS2 -> LLAP daemons communication:
>  * protobuf message 

[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=405173&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405173
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 03:10
Start Date: 18/Mar/20 03:10
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r394083474
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/llap/ProactiveEviction.java
 ##
 @@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.llap;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import javax.net.SocketFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.io.CacheTag;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos;
+import org.apache.hadoop.hive.llap.impl.LlapManagementProtocolClientImpl;
+import org.apache.hadoop.hive.llap.registry.LlapServiceInstance;
+import org.apache.hadoop.hive.llap.registry.impl.LlapRegistryService;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.net.NetUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Through this class the caller (typically HS2) can request eviction of buffers from LLAP cache by specifying a DB,
+ * table or partition name/(value). Request sending is implemented here.
+ */
+public final class ProactiveEviction {
+
+  private ProactiveEviction() {
+    // Not to be used;
+  }
+
+  /**
+   * Trigger LLAP cache eviction of buffers related to entities residing in request parameter.
+   * @param conf
+   * @param request
+   */
+  public static void evict(Configuration conf, Request request) {
+    if (!HiveConf.getBoolVar(conf, HiveConf.ConfVars.LLAP_IO_PROACTIVE_EVICTION_ENABLED)) {
+      return;
+    }
+
+    try {
+      LlapRegistryService llapRegistryService = LlapRegistryService.getClient(conf);
+      Collection<LlapServiceInstance> instances = llapRegistryService.getInstances().getAll();
+      if (instances.size() == 0) {
+        // Not in LLAP mode.
+        return;
+      }
+      ExecutorService executorService = Executors.newCachedThreadPool();
 
 Review comment:
   This code has multiple issues and is not safe.
   
   1. First, there is no explicit shutdown of the executor, which is a big issue by 
itself: some implementations shut themselves down in a finalizer, **but the JVM 
spec does not guarantee that finalize() is ever called**.
   2. Second, this code creates a new thread pool on every call, which is a huge 
waste of resources.
   3. Third, the threads are not named, which makes debugging harder.
   4. Fourth, if some of those tasks hang on IO, what will happen? Will the JVM 
ever stop?
   5. Nth issue: assume we have 500 LLAP nodes (yes, we have clusters running 500 
nodes); this will fire 500 threads. Is that okay?
   
   IMO the first cut of this PR should avoid all this async machinery and start 
with a simple version, OR you need a static fixed thread pool with a well-defined 
**life cycle** and a clear strategy for dealing with **blocking IO**, **errors** 
and **shutdown** (a minimal sketch of such a pool follows below).
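   For that alternative — a single static, fixed-size pool with named threads and 
an explicit life cycle — a minimal sketch could look like the following. This is 
an illustration under assumptions, not part of the patch: the class name, pool 
size, timeouts and the shutdown-hook strategy are placeholders, and a real 
implementation would size the pool from configuration and define per-task 
handling of blocking IO and errors.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public final class EvictionExecutor {

  // Names every worker thread and marks it as a daemon so a task stuck on IO
  // cannot keep the JVM alive at shutdown.
  private static final ThreadFactory NAMED_DAEMON_FACTORY = new ThreadFactory() {
    private final AtomicInteger counter = new AtomicInteger();

    @Override
    public Thread newThread(Runnable r) {
      Thread t = new Thread(r, "proactive-eviction-" + counter.incrementAndGet());
      t.setDaemon(true);
      return t;
    }
  };

  // One shared, bounded pool instead of a new cached pool per call.
  private static final ExecutorService POOL =
      Executors.newFixedThreadPool(8, NAMED_DAEMON_FACTORY);

  static {
    // Well-defined shutdown: drain on JVM exit with a bounded wait, then interrupt.
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
      POOL.shutdown();
      try {
        if (!POOL.awaitTermination(10, TimeUnit.SECONDS)) {
          POOL.shutdownNow();
        }
      } catch (InterruptedException e) {
        POOL.shutdownNow();
        Thread.currentThread().interrupt();
      }
    }, "proactive-eviction-shutdown"));
  }

  private EvictionExecutor() {
  }

  public static ExecutorService pool() {
    return POOL;
  }
}
```

   The fixed size caps the thread count regardless of how many LLAP nodes are 
targeted, and the daemon threads plus the shutdown hook give the pool a bounded, 
predictable life cycle.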
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405173)
Time Spent: 3.5h  (was: 3h 20m)

> Add 

[jira] [Commented] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061347#comment-17061347
 ] 

Hive QA commented on HIVE-23004:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
40s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} serde: The patch generated 6 new + 289 unchanged - 0 
fixed = 295 total (was 289) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 28 new + 849 unchanged - 3 
fixed = 877 total (was 852) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 2 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} serde generated 3 new + 194 unchanged - 3 fixed = 197 
total (was 197) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
52s{color} | {color:red} ql generated 2 new + 1530 unchanged - 1 fixed = 1532 
total (was 1531) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:serde |
|  |  new 
org.apache.hadoop.hive.serde2.binarysortable.fast.BinarySortableDeserializeRead(TypeInfo[],
 DataTypePhysicalVariation[], boolean, boolean[], byte[], byte[]) may expose 
internal representation by storing an externally mutable object into 
BinarySortableDeserializeRead.columnNotNullMarker  At 
BinarySortableDeserializeRead.java:byte[], byte[]) may expose internal 
representation by storing an externally mutable object into 
BinarySortableDeserializeRead.columnNotNullMarker  At 
BinarySortableDeserializeRead.java:[line 145] |
|  |  new 
org.apache.hadoop.hive.serde2.binarysortable.fast.BinarySortableDeserializeRead(TypeInfo[],
 DataTypePhysicalVariation[], boolean, boolean[], byte[], byte[]) may expose 
internal representation by storing an externally mutable object into 
BinarySortableDeserializeRead.columnNullMarker  At 
BinarySortableDeserializeRead.java:byte[], byte[]) may expose internal 
representation by storing an externally mutable object into 
BinarySortableDeserializeRead.columnNullMarker  At 
BinarySortableDeserializeRead.java:[line 144] |
|  |  new 
org.apache.hadoop.hive.serde2.binarysortable.fast.BinarySortableDeserializeRead(TypeInfo[],
 DataTypePhysicalVariation[], boolean, boolean[], byte[], byte[]) may expose 
internal representation by storing an externally mutable object into 
BinarySortableDeserializeRead.columnSortOrderIsDesc  At 
BinarySortableDeserializeRead.java:byte[], 

[jira] [Assigned] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi reassigned HIVE-23040:
--


> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23039) Checkpointing for repl dump bootstrap phase

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi reassigned HIVE-23039:
--


> Checkpointing for repl dump bootstrap phase
> ---
>
> Key: HIVE-23039
> URL: https://issues.apache.org/jira/browse/HIVE-23039
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061322#comment-17061322
 ] 

Hive QA commented on HIVE-22997:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996939/HIVE-22997.9.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 18109 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_partition_cluster]
 (batchId=190)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query28] 
(batchId=306)
org.apache.hadoop.hive.ql.parse.TestScheduledReplicationScenarios.testExternalTablesReplLoadBootstrapIncr
 (batchId=270)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21149/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21149/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21149/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996939 - PreCommit-HIVE-Build

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22476) Hive datediff function provided inconsistent results when hive.fetch.task.conversion is set to none

2020-03-17 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated HIVE-22476:
--
Attachment: HIVE-22476.9.patch

> Hive datediff function provided inconsistent results when 
> hive.fetch.task.conversion is set to none
> ---
>
> Key: HIVE-22476
> URL: https://issues.apache.org/jira/browse/HIVE-22476
> Project: Hive
>  Issue Type: Bug
>Reporter: Slim Bouguerra
>Assignee: Slim Bouguerra
>Priority: Major
> Attachments: HIVE-22476.2.patch, HIVE-22476.3.patch, 
> HIVE-22476.5.patch, HIVE-22476.6.patch, HIVE-22476.7.patch, 
> HIVE-22476.7.patch, HIVE-22476.8.patch, HIVE-22476.8.patch, 
> HIVE-22476.8.patch, HIVE-22476.9.patch
>
>
> The actual issue stems from the different date parsers used by various parts 
> of the engine.
> The fetch task uses udfdatediff via {code} 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFToDate{code} while the 
> vectorized llap execution uses {code}VectorUDFDateDiffScalarCol{code}.
> This fix is meant to be minimally intrusive and adds more support to 
> GenericUDFToDate by enhancing the parser.
> For the longer term it would be better to use one parser for all the operators.
> Thanks [~Rajkumar Singh] for the repro example
> {code} 
> create external table testdatediff(datetimecol string) stored as orc;
> insert into testdatediff values ('2019-09-09T10:45:49+02:00'),('2019-07-24');
> select datetimecol from testdatediff where datediff(cast(current_timestamp as 
> string), datetimecol)<183;
> set hive.fetch.task.conversion=none;
> select datetimecol from testdatediff where datediff(cast(current_timestamp as 
> string), datetimecol)<183;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Attachment: HIVE-23004.9.patch
Status: Patch Available  (was: Open)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, 
> HIVE-23004.8.patch, HIVE-23004.9.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Attachment: (was: HIVE-23004.9.patch)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Status: Open  (was: Patch Available)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Status: Open  (was: Patch Available)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, 
> HIVE-23004.8.patch, HIVE-23004.9.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Attachment: HIVE-23004.9.patch
Status: Patch Available  (was: Open)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, 
> HIVE-23004.8.patch, HIVE-23004.9.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Status: In Progress  (was: Patch Available)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.24.patch, HIVE-22990.25.patch, HIVE-22990.patch
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Attachment: HIVE-22990.25.patch
Status: Patch Available  (was: In Progress)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.24.patch, HIVE-22990.25.patch, HIVE-22990.patch
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=405114=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405114
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 01:48
Start Date: 18/Mar/20 01:48
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #949: HIVE-22990 Add 
file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r394056775
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -398,6 +319,21 @@ private void dropTablesExcludedInReplScope(ReplScope replScope) throws HiveExcep
  dbName);
   }
 
+  private void createReplLoadCompleteAckTask() {
+    if ((work.isIncrementalLoad() && !work.incrementalLoadTasksBuilder().hasMoreWork() && !work.hasBootstrapLoadTasks())
 
 Review comment:
   The external table tasks are already added before this method is called, but I 
have still added the check.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405114)
Time Spent: 5h 40m  (was: 5.5h)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.24.patch, HIVE-22990.patch
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=405111=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405111
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 01:40
Start Date: 18/Mar/20 01:40
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #949: HIVE-22990 Add 
file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r394056775
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -398,6 +319,21 @@ private void dropTablesExcludedInReplScope(ReplScope replScope) throws HiveExcep
  dbName);
   }
 
+  private void createReplLoadCompleteAckTask() {
+    if ((work.isIncrementalLoad() && !work.incrementalLoadTasksBuilder().hasMoreWork() && !work.hasBootstrapLoadTasks())
 
 Review comment:
   The external table tasks are already added before this method is called
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405111)
Time Spent: 5.5h  (was: 5h 20m)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.24.patch, HIVE-22990.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22476) Hive datediff function provided inconsistent results when hive.fetch.task.conversion is set to none

2020-03-17 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061298#comment-17061298
 ] 

Jesus Camacho Rodriguez commented on HIVE-22476:


[~bslim], could you rebase this patch / create a PR so we can check it in? 
Thanks

> Hive datediff function provided inconsistent results when 
> hive.fetch.task.conversion is set to none
> ---
>
> Key: HIVE-22476
> URL: https://issues.apache.org/jira/browse/HIVE-22476
> Project: Hive
>  Issue Type: Bug
>Reporter: Slim Bouguerra
>Assignee: Slim Bouguerra
>Priority: Major
> Attachments: HIVE-22476.2.patch, HIVE-22476.3.patch, 
> HIVE-22476.5.patch, HIVE-22476.6.patch, HIVE-22476.7.patch, 
> HIVE-22476.7.patch, HIVE-22476.8.patch, HIVE-22476.8.patch, HIVE-22476.8.patch
>
>
> The actual issue stems from the different date parsers used by various parts 
> of the engine.
> The fetch task uses udfdatediff via {code} 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFToDate{code} while the 
> vectorized llap execution uses {code}VectorUDFDateDiffScalarCol{code}.
> This fix is meant to be minimally intrusive and adds more support to 
> GenericUDFToDate by enhancing the parser.
> For the longer term it would be better to use one parser for all the operators.
> Thanks [~Rajkumar Singh] for the repro example
> {code} 
> create external table testdatediff(datetimecol string) stored as orc;
> insert into testdatediff values ('2019-09-09T10:45:49+02:00'),('2019-07-24');
> select datetimecol from testdatediff where datediff(cast(current_timestamp as 
> string), datetimecol)<183;
> set hive.fetch.task.conversion=none;
> select datetimecol from testdatediff where datediff(cast(current_timestamp as 
> string), datetimecol)<183;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-22476) Hive datediff function provided inconsistent results when hive.fetch.task.conversion is set to none

2020-03-17 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061298#comment-17061298
 ] 

Jesus Camacho Rodriguez edited comment on HIVE-22476 at 3/18/20, 1:34 AM:
--

[~bslim], could you rebase this patch / get a green run so we can check it in? 
Thanks


was (Author: jcamachorodriguez):
[~bslim], could you rebase this patch / create a PR so we can check it in? 
Thanks

> Hive datediff function provided inconsistent results when 
> hive.fetch.task.conversion is set to none
> ---
>
> Key: HIVE-22476
> URL: https://issues.apache.org/jira/browse/HIVE-22476
> Project: Hive
>  Issue Type: Bug
>Reporter: Slim Bouguerra
>Assignee: Slim Bouguerra
>Priority: Major
> Attachments: HIVE-22476.2.patch, HIVE-22476.3.patch, 
> HIVE-22476.5.patch, HIVE-22476.6.patch, HIVE-22476.7.patch, 
> HIVE-22476.7.patch, HIVE-22476.8.patch, HIVE-22476.8.patch, HIVE-22476.8.patch
>
>
> The actual issue stems from the different date parsers used by various parts 
> of the engine.
> The fetch task uses udfdatediff via {code} 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFToDate{code} while the 
> vectorized llap execution uses {code}VectorUDFDateDiffScalarCol{code}.
> This fix is meant to be minimally intrusive and adds more support to 
> GenericUDFToDate by enhancing the parser.
> For the longer term it would be better to use one parser for all the operators.
> Thanks [~Rajkumar Singh] for the repro example
> {code} 
> create external table testdatediff(datetimecol string) stored as orc;
> insert into testdatediff values ('2019-09-09T10:45:49+02:00'),('2019-07-24');
> select datetimecol from testdatediff where datediff(cast(current_timestamp as 
> string), datetimecol)<183;
> set hive.fetch.task.conversion=none;
> select datetimecol from testdatediff where datediff(cast(current_timestamp as 
> string), datetimecol)<183;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061294#comment-17061294
 ] 

Hive QA commented on HIVE-22997:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
48s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} ql: The patch generated 0 new + 76 unchanged - 1 
fixed = 76 total (was 77) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 649 
unchanged - 1 fixed = 649 total (was 650) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
54s{color} | {color:red} ql generated 1 new + 1530 unchanged - 1 fixed = 1531 
total (was 1531) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  org.apache.hadoop.hive.ql.exec.repl.ReplDumpWork is Serializable; 
consider declaring a serialVersionUID  At ReplDumpWork.java:a serialVersionUID  
At ReplDumpWork.java:[lines 39-119] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21149/dev-support/hive-personality.sh
 |
| git revision | master / 26cc315 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21149/yetus/new-findbugs-ql.html
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21149/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> 

[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=405100=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405100
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 01:19
Start Date: 18/Mar/20 01:19
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #949: HIVE-22990 Add 
file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r394056775
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -398,6 +319,21 @@ private void dropTablesExcludedInReplScope(ReplScope replScope) throws HiveExcep
  dbName);
   }
 
+  private void createReplLoadCompleteAckTask() {
+    if ((work.isIncrementalLoad() && !work.incrementalLoadTasksBuilder().hasMoreWork() && !work.hasBootstrapLoadTasks())
 
 Review comment:
   The check is already present before the method is called.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405100)
Time Spent: 5h 20m  (was: 5h 10m)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.24.patch, HIVE-22990.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Attachment: HIVE-22990.24.patch
Status: Patch Available  (was: In Progress)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.24.patch, HIVE-22990.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Status: In Progress  (was: Patch Available)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23035) Scheduled query executor may hang in case TezAMs are launched on-demand

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061278#comment-17061278
 ] 

Hive QA commented on HIVE-23035:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996931/HIVE-23035.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18108 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21148/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21148/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21148/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996931 - PreCommit-HIVE-Build

> Scheduled query executor may hang in case TezAMs are launched on-demand
> ---
>
> Key: HIVE-23035
> URL: https://issues.apache.org/jira/browse/HIVE-23035
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23035.01.patch
>
>
> Right now the schq executor hangs during session initialization - because it 
> tries to open the tez session while it initializes the SessionState



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23035) Scheduled query executor may hang in case TezAMs are launched on-demand

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061260#comment-17061260
 ] 

Hive QA commented on HIVE-23035:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
46s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21148/dev-support/hive-personality.sh
 |
| git revision | master / 26cc315 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21148/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Scheduled query executor may hang in case TezAMs are launched on-demand
> ---
>
> Key: HIVE-23035
> URL: https://issues.apache.org/jira/browse/HIVE-23035
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23035.01.patch
>
>
> Right now the schq executor hangs during session initialization - because it 
> tries to open the tez session while it initializes the SessionState



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22842) Timestamp/date vectors in Arrow serializer should use correct calendar for value representation

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061252#comment-17061252
 ] 

Hive QA commented on HIVE-22842:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996920/HIVE-22842.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18117 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21147/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21147/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21147/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996920 - PreCommit-HIVE-Build

> Timestamp/date vectors in Arrow serializer should use correct calendar for 
> value representation
> ---
>
> Key: HIVE-22842
> URL: https://issues.apache.org/jira/browse/HIVE-22842
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jesus Camacho Rodriguez
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22842.01.patch, HIVE-22842.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22842) Timestamp/date vectors in Arrow serializer should use correct calendar for value representation

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061221#comment-17061221
 ] 

Hive QA commented on HIVE-22842:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
47s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 1 new + 210 unchanged - 2 
fixed = 211 total (was 212) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21147/dev-support/hive-personality.sh
 |
| git revision | master / 26cc315 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21147/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21147/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: common ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21147/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Timestamp/date vectors in Arrow serializer should use correct calendar for 
> value representation
> ---
>
> Key: HIVE-22842
> URL: https://issues.apache.org/jira/browse/HIVE-22842
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jesus Camacho Rodriguez
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22842.01.patch, HIVE-22842.02.patch
>
>   

[jira] [Commented] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061210#comment-17061210
 ] 

Hive QA commented on HIVE-22990:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996945/HIVE-22990.23.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18108 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.beeline.TestBeeLineWithArgs.testRowsAffected (batchId=286)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21146/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21146/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21146/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996945 - PreCommit-HIVE-Build

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22842) Timestamp/date vectors in Arrow serializer should use correct calendar for value representation

2020-03-17 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061203#comment-17061203
 ] 

Jesus Camacho Rodriguez commented on HIVE-22842:


+1 (pending additional tests as discussed)

> Timestamp/date vectors in Arrow serializer should use correct calendar for 
> value representation
> ---
>
> Key: HIVE-22842
> URL: https://issues.apache.org/jira/browse/HIVE-22842
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jesus Camacho Rodriguez
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22842.01.patch, HIVE-22842.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22098) Data loss occurs when multiple tables are join with different bucket_version

2020-03-17 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061197#comment-17061197
 ] 

David Mollitor commented on HIVE-22098:
---

I linked this case to [HIVE-18983], because when a table is created with 
{{CREATE TABLE LIKE}}, the bucket_version information is missing, which later 
triggers this issue.

> Data loss occurs when multiple tables are join with different bucket_version
> 
>
> Key: HIVE-22098
> URL: https://issues.apache.org/jira/browse/HIVE-22098
> Project: Hive
>  Issue Type: Bug
>  Components: Operators
>Affects Versions: 3.1.0, 3.1.2
>Reporter: LuGuangMing
>Assignee: LuGuangMing
>Priority: Blocker
>  Labels: data-loss, wrongresults
> Attachments: HIVE-22098.1.patch, image-2019-08-12-18-45-15-771.png, 
> join_test.sql, table_a_data.orc, table_b_data.orc, table_c_data.orc
>
>
> When tables with different bucketVersion values are joined and the number of 
> reducers is greater than 2, the result is incorrect (*data loss*).
>  *Scenario 1*: A three-table join. The intermediate result of joining table_a 
> (the first table) with table_b (the second table) is recorded as tmp_a_b. When 
> it is joined with the third table, which has bucket_version=2 (the default for 
> tables created after hive-3.0.0), the temporary data tmp_a_b is initialized 
> with bucketVersion=-1, and the ReduceSinkOperator then joins with 
> bucketVersion=-1. In the init method, the hash algorithm for the join column 
> is selected according to bucketVersion: if bucketVersion = 2 and the operation 
> is not ACID, the new hash algorithm is used; otherwise the old one is used. 
> Because the hash algorithms are inconsistent, rows are assigned to different 
> partitions, so at the reducer stage rows with the same key cannot be paired, 
> resulting in data loss.
> *Scenario 2*: create two test tables: create table 
> table_bucketversion_1(col_1 string, col_2 string) TBLPROPERTIES 
> ('bucketing_version'='1'); table_bucketversion_2(col_1 string, col_2 string) 
> TBLPROPERTIES ('bucketing_version'='2');
>  when table_bucketversion_1 is joined with table_bucketversion_2, part of the 
> result data is lost because the bucketVersion values differ.
>  
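For reference, a minimal repro sketch of Scenario 2 above (table and column 
names are taken from the description; the exact outcome depends on the Hive 
version and the number of reducers):

{code}
CREATE TABLE table_bucketversion_1 (col_1 string, col_2 string)
  TBLPROPERTIES ('bucketing_version'='1');
CREATE TABLE table_bucketversion_2 (col_1 string, col_2 string)
  TBLPROPERTIES ('bucketing_version'='2');

-- with more than 2 reducers, the two sides may hash the join key with
-- different algorithms, so matching rows can land on different reducers
SELECT v1.col_1, v2.col_2
FROM table_bucketversion_1 v1
JOIN table_bucketversion_2 v2 ON v1.col_1 = v2.col_1;
{code}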



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061185#comment-17061185
 ] 

Hive QA commented on HIVE-22990:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
46s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 806 
unchanged - 3 fixed = 806 total (was 809) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21146/dev-support/hive-personality.sh
 |
| git revision | master / 26cc315 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21146/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.patch
>
>  Time Spent: 5h 10m
>  Remaining 

[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404945=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404945
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 20:29
Start Date: 17/Mar/20 20:29
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393762700
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -436,16 +438,29 @@ public void externalTableIncrementalReplication() throws 
Throwable {
 }
 
  List<String> loadWithClause = externalTableBasePathWithClause();
-replica.load(replicatedDbName, primaryDbName, loadWithClause)
+replica.load(replicatedDbName, primaryDbName, withClause)
 .run("use " + replicatedDbName)
 .run("show tables like 't1'")
 .verifyResult("t1")
 .run("show partitions t1")
 .verifyResults(new String[] { "country=india", "country=us" })
 .run("select place from t1 order by place")
-.verifyResults(new String[] { "bangalore", "mumbai", "pune" })
+.verifyResults(new String[] {})
 
 Review comment:
  Because the copy was happening during load.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404945)
Time Spent: 4h 50m  (was: 4h 40m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404941=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404941
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 20:22
Start Date: 17/Mar/20 20:22
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #949: HIVE-22990 
Add file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393848763
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java
 ##
 @@ -429,36 +428,19 @@ private Path getCurrentLoadPath() throws IOException, 
SemanticException {
 }
 FileStatus[] statuses = 
loadPathBase.getFileSystem(conf).listStatus(loadPathBase);
 if (statuses.length > 0) {
-  //sort based on last modified. Recent one is at the end
-  Arrays.sort(statuses, new Comparator<FileStatus>() {
-public int compare(FileStatus f1, FileStatus f2) {
-  return Long.compare(f1.getModificationTime(), 
f2.getModificationTime());
+  //sort based on last modified. Recent one is at the beginning
+  FileStatus latestUpdatedStatus = statuses[0];
+  for (FileStatus status : statuses) {
+if (status.getModificationTime() > 
latestUpdatedStatus.getModificationTime()) {
+  latestUpdatedStatus = status;
 }
-  });
-  if (replScope.getDbName() != null) {
-String currentReplStatusOfTarget
-= getReplStatus(replScope.getDbName());
-if (currentReplStatusOfTarget == null) { //bootstrap
-  return new Path(statuses[0].getPath(), ReplUtils.REPL_HIVE_BASE_DIR);
-} else {
-  DumpMetaData latestDump = new DumpMetaData(
-  new Path(statuses[statuses.length - 1].getPath(), 
ReplUtils.REPL_HIVE_BASE_DIR), conf);
-  if (Long.parseLong(currentReplStatusOfTarget.trim()) >= 
latestDump.getEventTo()) {
-isTargetAlreadyLoaded = true;
-  } else {
-for (FileStatus status : statuses) {
-  Path hiveLoadPath = new Path(status.getPath(), 
ReplUtils.REPL_HIVE_BASE_DIR);
-  DumpMetaData dmd = new DumpMetaData(hiveLoadPath, conf);
-  if (dmd.isIncrementalDump()
-  && Long.parseLong(currentReplStatusOfTarget.trim()) < 
dmd.getEventTo()) {
-return hiveLoadPath;
-  }
-}
-  }
+  }
+  Path hiveDumpPath = new Path(latestUpdatedStatus.getPath(), 
ReplUtils.REPL_HIVE_BASE_DIR);
+  if (loadPathBase.getFileSystem(conf).exists(hiveDumpPath)) {
 
 Review comment:
   Why do we need this check? Aren't other two checks below good enough?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404941)
Time Spent: 5h 10m  (was: 5h)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404942=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404942
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 20:22
Start Date: 17/Mar/20 20:22
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #949: HIVE-22990 
Add file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393945947
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -398,6 +319,21 @@ private void dropTablesExcludedInReplScope(ReplScope 
replScope) throws HiveExcep
 dbName);
   }
 
+  private void createReplLoadCompleteAckTask() {
+if ((work.isIncrementalLoad() && 
!work.incrementalLoadTasksBuilder().hasMoreWork() && 
!work.hasBootstrapLoadTasks())
 
 Review comment:
   During incremental, when we have no events in dump and only external tables 
are present, will the event folder still be created? If not, the check 
work.incrementalLoadTasksBuilder().hasMoreWork() might not suffice.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404942)
Time Spent: 5h 10m  (was: 5h)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061165#comment-17061165
 ] 

Hive QA commented on HIVE-22888:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996915/HIVE-22888.11.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18108 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21145/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21145/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21145/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996915 - PreCommit-HIVE-Build

> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.10.patch, 
> HIVE-22888.11.patch, HIVE-22888.2.patch, HIVE-22888.3.patch, 
> HIVE-22888.4.patch, HIVE-22888.5.patch, HIVE-22888.6.patch, 
> HIVE-22888.8.patch, HIVE-22888.9.patch, acid-lock-perf-test.pdf
>
>
> - Created an extra (db, tbl, part) index on the HIVE_LOCKS table;
> - Replaced the inner select under checkLocks, which used multiple IN 
> statements, with a JOIN operator; 
> the generated query looks like:
> {code}
> SELECT LS.* FROM (
>   SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, HL_LOCK_TYPE
>   FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 333) LS
> INNER JOIN (
>   SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE
>   FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 333) LBC
> ON LS.HL_DB = LBC.HL_DB
>   AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = LBC.HL_TABLE
>     AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR LS.HL_PARTITION = LBC.HL_PARTITION))
> WHERE (LBC.HL_TXNID = 0 OR LS.HL_TXNID != LBC.HL_TXNID)
>   AND (LBC.HL_LOCK_TYPE='e'
>     AND !(LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND LBC.HL_TABLE IS NOT NULL)
>   OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
>   OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
>     AND !(LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
> LIMIT 1;
> {code}
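
For illustration, the extra (db, tbl, part) index on the backing metastore 
RDBMS could look roughly like the following (the index name is an assumption; 
the exact DDL lives in the metastore schema scripts):

{code}
-- illustrative sketch only; the column set follows the (db, tbl, part) note above
CREATE INDEX HIVE_LOCKS_DTP_IDX ON HIVE_LOCKS (HL_DB, HL_TABLE, HL_PARTITION);
{code}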



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061130#comment-17061130
 ] 

Hive QA commented on HIVE-22888:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
18s{color} | {color:blue} standalone-metastore/metastore-server in master has 
186 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 12 new + 564 unchanged - 38 fixed = 576 total (was 602) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 185 unchanged - 1 fixed = 186 total (was 186) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  org.apache.hadoop.hive.metastore.txn.TxnHandler.checkLock(Connection, 
long, long) passes a nonconstant String to an execute or addBatch method on an 
SQL statement  At TxnHandler.java:nonconstant String to an execute or addBatch 
method on an SQL statement  At TxnHandler.java:[line 4435] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21145/dev-support/hive-personality.sh
 |
| git revision | master / 26cc315 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21145/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21145/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21145/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.10.patch, 
> HIVE-22888.11.patch, HIVE-22888.2.patch, HIVE-22888.3.patch, 
> HIVE-22888.4.patch, HIVE-22888.5.patch, HIVE-22888.6.patch, 
> HIVE-22888.8.patch, HIVE-22888.9.patch, acid-lock-perf-test.pdf
>
>
> - Created extra (db, tbl, part) index on HIVE_LOCKS table;
> - Replaced inner select under checkLocks using multiple 

[jira] [Commented] (HIVE-22995) Add support for location for managed tables on database

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061109#comment-17061109
 ] 

Hive QA commented on HIVE-22995:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996903/HIVE-22995.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 154 failed/errored test(s), 18112 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key2]
 (batchId=298)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=298)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=298)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=298)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=306)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_database]
 (batchId=309)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_change_db_location]
 (batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_db_owner] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_owner_actions_db]
 (batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[database_location] 
(batchId=102)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[database_properties] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[db_ddl_explain] 
(batchId=101)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[describe_database] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_merge] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[escape_comments] 
(batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exim_10_external_managed]
 (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[partition_discovery] 
(batchId=27)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[unicode_comments] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join_constants]
 (batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_distinct_gby] 
(batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[view_alias] (batchId=97)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_binary_storage_queries]
 (batchId=113)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_custom_key3] 
(batchId=110)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_custom_key] 
(batchId=110)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_handler_bulk] 
(batchId=114)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_joins] 
(batchId=115)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_ppd_join] 
(batchId=112)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_ppd_key_range]
 (batchId=111)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_pushdown] 
(batchId=111)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_queries] 
(batchId=112)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_scan_params] 
(batchId=115)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_single_sourced_multi_insert]
 (batchId=114)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_timestamp] 
(batchId=113)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_timestamp_format]
 (batchId=114)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbasestats] 
(batchId=111)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[ppd_key_ranges] 
(batchId=110)
org.apache.hadoop.hive.cli.TestKuduCliDriver.testCliDriver[kudu_queries] 
(batchId=297)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions]
 (batchId=205)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_alter]
 (batchId=205)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_insert]
 (batchId=205)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic]
 (batchId=306)
org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
 (batchId=306)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[whroot_external1]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[convert_decimal64_to_decimal]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[csv_llap] 
(batchId=182)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction_4]
 (batchId=175)

[jira] [Updated] (HIVE-23011) Shared work optimizer should check residual predicates when comparing joins

2020-03-17 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-23011:
---
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Shared work optimizer should check residual predicates when comparing joins
> ---
>
> Key: HIVE-23011
> URL: https://issues.apache.org/jira/browse/HIVE-23011
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23011.patch, HIVE-23011.patch, HIVE-23011.patch, 
> HIVE-23011.patch, HIVE-23011.patch, HIVE-23011.patch, HIVE-23011.patch, 
> HIVE-23011.patch, HIVE-23011.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22995) Add support for location for managed tables on database

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061071#comment-17061071
 ] 

Hive QA commented on HIVE-22995:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 14s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-21144/patches/PreCommit-HIVE-Build-21144.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21144/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add support for location for managed tables on database
> ---
>
> Key: HIVE-22995
> URL: https://issues.apache.org/jira/browse/HIVE-22995
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22995.1.patch, HIVE-22995.2.patch, Hive Metastore 
> Support for Tenant-based storage heirarchy.pdf
>
>
> I have attached the initial spec to this jira.
> The default location for a database would be the external table base directory. 
> A managed location can optionally be specified.
> {code}
> CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
>   [COMMENT database_comment]
>   [LOCATION hdfs_path]
>   [MANAGEDLOCATION hdfs_path]
>   [WITH DBPROPERTIES (property_name=property_value, ...)];
> ALTER (DATABASE|SCHEMA) database_name SET MANAGEDLOCATION hdfs_path;
> {code}
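
A usage sketch of the proposed syntax (database name, paths, and properties 
are placeholders):

{code}
CREATE DATABASE IF NOT EXISTS sales
  COMMENT 'tenant database'
  LOCATION '/warehouse/external/sales.db'
  MANAGEDLOCATION '/warehouse/managed/sales.db'
  WITH DBPROPERTIES ('tenant'='sales');

ALTER DATABASE sales SET MANAGEDLOCATION '/warehouse/managed/sales_v2.db';
{code}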



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22940) Make the datasketches functions available as predefined functions

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061067#comment-17061067
 ] 

Hive QA commented on HIVE-22940:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996896/HIVE-22940.04.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 18099 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.org.apache.hadoop.hive.cli.TestAccumuloCliDriver
 (batchId=298)
org.apache.hadoop.hive.cli.TestKuduCliDriver.org.apache.hadoop.hive.cli.TestKuduCliDriver
 (batchId=297)
org.apache.hadoop.hive.cli.TestKuduNegativeCliDriver.org.apache.hadoop.hive.cli.TestKuduNegativeCliDriver
 (batchId=297)
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark
 (batchId=295)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21143/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21143/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21143/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996896 - PreCommit-HIVE-Build

> Make the datasketches functions available as predefined functions 
> --
>
> Key: HIVE-22940
> URL: https://issues.apache.org/jira/browse/HIVE-22940
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22940.01.patch, HIVE-22940.02.patch, 
> HIVE-22940.03.patch, HIVE-22940.04.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22940) Make the datasketches functions available as predefined functions

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061058#comment-17061058
 ] 

Hive QA commented on HIVE-22940:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
45s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} ql: The patch generated 0 new + 82 unchanged - 2 
fixed = 82 total (was 84) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} root: The patch generated 0 new + 82 unchanged - 2 
fixed = 82 total (was 84) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21143/dev-support/hive-personality.sh
 |
| git revision | master / 4daa57c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21143/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Make the datasketches functions available as predefined functions 
> --
>
> Key: HIVE-22940
> URL: https://issues.apache.org/jira/browse/HIVE-22940
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22940.01.patch, HIVE-22940.02.patch, 
> HIVE-22940.03.patch, HIVE-22940.04.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Attachment: HIVE-22990.23.patch
Status: Patch Available  (was: In Progress)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.patch
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Status: In Progress  (was: Patch Available)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.23.patch, 
> HIVE-22990.patch
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404840=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404840
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 16:25
Start Date: 17/Mar/20 16:25
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #949: HIVE-22990 Add 
file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393806962
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -398,6 +319,21 @@ private void dropTablesExcludedInReplScope(ReplScope 
replScope) throws HiveExcep
 dbName);
   }
 
+  private void createReplLoadCompleteAckTask() {
+if ((work.isIncrementalLoad() && 
!work.incrementalLoadTasksBuilder().hasMoreWork() && 
!work.hasBootstrapLoadTasks())
+|| (!work.isIncrementalLoad() && !work.hasBootstrapLoadTasks())) {
+  //All repl load tasks are executed and status is 0, create the task to 
add the acknowledgement
+  ReplLoadCompleteAckWork replLoadCompleteAckWork = new 
ReplLoadCompleteAckWork(work.dumpDirectory);
+  Task loadCompleteAckWorkTask = 
TaskFactory.get(replLoadCompleteAckWork, conf);
+  if (this.childTasks.isEmpty()) {
 
 Review comment:
  No. It's initialised at line 440 for incremental and line 86 for bootstrap, before 
this method is called.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404840)
Time Spent: 5h  (was: 4h 50m)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23037) Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus

2020-03-17 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23037:
--
Status: Patch Available  (was: Open)

> Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus
> -
>
> Key: HIVE-23037
> URL: https://issues.apache.org/jira/browse/HIVE-23037
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23037.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23037) Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus

2020-03-17 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23037:
--
Attachment: HIVE-23037.1.patch

> Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus
> -
>
> Key: HIVE-23037
> URL: https://issues.apache.org/jira/browse/HIVE-23037
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23037.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23037) Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus

2020-03-17 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor reassigned HIVE-23037:
-


> Print Logging Information for Exception in AcidUtils tryListLocatedHdfsStatus
> -
>
> Key: HIVE-23037
> URL: https://issues.apache.org/jira/browse/HIVE-23037
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23037.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404838=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404838
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 16:21
Start Date: 17/Mar/20 16:21
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #949: HIVE-22990 Add 
file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393803832
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
 ##
 @@ -146,7 +146,7 @@ public int execute() {
 }
 prepareReturnValues(Arrays.asList(currentDumpPath.toUri().toString(), 
String.valueOf(lastReplId)));
 writeDumpCompleteAck(hiveDumpRoot);
-deletePreviousDumpMeta(previousDumpMetaPath);
+deleteAllPreviousDumpMeta(dumpRoot, currentDumpPath);
 
 Review comment:
  status is a cached copy, so there won't be a problem.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404838)
Time Spent: 4h 40m  (was: 4.5h)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404839=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404839
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 16:22
Start Date: 17/Mar/20 16:22
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #949: HIVE-22990 Add 
file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393804594
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -286,6 +277,7 @@ a database ( directory )
 
   // Populate the driver context with the scratch dir info from the repl 
context, so that the temp dirs will be cleaned up later
   
context.getFsScratchDirs().putAll(loadContext.pathInfo.getFsScratchDirs());
+  createReplLoadCompleteAckTask();
 
 Review comment:
  The ack task will be added from executeBootStrapLoad. We need this ack to be 
added at the end, before we return a status.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404839)
Time Spent: 4h 50m  (was: 4h 40m)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23035) Scheduled query executor may hang in case TezAMs are launched on-demand

2020-03-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-23035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061018#comment-17061018
 ] 

László Bodor commented on HIVE-23035:
-

+1, pending tests


> Scheduled query executor may hang in case TezAMs are launched on-demand
> ---
>
> Key: HIVE-23035
> URL: https://issues.apache.org/jira/browse/HIVE-23035
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23035.01.patch
>
>
> Right now the scheduled query executor hangs during session initialization, 
> because it tries to open the Tez session while it initializes the SessionState.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404820=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404820
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 15:56
Start Date: 17/Mar/20 15:56
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #949: HIVE-22990 
Add file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393780241
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -286,6 +277,7 @@ a database ( directory )
 
   // Populate the driver context with the scratch dir info from the repl 
context, so that the temp dirs will be cleaned up later
   
context.getFsScratchDirs().putAll(loadContext.pathInfo.getFsScratchDirs());
+  createReplLoadCompleteAckTask();
 
 Review comment:
  1) During incremental, when there is only a bootstrap task and no events, execution 
might not reach here; it will return earlier, from Line 436: return 
executeBootStrapLoad();
   2) How does it make sure that the ack task is added after the Repl event ID 
update task?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404820)
Time Spent: 4h 20m  (was: 4h 10m)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404821=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404821
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 15:56
Start Date: 17/Mar/20 15:56
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #949: HIVE-22990 
Add file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393782909
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
 ##
 @@ -312,7 +312,22 @@ public void testBasic() throws IOException {
 verifySetup("SELECT * from " + dbName + ".unptned_empty", empty, driver);
 
 String replicatedDbName = dbName + "_dupe";
-bootstrapLoadAndVerify(dbName, replicatedDbName);
+Tuple bootstrapDump = bootstrapLoadAndVerify(dbName, replicatedDbName);
+
+FileSystem fs = new Path(bootstrapDump.dumpLocation).getFileSystem(hconf);
+Path dumpPath = new Path(bootstrapDump.dumpLocation, 
ReplUtils.REPL_HIVE_BASE_DIR);
+boolean dumpAckFound = false;
+boolean loadAckFound = false;
+for (FileStatus status : fs.listStatus(dumpPath)) {
+  if 
(status.getPath().getName().equalsIgnoreCase(ReplUtils.DUMP_ACKNOWLEDGEMENT)) {
+dumpAckFound = true;
+  }
+  if 
(status.getPath().getName().equalsIgnoreCase(ReplUtils.LOAD_ACKNOWLEDGEMENT)) {
+loadAckFound = true;
+  }
+}
+assertTrue(dumpAckFound);
+assertTrue(loadAckFound);
 
 Review comment:
   Why not just use:
   assertTrue(fs.exists(new Path(dumpPath, ReplUtils.DUMP_ACKNOWLEDGEMENT)));
   assertTrue(fs.exists(new Path(dumpPath, ReplUtils.LOAD_ACKNOWLEDGEMENT)));
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404821)
Time Spent: 4.5h  (was: 4h 20m)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404822=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404822
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 15:56
Start Date: 17/Mar/20 15:56
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #949: HIVE-22990 
Add file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393783338
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
 ##
 @@ -838,7 +859,26 @@ public void testIncrementalAdds() throws IOException {
 verifySetup("SELECT a from " + dbName + ".ptned_late WHERE b=2", ptn_data_2, driver);
 
 // Perform REPL-DUMP/LOAD
-incrementalLoadAndVerify(dbName, replDbName);
+Tuple incrementalDump = incrementalLoadAndVerify(dbName, replDbName);
+FileSystem fs = new Path(bootstrapDump.dumpLocation).getFileSystem(hconf);
+boolean dumpAckFound = false;
+boolean loadAckFound = false;
+assertFalse(fs.exists(new Path(bootstrapDump.dumpLocation)));
+fs = new Path(incrementalDump.dumpLocation).getFileSystem(hconf);
+Path dumpPath = new Path(incrementalDump.dumpLocation, ReplUtils.REPL_HIVE_BASE_DIR);
+dumpAckFound = false;
+loadAckFound = false;
+for (FileStatus status : fs.listStatus(dumpPath)) {
+  if (status.getPath().getName().equalsIgnoreCase(ReplUtils.DUMP_ACKNOWLEDGEMENT)) {
+    dumpAckFound = true;
+  }
+  if (status.getPath().getName().equalsIgnoreCase(ReplUtils.LOAD_ACKNOWLEDGEMENT)) {
+    loadAckFound = true;
+  }
+}
+
+assertTrue(dumpAckFound);
+assertTrue(loadAckFound);
 
 Review comment:
   Why not just use:
   assertTrue(fs.exists(new Path(dumpPath, ReplUtils.DUMP_ACKNOWLEDGEMENT)));
   assertTrue(fs.exists(new Path(dumpPath, ReplUtils.LOAD_ACKNOWLEDGEMENT)));
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404822)
Time Spent: 4.5h  (was: 4h 20m)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?focusedWorklogId=404819=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404819
 ]

ASF GitHub Bot logged work on HIVE-22990:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 15:56
Start Date: 17/Mar/20 15:56
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #949: HIVE-22990 
Add file based ack for replication
URL: https://github.com/apache/hive/pull/949#discussion_r393780762
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -398,6 +319,21 @@ private void dropTablesExcludedInReplScope(ReplScope replScope) throws HiveExcep
 dbName);
   }
 
+  private void createReplLoadCompleteAckTask() {
+    if ((work.isIncrementalLoad() && !work.incrementalLoadTasksBuilder().hasMoreWork() && !work.hasBootstrapLoadTasks())
+        || (!work.isIncrementalLoad() && !work.hasBootstrapLoadTasks())) {
+      //All repl load tasks are executed and status is 0, create the task to add the acknowledgement
+      ReplLoadCompleteAckWork replLoadCompleteAckWork = new ReplLoadCompleteAckWork(work.dumpDirectory);
+      Task loadCompleteAckWorkTask = TaskFactory.get(replLoadCompleteAckWork, conf);
+      if (this.childTasks.isEmpty()) {
 
 Review comment:
   Isn't childTasks null sometimes?
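   For illustration, a hedged sketch of the null guard this question is pointing at; the field and variable names mirror the diff above, but the stand-in types are hypothetical and this is not the patch author's actual fix.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in: in ReplLoadTask the childTasks list can be null, not just empty.
class ReplLoadAckSchedulingSketch {
  private List<Object> childTasks;

  void addAckTaskIfLeaf(Object loadCompleteAckWorkTask) {
    if (childTasks == null) {
      childTasks = new ArrayList<>();
    }
    if (childTasks.isEmpty()) {
      childTasks.add(loadCompleteAckWorkTask);
    }
    // else: the ack task would have to be chained after the existing leaf tasks,
    // which is omitted here because it depends on Hive's task-DAG utilities.
  }
}
{code}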
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404819)
Time Spent: 4h 10m  (was: 4h)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061008#comment-17061008
 ] 

Hive QA commented on HIVE-22997:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996894/HIVE-22997.8.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18108 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schq_materialized]
 (batchId=184)
org.apache.hadoop.hive.ql.parse.TestScheduledReplicationScenarios.testExternalTablesReplLoadBootstrapIncr
 (batchId=270)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21142/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21142/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21142/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996894 - PreCommit-HIVE-Build

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-03-17 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061002#comment-17061002
 ] 

Denys Kuzmenko edited comment on HIVE-22888 at 3/17/20, 3:35 PM:
-

MySql: v5.7.23, v5.1.46
Postgres: v9.3,
Oracle: XE 11g
MsSql: 2017-GA

{code}
(select  EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT 
HL_LOCK_EXT_ID, HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_STATE, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX 

INNER JOIN ( SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ 

ON EX.HL_DB = REQ.HL_DB AND (EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR 
EX.HL_TABLE = REQ.HL_TABLE AND (EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS 
NULL OR EX.HL_PARTITION = REQ.HL_PARTITION)) 

WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != REQ.HL_TXNID) AND  
REQ.HL_LOCK_TYPE='e' AND NOT (EX.HL_TABLE IS NULL AND EX.HL_LOCK_TYPE='r' AND 
REQ.HL_TABLE IS NOT NULL) limit 1) 

UNION ALL 

(select  EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT 
HL_LOCK_EXT_ID, HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_STATE, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX 

INNER JOIN ( SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ 

ON EX.HL_DB = REQ.HL_DB AND (EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR 
EX.HL_TABLE = REQ.HL_TABLE AND (EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS 
NULL OR EX.HL_PARTITION = REQ.HL_PARTITION)) 

WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != REQ.HL_TXNID) AND  
REQ.HL_LOCK_TYPE='w' AND EX.HL_LOCK_TYPE IN ('w','e') limit 1) 

UNION ALL 

(select  EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT 
HL_LOCK_EXT_ID, HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_STATE, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX 

INNER JOIN ( SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ 

ON EX.HL_DB = REQ.HL_DB AND (EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR 
EX.HL_TABLE = REQ.HL_TABLE AND (EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS 
NULL OR EX.HL_PARTITION = REQ.HL_PARTITION)) 

WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != REQ.HL_TXNID) AND  
REQ.HL_LOCK_TYPE='r' AND EX.HL_LOCK_TYPE='e' AND NOT (EX.HL_TABLE IS NOT NULL 
AND REQ.HL_TABLE IS NULL) limit 1)
{code}


was (Author: dkuzmenko):
MySql: v5.7.23, v5.1.46
Postgres: v9.3,
Oracle: XE 11g
MsSql: 2017-GA

{code}
(select  EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT 
HL_LOCK_EXT_ID, HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_STATE, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX 
INNER JOIN ( SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ ON EX.HL_DB = 
REQ.HL_DB AND (EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR EX.HL_TABLE = 
REQ.HL_TABLE AND (EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS NULL OR 
EX.HL_PARTITION = REQ.HL_PARTITION)) WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != 
REQ.HL_TXNID) AND  REQ.HL_LOCK_TYPE='e' AND NOT (EX.HL_TABLE IS NULL AND 
EX.HL_LOCK_TYPE='r' AND REQ.HL_TABLE IS NOT NULL) limit 1) UNION ALL (select  
EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT HL_LOCK_EXT_ID, 
HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX INNER JOIN ( 
SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE 
FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ ON EX.HL_DB = REQ.HL_DB AND 
(EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR EX.HL_TABLE = REQ.HL_TABLE AND 
(EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS NULL OR EX.HL_PARTITION = 
REQ.HL_PARTITION)) WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != REQ.HL_TXNID) AND  
REQ.HL_LOCK_TYPE='w' AND EX.HL_LOCK_TYPE IN ('w','e') limit 1) UNION ALL 
(select  EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT 
HL_LOCK_EXT_ID, HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_STATE, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX 
INNER JOIN ( SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ ON EX.HL_DB = 
REQ.HL_DB AND (EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR EX.HL_TABLE = 
REQ.HL_TABLE AND (EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS NULL OR 
EX.HL_PARTITION = REQ.HL_PARTITION)) WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != 
REQ.HL_TXNID) AND  REQ.HL_LOCK_TYPE='r' AND EX.HL_LOCK_TYPE='e' AND NOT 
(EX.HL_TABLE IS NOT NULL AND REQ.HL_TABLE IS NULL) limit 1)
{code}

> Rewrite checkLock inner select with JOIN operator
> -
>
>   

[jira] [Commented] (HIVE-22888) Rewrite checkLock inner select with JOIN operator

2020-03-17 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17061002#comment-17061002
 ] 

Denys Kuzmenko commented on HIVE-22888:
---

MySql: v5.7.23, v5.1.46
Postgres: v9.3,
Oracle: XE 11g
MsSql: 2017-GA

{code}
(select  EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT 
HL_LOCK_EXT_ID, HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_STATE, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX 
INNER JOIN ( SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ ON EX.HL_DB = 
REQ.HL_DB AND (EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR EX.HL_TABLE = 
REQ.HL_TABLE AND (EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS NULL OR 
EX.HL_PARTITION = REQ.HL_PARTITION)) WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != 
REQ.HL_TXNID) AND  REQ.HL_LOCK_TYPE='e' AND NOT (EX.HL_TABLE IS NULL AND 
EX.HL_LOCK_TYPE='r' AND REQ.HL_TABLE IS NOT NULL) limit 1) UNION ALL (select  
EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT HL_LOCK_EXT_ID, 
HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX INNER JOIN ( 
SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE 
FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ ON EX.HL_DB = REQ.HL_DB AND 
(EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR EX.HL_TABLE = REQ.HL_TABLE AND 
(EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS NULL OR EX.HL_PARTITION = 
REQ.HL_PARTITION)) WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != REQ.HL_TXNID) AND  
REQ.HL_LOCK_TYPE='w' AND EX.HL_LOCK_TYPE IN ('w','e') limit 1) UNION ALL 
(select  EX.*, REQ.HL_LOCK_INT_ID AS REQ_LOCK_INT_ID FROM ( SELECT 
HL_LOCK_EXT_ID, HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_STATE, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID < 78588) EX 
INNER JOIN ( SELECT HL_LOCK_INT_ID, HL_TXNID, HL_DB, HL_TABLE, HL_PARTITION, 
HL_LOCK_TYPE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 78588) REQ ON EX.HL_DB = 
REQ.HL_DB AND (EX.HL_TABLE IS NULL OR REQ.HL_TABLE IS NULL OR EX.HL_TABLE = 
REQ.HL_TABLE AND (EX.HL_PARTITION IS NULL OR REQ.HL_PARTITION IS NULL OR 
EX.HL_PARTITION = REQ.HL_PARTITION)) WHERE (REQ.HL_TXNID = 0 OR EX.HL_TXNID != 
REQ.HL_TXNID) AND  REQ.HL_LOCK_TYPE='r' AND EX.HL_LOCK_TYPE='e' AND NOT 
(EX.HL_TABLE IS NOT NULL AND REQ.HL_TABLE IS NULL) limit 1)
{code}

> Rewrite checkLock inner select with JOIN operator
> -
>
> Key: HIVE-22888
> URL: https://issues.apache.org/jira/browse/HIVE-22888
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22888.1.patch, HIVE-22888.10.patch, 
> HIVE-22888.11.patch, HIVE-22888.2.patch, HIVE-22888.3.patch, 
> HIVE-22888.4.patch, HIVE-22888.5.patch, HIVE-22888.6.patch, 
> HIVE-22888.8.patch, HIVE-22888.9.patch, acid-lock-perf-test.pdf
>
>
> - Created extra (db, tbl, part) index on HIVE_LOCKS table;
> - Replaced inner select under checkLocks using multiple IN statements with 
> JOIN operator; 
> generated query looks like:
> {code}
> SELECT LS.* FROM (
> SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE, 
> HL_LOCK_TYPE FROM HIVE_LOCKS
> WHERE HL_LOCK_EXT_ID < 333) LS
> INNER JOIN (
> SELECT HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_TYPE FROM HIVE_LOCKS WHERE 
> HL_LOCK_EXT_ID = 333) LBC
> ON LS.HL_DB = LBC.HL_DB
> AND (LS.HL_TABLE IS NULL OR LBC.HL_TABLE IS NULL OR LS.HL_TABLE = 
> LBC.HL_TABLE
> AND (LS.HL_PARTITION IS NULL OR LBC.HL_PARTITION IS NULL OR 
> LS.HL_PARTITION = LBC.HL_PARTITION))
> WHERE  (LBC.HL_TXNID = 0 OR LS.HL_TXNID != LBC.HL_TXNID) 
> AND (LBC.HL_LOCK_TYPE='e'
>AND !(LS.HL_TABLE IS NULL AND LS.HL_LOCK_TYPE='r' AND LBC.HL_TABLE 
> IS NOT NULL )
> OR LBC.HL_LOCK_TYPE='w' AND LS.HL_LOCK_TYPE IN ('w','e')
> OR LBC.HL_LOCK_TYPE='r' AND LS.HL_LOCK_TYPE='e'
>AND !(LS.HL_TABLE IS NOT NULL AND LBC.HL_TABLE IS NULL))
> LIMIT 1;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404796=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404796
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 15:31
Start Date: 17/Mar/20 15:31
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393723769
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestTableLevelReplicationScenarios.java
 ##
 @@ -918,7 +922,9 @@ public void testRenameTableScenariosWithReplaceExternalTable() throws Throwable
 String newPolicy = primaryDbName + ".'(in[0-9]+)|(out1500)|(in2)'";
 dumpWithClause = Arrays.asList(
 "'" + HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname + "'='true'",
-"'" + HiveConf.ConfVars.REPL_BOOTSTRAP_EXTERNAL_TABLES.varname + "'='false'"
+"'" + HiveConf.ConfVars.REPL_BOOTSTRAP_EXTERNAL_TABLES.varname + "'='false'",
 
 Review comment:
   Missed it here, will fix
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404796)
Time Spent: 4h 40m  (was: 4.5h)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404793=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404793
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 15:28
Start Date: 17/Mar/20 15:28
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393723769
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestTableLevelReplicationScenarios.java
 ##
 @@ -918,7 +922,9 @@ public void testRenameTableScenariosWithReplaceExternalTable() throws Throwable
 String newPolicy = primaryDbName + ".'(in[0-9]+)|(out1500)|(in2)'";
 dumpWithClause = Arrays.asList(
 "'" + HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname + "'='true'",
-"'" + HiveConf.ConfVars.REPL_BOOTSTRAP_EXTERNAL_TABLES.varname + "'='false'"
+"'" + HiveConf.ConfVars.REPL_BOOTSTRAP_EXTERNAL_TABLES.varname + "'='false'",
 
 Review comment:
   Yes, I was not planning to refactor all the places.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404793)
Time Spent: 4.5h  (was: 4h 20m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404790=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404790
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 15:26
Start Date: 17/Mar/20 15:26
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393762700
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -436,16 +438,29 @@ public void externalTableIncrementalReplication() throws Throwable {
 }
 
 List loadWithClause = externalTableBasePathWithClause();
-replica.load(replicatedDbName, primaryDbName, loadWithClause)
+replica.load(replicatedDbName, primaryDbName, withClause)
 .run("use " + replicatedDbName)
 .run("show tables like 't1'")
 .verifyResult("t1")
 .run("show partitions t1")
 .verifyResults(new String[] { "country=india", "country=us" })
 .run("select place from t1 order by place")
-.verifyResults(new String[] { "bangalore", "mumbai", "pune" })
+.verifyResults(new String[] {})
 
 Review comment:
   Because copy is happening during load.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404790)
Time Spent: 4h 20m  (was: 4h 10m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Attachment: HIVE-23004.8.patch
Status: Patch Available  (was: Open)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-17 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Status: Open  (was: Patch Available)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.2.patch, 
> HIVE-23004.4.patch, HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch
>
>
> Support Decimal64 operations across multiple vertices



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: HIVE-22997.9.patch

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060976#comment-17060976
 ] 

Hive QA commented on HIVE-22997:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
39s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} ql: The patch generated 0 new + 76 unchanged - 1 
fixed = 76 total (was 77) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 649 
unchanged - 1 fixed = 649 total (was 650) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
54s{color} | {color:red} ql generated 1 new + 1530 unchanged - 1 fixed = 1531 
total (was 1531) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  org.apache.hadoop.hive.ql.exec.repl.ReplDumpWork is Serializable; 
consider declaring a serialVersionUID  At ReplDumpWork.java:a serialVersionUID  
At ReplDumpWork.java:[lines 39-119] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21142/dev-support/hive-personality.sh
 |
| git revision | master / 4daa57c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21142/yetus/new-findbugs-ql.html
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21142/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> 

[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404755=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404755
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:34
Start Date: 17/Mar/20 14:34
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393724555
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOnHDFSEncryptedZones.java
 ##
 @@ -102,12 +104,20 @@ public void targetAndSourceHaveDifferentEncryptionZoneKeys() throws Throwable {
   put(HiveConf.ConfVars.REPLDIR.varname, primary.repldDir);
 }}, "test_key123");
 
+List dumpWithClause = Arrays.asList(
+    "'hive.repl.add.raw.reserved.namespace'='true'",
+    "'" + HiveConf.ConfVars.REPL_EXTERNAL_TABLE_BASE_DIR.varname + "'='"
+        + replica.externalTableWarehouseRoot + "'",
+    "'distcp.options.skipcrccheck'=''",
 
 Review comment:
   That was part of the existing test; I have used the same set of configs at dump time.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404755)
Time Spent: 4h 10m  (was: 4h)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404754=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404754
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:33
Start Date: 17/Mar/20 14:33
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393723769
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestTableLevelReplicationScenarios.java
 ##
 @@ -918,7 +922,9 @@ public void testRenameTableScenariosWithReplaceExternalTable() throws Throwable
 String newPolicy = primaryDbName + ".'(in[0-9]+)|(out1500)|(in2)'";
 dumpWithClause = Arrays.asList(
 "'" + HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname + "'='true'",
-"'" + HiveConf.ConfVars.REPL_BOOTSTRAP_EXTERNAL_TABLES.varname + "'='false'"
+"'" + HiveConf.ConfVars.REPL_BOOTSTRAP_EXTERNAL_TABLES.varname + "'='false'",
 
 Review comment:
   Yes, I am not planning to refactor all the places.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404754)
Time Spent: 4h  (was: 3h 50m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404753=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404753
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:32
Start Date: 17/Mar/20 14:32
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393723340
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -436,16 +438,29 @@ public void externalTableIncrementalReplication() throws Throwable {
 }
 
 List loadWithClause = externalTableBasePathWithClause();
-replica.load(replicatedDbName, primaryDbName, loadWithClause)
+replica.load(replicatedDbName, primaryDbName, withClause)
 .run("use " + replicatedDbName)
 .run("show tables like 't1'")
 .verifyResult("t1")
 .run("show partitions t1")
 .verifyResults(new String[] { "country=india", "country=us" })
 .run("select place from t1 order by place")
-.verifyResults(new String[] { "bangalore", "mumbai", "pune" })
+.verifyResults(new String[] {})
 
 Review comment:
   The copy was happening in repl load, change was prior to that.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404753)
Time Spent: 3h 50m  (was: 3h 40m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404748=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404748
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:29
Start Date: 17/Mar/20 14:29
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #951: HIVE-22997 : Copy 
external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393660435
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -436,16 +436,29 @@ public void externalTableIncrementalReplication() throws Throwable {
 }
 
 List loadWithClause = externalTableBasePathWithClause();
-replica.load(replicatedDbName, primaryDbName, loadWithClause)
+replica.load(replicatedDbName, primaryDbName, withClause)
 .run("use " + replicatedDbName)
 .run("show tables like 't1'")
 .verifyResult("t1")
 .run("show partitions t1")
 .verifyResults(new String[] { "country=india", "country=us" })
 .run("select place from t1 order by place")
-.verifyResults(new String[] { "bangalore", "mumbai", "pune" })
 
 Review comment:
   How was it getting loaded here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404748)
Time Spent: 3h 40m  (was: 3.5h)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404746=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404746
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:29
Start Date: 17/Mar/20 14:29
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #951: HIVE-22997 : Copy 
external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393658865
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOnHDFSEncryptedZones.java
 ##
 @@ -102,12 +104,20 @@ public void targetAndSourceHaveDifferentEncryptionZoneKeys() throws Throwable {
   put(HiveConf.ConfVars.REPLDIR.varname, primary.repldDir);
 }}, "test_key123");
 
+List dumpWithClause = Arrays.asList(
+    "'hive.repl.add.raw.reserved.namespace'='true'",
+    "'" + HiveConf.ConfVars.REPL_EXTERNAL_TABLE_BASE_DIR.varname + "'='"
+        + replica.externalTableWarehouseRoot + "'",
+    "'distcp.options.skipcrccheck'=''",
 
 Review comment:
   Why do we need these extra configs?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404746)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404745=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404745
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:29
Start Date: 17/Mar/20 14:29
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #951: HIVE-22997 : Copy 
external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393660304
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -436,16 +438,29 @@ public void externalTableIncrementalReplication() throws Throwable {
 }
 
 List loadWithClause = externalTableBasePathWithClause();
-replica.load(replicatedDbName, primaryDbName, loadWithClause)
+replica.load(replicatedDbName, primaryDbName, withClause)
 .run("use " + replicatedDbName)
 .run("show tables like 't1'")
 .verifyResult("t1")
 .run("show partitions t1")
 .verifyResults(new String[] { "country=india", "country=us" })
 .run("select place from t1 order by place")
-.verifyResults(new String[] { "bangalore", "mumbai", "pune" })
+.verifyResults(new String[] {})
 
 Review comment:
   How was this getting loaded in the older scenario?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404745)
Time Spent: 3h 20m  (was: 3h 10m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404747=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404747
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:29
Start Date: 17/Mar/20 14:29
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #951: HIVE-22997 : Copy 
external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393660921
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -713,9 +732,11 @@ public void testExternalTableDataPath() throws Exception {
 
   @Test
   public void testExternalTablesIncReplicationWithConcurrentDropTable() throws Throwable {
-List dumpWithClause = Collections.singletonList(
-    "'" + HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname + "'='true'"
-);
+List dumpWithClause = Arrays.asList(
 
 Review comment:
   The new method is still not used here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404747)
Time Spent: 3.5h  (was: 3h 20m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=404749=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404749
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:29
Start Date: 17/Mar/20 14:29
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #951: HIVE-22997 : Copy 
external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r393662263
 
 

 ##
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestTableLevelReplicationScenarios.java
 ##
 @@ -918,7 +922,9 @@ public void testRenameTableScenariosWithReplaceExternalTable() throws Throwable
 String newPolicy = primaryDbName + ".'(in[0-9]+)|(out1500)|(in2)'";
 dumpWithClause = Arrays.asList(
 "'" + HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname + "'='true'",
-"'" + HiveConf.ConfVars.REPL_BOOTSTRAP_EXTERNAL_TABLES.varname + "'='false'"
+"'" + HiveConf.ConfVars.REPL_BOOTSTRAP_EXTERNAL_TABLES.varname + "'='false'",
 
 Review comment:
   It's still using its own configs. Is this done?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404749)
Time Spent: 3h 40m  (was: 3.5h)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Attachment: HIVE-22990.22.patch
Status: Patch Available  (was: In Progress)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Status: In Progress  (was: Patch Available)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.22.patch, HIVE-22990.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23033) MSSQL metastore schema init script doesn't initialize NOTIFICATION_SEQUENCE

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060946#comment-17060946
 ] 

Hive QA commented on HIVE-23033:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996859/HIVE-23033.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18107 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21141/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21141/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21141/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996859 - PreCommit-HIVE-Build

> MSSQL metastore schema init script doesn't initialize NOTIFICATION_SEQUENCE
> ---
>
> Key: HIVE-23033
> URL: https://issues.apache.org/jira/browse/HIVE-23033
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0, 3.1.1, 3.1.2
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
> Fix For: 4.0.0, 3.1.3
>
> Attachments: HIVE-23033.1.patch
>
>
> * The initial value for this table in the schema scripts was removed in 
> HIVE-17566: 
> https://github.com/apache/hive/commit/32b7abac961ca3879d23b074357f211fc7c49131#diff-3d1a4bae0d5d53c8e4ea79951ebf5eceL598
> * This was fixed in a number of scripts in HIVE-18781, but not for mssql: 
> https://github.com/apache/hive/commit/59483bca262880d3e7ef1b873d3c21176e9294cb#diff-4f43efd5a45cc362cb138287d90dbf82
> * This is as is since then
> When using the schematool, the table gets initialized by other means.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23036) Incorrect ORC PPD eval with sub-millisecond timestamps

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-23036:
--
Labels: pull-request-available  (was: )

> Incorrect ORC PPD eval with sub-millisecond timestamps
> --
>
> Key: HIVE-23036
> URL: https://issues.apache.org/jira/browse/HIVE-23036
> Project: Hive
>  Issue Type: Bug
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> See [ORC-611|https://issues.apache.org/jira/browse/ORC-611] for more details
> ORC stores timestamps with:
>  - nanosecond precision for the data itself
>  - milliseconds precision for min-max statistics
> As both min and max are rounded to the same value,  timestamps with ns 
> precision will not pass the PPD evaluator.
> {code:java}
> create table tsstat (ts timestamp) stored as orc;
> insert into tsstat values ("1970-01-01 00:00:00.0005");
> select * from tsstat where ts = "1970-01-01 00:00:00.0005";
> -- returned 0 rows{code}
> ORC PPD evaluation currently happens as part of OrcInputFormat 
> [https://github.com/apache/hive/blob/7e39a2c13711f9377c9ce1edb4224880421b1ea5/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L2314]
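
As an aside, a small self-contained sketch of the rounding mismatch described above; it uses java.sql.Timestamp instead of ORC's own statistics classes, so it only illustrates the effect, not the actual evaluator code.

{code:java}
import java.sql.Timestamp;

// Statistics keep only millisecond precision, so min and max derived from the
// very same row both collapse to .000 and the sub-millisecond value falls
// outside the rounded [min, max] range, i.e. the row group is wrongly skipped.
public class TimestampPpdSketch {
  public static void main(String[] args) {
    Timestamp rowValue = Timestamp.valueOf("1970-01-01 00:00:00.0005"); // 500,000 ns
    Timestamp statMin = new Timestamp(rowValue.getTime()); // truncated to millis
    Timestamp statMax = new Timestamp(rowValue.getTime()); // truncated to millis
    boolean mightMatch = !rowValue.before(statMin) && !rowValue.after(statMax);
    System.out.println("row group kept? " + mightMatch); // prints false
  }
}
{code}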



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23036) Incorrect ORC PPD eval with sub-millisecond timestamps

2020-03-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23036?focusedWorklogId=404739=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-404739
 ]

ASF GitHub Bot logged work on HIVE-23036:
-

Author: ASF GitHub Bot
Created on: 17/Mar/20 14:19
Start Date: 17/Mar/20 14:19
Worklog Time Spent: 10m 
  Work Description: pgaref commented on pull request #956: HIVE-23036 
Reproducing ORC Timestamp precision issue with PPD
URL: https://github.com/apache/hive/pull/956
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 404739)
Remaining Estimate: 0h
Time Spent: 10m

> Incorrect ORC PPD eval with sub-millisecond timestamps
> --
>
> Key: HIVE-23036
> URL: https://issues.apache.org/jira/browse/HIVE-23036
> Project: Hive
>  Issue Type: Bug
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See [ORC-611|https://issues.apache.org/jira/browse/ORC-611] for more details
> ORC stores timestamps with:
>  - nanosecond precision for the data itself
>  - milliseconds precision for min-max statistics
> As both min and max are rounded to the same value,  timestamps with ns 
> precision will not pass the PPD evaluator.
> {code:java}
> create table tsstat (ts timestamp) stored as orc;
> insert into tsstat values ("1970-01-01 00:00:00.0005");
> select * from tsstat where ts = "1970-01-01 00:00:00.0005";
> -- returned 0 rows{code}
> ORC PPD evaluation currently happens as part of OrcInputFormat 
> [https://github.com/apache/hive/blob/7e39a2c13711f9377c9ce1edb4224880421b1ea5/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L2314]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23036) Incorrect ORC PPD eval with sub-millisecond timestamps

2020-03-17 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned HIVE-23036:
-


> Incorrect ORC PPD eval with sub-millisecond timestamps
> --
>
> Key: HIVE-23036
> URL: https://issues.apache.org/jira/browse/HIVE-23036
> Project: Hive
>  Issue Type: Bug
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>
> See [ORC-611|https://issues.apache.org/jira/browse/ORC-611] for more details
> ORC stores timestamps with:
>  - nanosecond precision for the data itself
>  - millisecond precision for the min-max statistics
> Because both min and max are rounded to the same millisecond value, timestamps 
> with sub-millisecond (ns) precision fail the PPD evaluation.
> {code:java}
> create table tsstat (ts timestamp) stored as orc;
> insert into tsstat values ("1970-01-01 00:00:00.0005");
> select * from tsstat where ts = "1970-01-01 00:00:00.0005";
> -- returned 0 rows{code}
> ORC PPD evaluation currently happens as part of OrcInputFormat 
> [https://github.com/apache/hive/blob/7e39a2c13711f9377c9ce1edb4224880421b1ea5/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L2314]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23035) Scheduled query executor may hang in case TezAMs are launched on-demand

2020-03-17 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23035:

Status: Patch Available  (was: Open)

> Scheduled query executor may hang in case TezAMs are launched on-demand
> ---
>
> Key: HIVE-23035
> URL: https://issues.apache.org/jira/browse/HIVE-23035
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> Right now the scheduled query (schq) executor hangs during session 
> initialization, because it tries to open the Tez session while it is still 
> initializing the SessionState.
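The failure mode is easier to see in isolation. The sketch below is hypothetical (none of these classes exist in Hive); it only models the pattern described above: initialization blocks synchronously on a Tez AM that is launched on demand, so if the AM never comes up the executor thread hangs, whereas a bounded wait lets it fail or retry.

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the hang: session init waits on an on-demand Tez AM.
public class EagerInitHangSketch {

    // Stands in for the on-demand Tez AM becoming ready; never counted down here.
    static final CountDownLatch amAvailable = new CountDownLatch(1);

    // Eager variant: waits with no bound, so the executor thread hangs
    // if no AM is ever launched.
    static void initSessionEagerly() throws InterruptedException {
        amAvailable.await();
    }

    // Bounded variant: the executor can surface a failure (or retry later)
    // instead of hanging inside initialization forever.
    static boolean initSessionWithTimeout(long seconds) throws InterruptedException {
        return amAvailable.await(seconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService schqExecutor = Executors.newSingleThreadExecutor();
        // Calling initSessionEagerly() here instead would block forever.
        Future<Boolean> init = schqExecutor.submit(() -> initSessionWithTimeout(2));
        System.out.println("session initialized? " + init.get()); // false after 2s
        schqExecutor.shutdownNow();
    }
}
{code}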



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23035) Scheduled query executor may hang in case TezAMs are launched on-demand

2020-03-17 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23035:

Attachment: HIVE-23035.01.patch

> Scheduled query executor may hang in case TezAMs are launched on-demand
> ---
>
> Key: HIVE-23035
> URL: https://issues.apache.org/jira/browse/HIVE-23035
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23035.01.patch
>
>
> Right now the scheduled query (schq) executor hangs during session 
> initialization, because it tries to open the Tez session while it is still 
> initializing the SessionState.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23035) Scheduled query executor may hang in case TezAMs are launched on-demand

2020-03-17 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-23035:
---


> Scheduled query executor may hang in case TezAMs are launched on-demand
> ---
>
> Key: HIVE-23035
> URL: https://issues.apache.org/jira/browse/HIVE-23035
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> Right now the scheduled query (schq) executor hangs during session 
> initialization, because it tries to open the Tez session while it is still 
> initializing the SessionState.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23033) MSSQL metastore schema init script doesn't initialize NOTIFICATION_SEQUENCE

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060904#comment-17060904
 ] 

Hive QA commented on HIVE-23033:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21141/dev-support/hive-personality.sh
 |
| git revision | master / 4daa57c |
| Default Java | 1.8.0_111 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21141/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> MSSQL metastore schema init script doesn't initialize NOTIFICATION_SEQUENCE
> ---
>
> Key: HIVE-23033
> URL: https://issues.apache.org/jira/browse/HIVE-23033
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0, 3.1.1, 3.1.2
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
> Fix For: 4.0.0, 3.1.3
>
> Attachments: HIVE-23033.1.patch
>
>
> * The initial value for this table in the schema scripts was removed in 
> HIVE-17566: 
> https://github.com/apache/hive/commit/32b7abac961ca3879d23b074357f211fc7c49131#diff-3d1a4bae0d5d53c8e4ea79951ebf5eceL598
> * This was fixed in a number of scripts in HIVE-18781, but not for MSSQL: 
> https://github.com/apache/hive/commit/59483bca262880d3e7ef1b873d3c21176e9294cb#diff-4f43efd5a45cc362cb138287d90dbf82
> * The MSSQL script has remained that way since then.
> When using the schematool, the table gets initialized by other means (a 
> hedged sketch of the missing seed row follows below).
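For illustration only, here is a minimal JDBC sketch of the kind of guarded seed insert the other dialect scripts gained in HIVE-18781. The column names NNI_ID and NEXT_EVENT_ID, the JDBC URL, and the credentials are assumptions for the sketch rather than values taken from the MSSQL script; the authoritative fix is the schema-script patch attached to this issue.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical sketch: seed the single NOTIFICATION_SEQUENCE row if missing.
// Assumes the SQL Server JDBC driver is on the classpath; URL/credentials are
// placeholders, column names are taken from the other dialect scripts.
public class SeedNotificationSequence {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://metastore-db:1433;databaseName=hive"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "hive", "hive");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM NOTIFICATION_SEQUENCE")) {
            rs.next();
            if (rs.getInt(1) == 0) {
                // The scripts fixed in HIVE-18781 do this as a single guarded
                // INSERT; a check-then-insert is enough for a sketch.
                stmt.executeUpdate(
                    "INSERT INTO NOTIFICATION_SEQUENCE (NNI_ID, NEXT_EVENT_ID) VALUES (1, 1)");
                System.out.println("seed row inserted");
            } else {
                System.out.println("NOTIFICATION_SEQUENCE already initialized");
            }
        }
    }
}
{code}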



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Attachment: HIVE-22990.21.patch
Status: Patch Available  (was: In Progress)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.21.patch, HIVE-22990.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Status: In Progress  (was: Patch Available)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060896#comment-17060896
 ] 

Hive QA commented on HIVE-22990:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996867/HIVE-22990.19.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 18107 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query3] 
(batchId=306)
org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testIfBootstrapReplLoadFailWhenRetryAfterBootstrapComplete
 (batchId=268)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances.testIfBootstrapReplLoadFailWhenRetryAfterBootstrapComplete
 (batchId=273)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21140/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21140/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21140/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996867 - PreCommit-HIVE-Build

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22842) Timestamp/date vectors in Arrow serializer should use correct calendar for value representation

2020-03-17 Thread Shubham Chaurasia (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubham Chaurasia updated HIVE-22842:
-
Attachment: HIVE-22842.02.patch

> Timestamp/date vectors in Arrow serializer should use correct calendar for 
> value representation
> ---
>
> Key: HIVE-22842
> URL: https://issues.apache.org/jira/browse/HIVE-22842
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jesus Camacho Rodriguez
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22842.01.patch, HIVE-22842.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22990) Build acknowledgement mechanism for repl dump and load

2020-03-17 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22990:
---
Attachment: HIVE-22990.20.patch
Status: Patch Available  (was: In Progress)

> Build acknowledgement mechanism for repl dump and load
> --
>
> Key: HIVE-22990
> URL: https://issues.apache.org/jira/browse/HIVE-22990
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22990.01.patch, HIVE-22990.02.patch, 
> HIVE-22990.03.patch, HIVE-22990.04.patch, HIVE-22990.05.patch, 
> HIVE-22990.06.patch, HIVE-22990.07.patch, HIVE-22990.08.patch, 
> HIVE-22990.09.patch, HIVE-22990.10.patch, HIVE-22990.11.patch, 
> HIVE-22990.12.patch, HIVE-22990.13.patch, HIVE-22990.14.patch, 
> HIVE-22990.15.patch, HIVE-22990.16.patch, HIVE-22990.17.patch, 
> HIVE-22990.18.patch, HIVE-22990.19.patch, HIVE-22990.20.patch, 
> HIVE-22990.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

