[jira] [Commented] (HIVE-9995) ACID compaction tries to compact a single file

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797852#comment-16797852
 ] 

Hive QA commented on HIVE-9995:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 3 new + 680 unchanged - 7 
fixed = 683 total (was 687) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16604/dev-support/hive-personality.sh
 |
| git revision | master / 25b14be |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16604/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16604/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> ACID compaction tries to compact a single file
> --
>
> Key: HIVE-9995
> URL: https://issues.apache.org/jira/browse/HIVE-9995
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-9995.01.patch, HIVE-9995.02.patch, 
> HIVE-9995.WIP.patch
>
>
> Consider TestWorker.minorWithOpenInMiddle()
> Since there is an open txnId=23, there is no meaningful minor compaction 
> work to do.  The system still tries to compact the single delta file for the 
> 21-22 id range and effectively copies the file onto itself.
> This is (1) inefficient and (2) can potentially affect a reader. (A minimal 
> guard sketch follows the listing below.)
> (from a real cluster)
> Suppose we start with 
> {noformat}
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:03 
> /user/hive/warehouse/t/base_016
> -rw-r--r--   1 ekoifman staff602 2016-06-09 16:03 
> /user/hive/warehouse/t/base_016/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017
> -rw-r--r--   1 ekoifman staff588 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_017_017_
> -rw-r--r--   1 ekoifman staff514 2016-06-09 16:06 
> 
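A minimal guard sketch for the behaviour described above, using hypothetical 
names rather than the actual Worker/CompactorMR code: skip a minor compaction 
request when there is at most one delta to merge and no base to rewrite, since 
"compacting" it would only copy the file onto itself.

{code:java}
// Hypothetical guard, not Hive's real compactor API.
final class CompactionGuard {

  /**
   * deltaCount is the number of delta directories visible below the lowest
   * open transaction; baseNeedsRewrite is true when the base must be rewritten.
   */
  static boolean hasWorkToDo(int deltaCount, boolean baseNeedsRewrite) {
    // A single (or missing) delta with an untouched base is a no-op compaction:
    // running it would just rewrite the same data into a new directory.
    return baseNeedsRewrite || deltaCount > 1;
  }
}
{code}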

[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797839#comment-16797839
 ] 

Hive QA commented on HIVE-21304:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963140/HIVE-21304.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 15833 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=65)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=155)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=191)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket4] 
(batchId=146)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketmapjoin7] 
(batchId=125)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[disable_merge_for_bucketing]
 (batchId=147)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16603/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16603/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16603/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963140 - PreCommit-HIVE-Build

> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21283) Create Synonym mid for substr, position for locate

2019-03-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21283?focusedWorklogId=216599&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-216599
 ]

ASF GitHub Bot logged work on HIVE-21283:
-

Author: ASF GitHub Bot
Created on: 21/Mar/19 05:16
Start Date: 21/Mar/19 05:16
Worklog Time Spent: 10m 
  Work Description: rmsmani commented on issue #540: HIVE-21283 Synonyms 
for the existing functions
URL: https://github.com/apache/hive/pull/540#issuecomment-475115627
 
 
   Hi @sankarh
   At last the unit tests came back green. Please review and merge the code to master.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 216599)
Time Spent: 3h  (was: 2h 50m)

> Create Synonym mid for  substr, position for  locate
> 
>
> Key: HIVE-21283
> URL: https://issues.apache.org/jira/browse/HIVE-21283
> Project: Hive
>  Issue Type: New Feature
>Reporter: Mani M
>Assignee: Mani M
>Priority: Minor
>  Labels: UDF, pull-request-available, todoc4.0
> Fix For: 4.0.0
>
> Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, 
> HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, 
> HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.10.PATCH, 
> HIVE.21283.2.PATCH, HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, 
> image-2019-03-16-21-33-18-898.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Create new synonyms for the existing functions:
>  
> mid for substr
> position for locate



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797819#comment-16797819
 ] 

Hive QA commented on HIVE-21304:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
52s{color} | {color:red} ql: The patch generated 2 new + 989 unchanged - 3 
fixed = 991 total (was 992) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16603/dev-support/hive-personality.sh
 |
| git revision | master / 25b14be |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16603/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql itests/hive-blobstore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16603/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21290) Restore historical way of handling timestamps in Parquet while keeping the new semantics at the same time

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797798#comment-16797798
 ] 

Hive QA commented on HIVE-21290:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963135/HIVE-21290.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16602/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16602/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16602/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-03-21 03:54:32.141
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16602/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-03-21 03:54:32.145
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 25b14be HIVE-21460: ACID: Load data followed by a select * query 
results in incorrect results (Vaibhav Gumashta, reviewed by Gopal V)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 25b14be HIVE-21460: ACID: Load data followed by a select * query 
results in incorrect results (Vaibhav Gumashta, reviewed by Gopal V)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-03-21 03:54:33.336
+ rm -rf ../yetus_PreCommit-HIVE-Build-16602
+ mkdir ../yetus_PreCommit-HIVE-Build-16602
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16602
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16602/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: cannot apply binary patch to 
'data/files/parquet_historical_timestamp_legacy.parq' without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 
'data/files/parquet_historical_timestamp_legacy.parq' without full index line
error: data/files/parquet_historical_timestamp_legacy.parq: patch does not apply
error: cannot apply binary patch to 
'data/files/parquet_historical_timestamp_new.parq' without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 
'data/files/parquet_historical_timestamp_new.parq' without full index line
error: data/files/parquet_historical_timestamp_new.parq: patch does not apply
error: src/java/org/apache/hadoop/hive/common/type/Timestamp.java: does not 
exist in index
error: cannot apply binary patch to 
'files/parquet_historical_timestamp_legacy.parq' without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 
'files/parquet_historical_timestamp_legacy.parq' without full index line
error: files/parquet_historical_timestamp_legacy.parq: patch does not apply
error: cannot apply binary patch to 
'files/parquet_historical_timestamp_new.parq' without full index line
Falling back to three-way merge...
error: cannot apply binary patch to 
'files/parquet_historical_timestamp_new.parq' without full index line
error: files/parquet_historical_timestamp_new.parq: patch does not apply
error: 
src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java: does 
not exist in index
error: 
src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java:
 does not exist in index
error: 
src/java/org/apache/hadoop/hive/ql/io/parquet/timestamp/NanoTimeUtils.java: 
does not exist in index
error: 
src/java/org/apache/hadoop/hive/ql/io/parquet/vector/BaseVectorizedColumnReader.java:
 does not exist in index
error: 

[jira] [Commented] (HIVE-21467) Remove deprecated junit.framework.Assert imports

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797796#comment-16797796
 ] 

Hive QA commented on HIVE-21467:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963131/HIVE-21467.02.patch

{color:green}SUCCESS:{color} +1 due to 96 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15833 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits.testGenericUDTFOrderBySplitCount1
 (batchId=261)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16601/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16601/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16601/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963131 - PreCommit-HIVE-Build

> Remove deprecated junit.framework.Assert imports
> 
>
> Key: HIVE-21467
> URL: https://issues.apache.org/jira/browse/HIVE-21467
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
>  Labels: newbie
> Attachments: HIVE-21467.01.patch, HIVE-21467.02.patch
>
>
> These imports trigger lots of warnings in the IDE, which can be annoying, and 
> they can easily be replaced with org.junit.Assert; the signatures and behavior 
> are the same, so the tests should still pass.
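A tiny before/after illustration of the swap (hypothetical test class; the 
assertion methods keep the same signatures):

{code:java}
// import junit.framework.Assert;   // deprecated, triggers IDE warnings
import org.junit.Assert;            // drop-in replacement
import org.junit.Test;

public class AssertImportExample {
  @Test
  public void addsUp() {
    Assert.assertEquals(4, 2 + 2);  // identical call before and after the import change
  }
}
{code}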



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask

2019-03-20 Thread zhuwei (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797787#comment-16797787
 ] 

zhuwei commented on HIVE-21111:
---

[~lirui] Since it's related to table data size, it's not easy to reproduce from 
scratch. The root cause is that a child task of a conditional task is still a 
conditional task. Please take a look at the code I pasted in the description; I 
think this bug is obvious.

The SQL that triggered this bug in our production environment is as follows:

set hive.auto.convert.join=true;
set hive.optimize.skewjoin = true;
explain
insert overwrite table dw.dwd_tc_order_old_d_orign
select
a.order_no,
a.kdt_id,
a.store_id,
a.order_type,
a.features,
a.state,
a.close_state,
a.pay_state,
b.origin_price, 
a.buy_way,
b.goods_num,
b.goods_pay, 
a.express_type,
case when ((a.state >=6 and a.state <> 99) or a.express_time <> 0) then 1 else 
0 end as express_state,
case when ((a.state >=6 and a.state <> 99) or a.express_time <> 0) then 'a' 
else 'b' end as express_state_name, 
if((a.order_type=6 and a.pay_state>0),1,a.stock_state) as stock_state,
a.customer_id,
a.customer_type,
a.customer_name,
a.buyer_id,
a.buyer_phone,
if(a.book_time=0 or a.book_time is null,'0',udf.format_unixtime(a.book_time)) 
as book_time,
if(a.pay_time=0 or a.pay_time is null,'0',udf.format_unixtime(a.pay_time)) as 
pay_time,
if(a.express_time=0 or a.express_time is 
null,'0',udf.format_unixtime(a.express_time)) as express_time,
if(a.success_time=0 or a.success_time is 
null,'0',udf.format_unixtime(a.success_time)) as success_time,
if(a.close_time=0 or a.close_time is null,0,udf.format_unixtime(a.close_time)) 
as close_time,
if(a.feedback_time=0 or a.feedback_time is 
null,'0',udf.format_unixtime(a.feedback_time)) as feedback_time

FROM 
(
 select order_no, 
kdt_id,store_id,features,state,close_state,pay_state,order_type, 
buy_way,express_type,activity_type,
 
express_state,feedback,refund_state,stock_state,customer_id,customer_type,customer_name,buyer_id,buyer_phone,
 book_time,pay_time, express_time,success_time,close_time,feedback_time
 FROM ods.tc_seller_order 
 where kdt_id<>0
 and (length(order_no)<> 24 OR substr(order_no,1,1) <> 'E' OR 
substr(order_no,-5,1) <> '0') 
) a
join 
(
 select order_no, 
 cast(sum(price * num)as bigint) as origin_price ,
 sum(num) AS goods_num,
 cast(sum(pay_price*num) AS bigint) AS goods_pay 
 from ods.tc_order_item
 where (length(order_no)<> 24 OR substr(order_no,1,1) <> 'E' OR 
substr(order_no,-5,1) <> '0') 
 group by order_no
) b
on a.order_no = b.order_no;

> ConditionalTask cannot be cast to MapRedTask
> 
>
> Key: HIVE-21111
> URL: https://issues.apache.org/jira/browse/HIVE-21111
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 2.1.1, 3.1.1, 2.3.4
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Major
> Attachments: HIVE-21111.1.patch
>
>
> We hit an error like this in our production environment:
> java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.ConditionalTask 
> cannot be cast to org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:173)
>  
> There is a bug in function 
> org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch:
> if (tsk.isMapRedTask()) {
>   Task newTask = this.processCurrentTask((MapRedTask) tsk,
>       ((ConditionalTask) currTask), physicalContext.getContext());
>   walkerCtx.addToDispatchList(newTask);
> }
> In the above code, when tsk is an instance of ConditionalTask, 
> tsk.isMapRedTask() can still be true, but the task cannot be cast to 
> MapRedTask.
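A self-contained sketch of the guard the report implies, with stand-in classes 
rather than Hive's real Task hierarchy (illustrative only, not the patch):

{code:java}
// Stand-ins: a ConditionalTask can report isMapRedTask() == true even though
// it is not a MapRedTask, which is exactly what breaks the unguarded cast.
class Task { boolean isMapRedTask() { return false; } }

class MapRedTask extends Task {
  @Override boolean isMapRedTask() { return true; }
}

class ConditionalTask extends Task {
  @Override boolean isMapRedTask() { return true; }   // children may be MapRed work
}

class DispatchSketch {
  static void dispatch(Task tsk) {
    // Extra instanceof guard: only cast when the task really is a MapRedTask.
    if (tsk.isMapRedTask() && tsk instanceof MapRedTask) {
      MapRedTask mapRed = (MapRedTask) tsk;            // safe cast
      // ... processCurrentTask(mapRed, ...) would run here ...
    }
    // A ConditionalTask now falls through instead of throwing ClassCastException.
  }
}
{code}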



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-13479) Relax sorting requirement in ACID tables

2019-03-20 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797769#comment-16797769
 ] 

Eugene Koifman commented on HIVE-13479:
---

There is no sorting restriction on insert-only ACID tables.
Delete event filtering (HIVE-20738) for full-crud tables relies on the fact 
that data is ordered by ROW__ID.
I don't think there is anything that precludes INSERT INTO T  SORT BY ... 
for full-crud tables.
That should be enough to make min/max in ORC useful for predicate push-down in 
a lot of cases.

IOW is supported and I think it could be used to re-sort the table by any 
column (it will generate new row_ids), but it's currently an operation that 
takes an X lock.  With some work, IOW could run with a less strict lock that 
allows reads but no other writes.  Compaction that does an overwrite would 
have the same issue, which is likely too restrictive.
IOW (whether issued directly by the user or by the compactor) is also 
problematic since it will invalidate all result set caches and materialized 
views.

Incidentally, {{hive.optimize.sort.dynamic.partition=true}} was fixed on ACID 
tables long ago.








> Relax sorting requirement in ACID tables
> 
>
> Key: HIVE-13479
> URL: https://issues.apache.org/jira/browse/HIVE-13479
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 1.2.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>   Original Estimate: 160h
>  Remaining Estimate: 160h
>
> Currently ACID tables require data to be sorted according to the internal 
> primary key.  This is so that base + delta files can be efficiently 
> sort/merged to produce the snapshot for the current transaction.
> This prevents the user from sorting the table by any other criteria, which 
> can be useful.  One example is using dynamic partition insert (which also 
> occurs for update/delete SQL).  This may create lots of writers 
> (buckets*partitions) and tax cluster resources.
> The usual solution, hive.optimize.sort.dynamic.partition=true, won't be 
> honored for ACID tables.
> We could rely on a hash-table-based algorithm to merge delta files and then 
> not require any particular sort on ACID tables.  One way to do that is to 
> treat each update event as an Insert (new internal PK) + a Delete (old PK).  
> Delete events are very small since they just need to contain PKs.  So the 
> hash table would only need to hold Delete events and be reasonably memory 
> efficient (a sketch of this merge follows below).
> This is a significant amount of work but worth doing.
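A minimal sketch of the hash-based merge proposed above, with a hypothetical 
RowKey stand-in for Hive's ROW__ID (illustrative only): buffer just the small 
delete events in a hash set and stream the insert events through in any order.

{code:java}
import java.util.Collection;
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Stream;

final class HashMergeSketch {

  /** Stand-in for the internal PK (originalWriteId, bucket, rowId). */
  static final class RowKey {
    final long writeId; final int bucket; final long rowId;
    RowKey(long writeId, int bucket, long rowId) {
      this.writeId = writeId; this.bucket = bucket; this.rowId = rowId;
    }
    @Override public boolean equals(Object o) {
      if (!(o instanceof RowKey)) return false;
      RowKey k = (RowKey) o;
      return writeId == k.writeId && bucket == k.bucket && rowId == k.rowId;
    }
    @Override public int hashCode() { return Objects.hash(writeId, bucket, rowId); }
  }

  /** Only delete events are held in memory; inserts need no particular sort order. */
  static Stream<RowKey> merge(Stream<RowKey> inserts, Collection<RowKey> deletes) {
    Set<RowKey> deleted = new HashSet<>(deletes);
    return inserts.filter(key -> !deleted.contains(key));
  }
}
{code}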



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21386) Extend the fetch task enhancement done in HIVE-21279 to make it work with query result cache

2019-03-20 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21386:
---
Attachment: HIVE-21386.2.patch

> Extend the fetch task enhancement done in HIVE-21279 to make it work with 
> query result cache
> 
>
> Key: HIVE-21386
> URL: https://issues.apache.org/jira/browse/HIVE-21386
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21386.1.patch, HIVE-21386.2.patch
>
>
> The improvement done in HIVE-21279 is disabled when the query result cache is used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21386) Extend the fetch task enhancement done in HIVE-21279 to make it work with query result cache

2019-03-20 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21386:
---
Status: Patch Available  (was: Open)

> Extend the fetch task enhancement done in HIVE-21279 to make it work with 
> query result cache
> 
>
> Key: HIVE-21386
> URL: https://issues.apache.org/jira/browse/HIVE-21386
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21386.1.patch, HIVE-21386.2.patch
>
>
> The improvement done in HIVE-21279 is disabled when the query result cache is used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21386) Extend the fetch task enhancement done in HIVE-21279 to make it work with query result cache

2019-03-20 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21386:
---
Status: Open  (was: Patch Available)

> Extend the fetch task enhancement done in HIVE-21279 to make it work with 
> query result cache
> 
>
> Key: HIVE-21386
> URL: https://issues.apache.org/jira/browse/HIVE-21386
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21386.1.patch, HIVE-21386.2.patch
>
>
> The improvement done in HIVE-21279 is disabled when the query result cache is used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21406) Add .factorypath files to .gitignore

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797743#comment-16797743
 ] 

Hive QA commented on HIVE-21406:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963116/HIVE-21406.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15833 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16599/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16599/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16599/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963116 - PreCommit-HIVE-Build

> Add .factorypath files to .gitignore
> 
>
> Key: HIVE-21406
> URL: https://issues.apache.org/jira/browse/HIVE-21406
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
> Attachments: HIVE-21406.01.patch, Screen Shot 2019-03-07 at 2.02.10 
> PM.png
>
>
> .factorypath files are generated by Eclipse and should be ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21109) Stats replication for ACID tables.

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797744#comment-16797744
 ] 

Hive QA commented on HIVE-21109:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963122/HIVE-21109.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16600/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16600/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16600/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12963122/HIVE-21109.02.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963122 - PreCommit-HIVE-Build

> Stats replication for ACID tables.
> --
>
> Key: HIVE-21109
> URL: https://issues.apache.org/jira/browse/HIVE-21109
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21109.01.patch, HIVE-21109.02.patch
>
>
> Transactional tables require a writeID associated with the stats update. This 
> writeId needs to be in sync with the writeId on the source and hence needs to 
> be replicated from the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21446) Hive Server going OOM during hive external table replications

2019-03-20 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21446:
---
Status: Patch Available  (was: Open)

> Hive Server going OOM during hive external table replications
> -
>
> Key: HIVE-21446
> URL: https://issues.apache.org/jira/browse/HIVE-21446
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, 
> HIVE-21446.03.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The file system objects opened using proxy users are not closed.
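A sketch of the leak pattern and one way to release the handles (illustrative 
only, not the actual patch; the helper name is made up): FileSystem instances 
obtained under a proxy UGI are cached per UGI, so they need to be closed once 
the work for that user is done.

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

final class ProxyFsSketch {

  /** Checks a path as the proxied user, then releases the cached FileSystem objects. */
  static boolean existsAsProxy(String proxiedUser, Configuration conf, Path path)
      throws Exception {
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser(proxiedUser, UserGroupInformation.getLoginUser());
    try {
      return proxyUgi.doAs((PrivilegedExceptionAction<Boolean>) () ->
          FileSystem.get(conf).exists(path));       // FS handle cached under the proxy UGI
    } finally {
      FileSystem.closeAllForUGI(proxyUgi);          // without this the handles accumulate
    }
  }
}
{code}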



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21446) Hive Server going OOM during hive external table replications

2019-03-20 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21446:
---
Attachment: HIVE-21446.03.patch

> Hive Server going OOM during hive external table replications
> -
>
> Key: HIVE-21446
> URL: https://issues.apache.org/jira/browse/HIVE-21446
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, 
> HIVE-21446.03.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The file system objects opened using proxy users are not closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21446) Hive Server going OOM during hive external table replications

2019-03-20 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21446:
---
Attachment: (was: HIVE-21446.03.patch)

> Hive Server going OOM during hive external table replications
> -
>
> Key: HIVE-21446
> URL: https://issues.apache.org/jira/browse/HIVE-21446
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, 
> HIVE-21446.03.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The file system objects opened using proxy users are not closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21446) Hive Server going OOM during hive external table replications

2019-03-20 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21446:
---
Status: Open  (was: Patch Available)

> Hive Server going OOM during hive external table replications
> -
>
> Key: HIVE-21446
> URL: https://issues.apache.org/jira/browse/HIVE-21446
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, 
> HIVE-21446.03.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The file system objects opened using proxy users are not closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21409) Initial SessionState ClassLoader Reused For Subsequent Sessions

2019-03-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21409?focusedWorklogId=216567&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-216567
 ]

ASF GitHub Bot logged work on HIVE-21409:
-

Author: ASF GitHub Bot
Created on: 21/Mar/19 02:14
Start Date: 21/Mar/19 02:14
Worklog Time Spent: 10m 
  Work Description: shawnweeks commented on issue #575: HIVE-21409 Add Jars 
to Session Conf ClassLoader
URL: https://github.com/apache/hive/pull/575#issuecomment-475092978
 
 
   I suspect I should be doing the same thing for unregisterJar as it has the 
risk of overwriting the classloader in SessionState.get().getConf() as well.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 216567)
Time Spent: 20m  (was: 10m)

> Initial SessionState ClassLoader Reused For Subsequent Sessions
> ---
>
> Key: HIVE-21409
> URL: https://issues.apache.org/jira/browse/HIVE-21409
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Shawn Weeks
>Priority: Minor
>  Labels: pull-request-available
> Attachments: create_class.sql, run.sql, setup.sql
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It appears that the first ClassLoader attached to the SessionState static 
> instance is being reused as the parent for all future sessions. This causes 
> any libraries added to the class path in the initial session to be added to 
> future sessions. It also appears that further sessions may be adding jars to 
> this initial ClassLoader as well, leading to the class path getting more and 
> more polluted. This is occurring on a build including HIVE-11878. I've 
> included some examples that greatly exaggerate the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21409) Initial SessionState ClassLoader Reused For Subsequent Sessions

2019-03-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21409?focusedWorklogId=216565&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-216565
 ]

ASF GitHub Bot logged work on HIVE-21409:
-

Author: ASF GitHub Bot
Created on: 21/Mar/19 01:56
Start Date: 21/Mar/19 01:56
Worklog Time Spent: 10m 
  Work Description: shawnweeks commented on pull request #575: HIVE-21409 
Add Jars to Session Conf ClassLoader
URL: https://github.com/apache/hive/pull/575
 
 
   It is possible that the current thread's classloader may be modified after 
the SessionState HiveConf has been attached to the current thread. This ensures 
we are always adding jars to the correct class loader.
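A minimal sketch of that idea, with a hypothetical helper rather than the real 
patch: resolve the loader from the session's conf, layer the new jar URLs on 
top of it, and store the result back on both the conf and the thread.

{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import org.apache.hadoop.conf.Configuration;

final class AddJarSketch {

  /** Adds jar URLs on top of the loader the session conf actually uses. */
  static void addJars(Configuration sessionConf, URL... jarUrls) {
    ClassLoader parent = sessionConf.getClassLoader();        // not the thread-context loader
    ClassLoader updated = new URLClassLoader(jarUrls, parent);
    sessionConf.setClassLoader(updated);                      // keep the conf ...
    Thread.currentThread().setContextClassLoader(updated);    // ... and the thread in sync
  }
}
{code}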
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 216565)
Time Spent: 10m
Remaining Estimate: 0h

> Initial SessionState ClassLoader Reused For Subsequent Sessions
> ---
>
> Key: HIVE-21409
> URL: https://issues.apache.org/jira/browse/HIVE-21409
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Shawn Weeks
>Priority: Minor
>  Labels: pull-request-available
> Attachments: create_class.sql, run.sql, setup.sql
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It appears that the first ClassLoader attached to the SessionState static 
> instance is being reused as the parent for all future sessions. This causes 
> any libraries added to the class path in the initial session to be added to 
> future sessions. It also appears that further sessions may be adding jars to 
> this initial ClassLoader as well, leading to the class path getting more and 
> more polluted. This is occurring on a build including HIVE-11878. I've 
> included some examples that greatly exaggerate the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21409) Initial SessionState ClassLoader Reused For Subsequent Sessions

2019-03-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-21409:
--
Labels: pull-request-available  (was: )

> Initial SessionState ClassLoader Reused For Subsequent Sessions
> ---
>
> Key: HIVE-21409
> URL: https://issues.apache.org/jira/browse/HIVE-21409
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Shawn Weeks
>Priority: Minor
>  Labels: pull-request-available
> Attachments: create_class.sql, run.sql, setup.sql
>
>
> It appears that the first ClassLoader attached to the SessionState static 
> instance is being reused as the parent for all future sessions. This causes 
> any libraries added to the class path in the initial session to be added to 
> future sessions. It also appears that further sessions may be adding jars to 
> this initial ClassLoader as well, leading to the class path getting more and 
> more polluted. This is occurring on a build including HIVE-11878. I've 
> included some examples that greatly exaggerate the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21406) Add .factorypath files to .gitignore

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797722#comment-16797722
 ] 

Hive QA commented on HIVE-21406:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16599/dev-support/hive-personality.sh
 |
| git revision | master / 25b14be |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16599/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add .factorypath files to .gitignore
> 
>
> Key: HIVE-21406
> URL: https://issues.apache.org/jira/browse/HIVE-21406
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
> Attachments: HIVE-21406.01.patch, Screen Shot 2019-03-07 at 2.02.10 
> PM.png
>
>
> .factorypath files are generated by Eclipse and should be ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21283) Create Synonym mid for substr, position for locate

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797720#comment-16797720
 ] 

Hive QA commented on HIVE-21283:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963114/HIVE.21283.10.PATCH

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15835 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16598/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16598/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16598/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963114 - PreCommit-HIVE-Build

> Create Synonym mid for  substr, position for  locate
> 
>
> Key: HIVE-21283
> URL: https://issues.apache.org/jira/browse/HIVE-21283
> Project: Hive
>  Issue Type: New Feature
>Reporter: Mani M
>Assignee: Mani M
>Priority: Minor
>  Labels: UDF, pull-request-available, todoc4.0
> Fix For: 4.0.0
>
> Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, 
> HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, 
> HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.10.PATCH, 
> HIVE.21283.2.PATCH, HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, 
> image-2019-03-16-21-33-18-898.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Create new synonyms for the existing functions:
>  
> mid for substr
> position for locate



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21283) Create Synonym mid for substr, position for locate

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797698#comment-16797698
 ] 

Hive QA commented on HIVE-21283:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
35s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16598/dev-support/hive-personality.sh
 |
| git revision | master / 25b14be |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16598/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create Synonym mid for  substr, position for  locate
> 
>
> Key: HIVE-21283
> URL: https://issues.apache.org/jira/browse/HIVE-21283
> Project: Hive
>  Issue Type: New Feature
>Reporter: Mani M
>Assignee: Mani M
>Priority: Minor
>  Labels: UDF, pull-request-available, todoc4.0
> Fix For: 4.0.0
>
> Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, 
> HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, 
> HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.10.PATCH, 
> HIVE.21283.2.PATCH, HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, 
> image-2019-03-16-21-33-18-898.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Create new synonyms for the existing functions:
>  
> mid for substr
> position for locate



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-20 Thread Vihang Karajgaonkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-21484:
---
Status: Patch Available  (was: Open)

> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21484.01.patch
>
>
> Currently I see that the {{getVersion}} implementation in the metastore 
> returns a hard-coded "3.0". It would be good to return the real version of 
> the metastore server using {{HiveVersionInfo}} so that clients can take 
> certain actions based on the metastore server version.
> Possible use-cases are:
> 1. Client A can make use of new features introduced in a given Metastore 
> version, else stick to the base functionality.
> 2. This version number can be used to do a version handshake between client 
> and server in the future to improve our cross-version compatibility story.
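A small sketch of the suggested direction (method and field names are 
assumptions, not the actual patch): report the build version instead of the 
constant, falling back to the old value only if the build version is missing.

{code:java}
// Hypothetical helper; in the real fix the version would come from something
// like HiveVersionInfo rather than being passed in.
final class MetastoreVersionSketch {

  private static final String LEGACY_HARD_CODED_VERSION = "3.0";

  static String getVersion(String buildVersion) {
    if (buildVersion == null || buildVersion.isEmpty()) {
      return LEGACY_HARD_CODED_VERSION;   // preserve old behaviour as a fallback
    }
    return buildVersion;                  // e.g. the version from the build metadata
  }
}
{code}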



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-20 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797691#comment-16797691
 ] 

Vihang Karajgaonkar commented on HIVE-21484:


[~ngangam] [~pvary] Can you please take a look?

> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21484.01.patch
>
>
> Currently I see that the {{getVersion}} implementation in the metastore 
> returns a hard-coded "3.0". It would be good to return the real version of 
> the metastore server using {{HiveVersionInfo}} so that clients can take 
> certain actions based on the metastore server version.
> Possible use-cases are:
> 1. Client A can make use of new features introduced in a given Metastore 
> version, else stick to the base functionality.
> 2. This version number can be used to do a version handshake between client 
> and server in the future to improve our cross-version compatibility story.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-20 Thread Vihang Karajgaonkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-21484:
---
Attachment: HIVE-21484.01.patch

> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21484.01.patch
>
>
> Currently I see that the {{getVersion}} implementation in the metastore 
> returns a hard-coded "3.0". It would be good to return the real version of 
> the metastore server using {{HiveVersionInfo}} so that clients can take 
> certain actions based on the metastore server version.
> Possible use-cases are:
> 1. Client A can make use of new features introduced in a given Metastore 
> version, else stick to the base functionality.
> 2. This version number can be used to do a version handshake between client 
> and server in the future to improve our cross-version compatibility story.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21474) Bumping guava version

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797686#comment-16797686
 ] 

Hive QA commented on HIVE-21474:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 0s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} storage-api in master has 48 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
33s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} druid-handler in master has 3 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} itests/qtest-druid in master has 7 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
39s{color} | {color:red} ql generated 3 new + 2244 unchanged - 11 fixed = 2247 
total (was 2255) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 17m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Null passed for non-null parameter of addJars(String) in 
org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.refreshLocalResources(SparkWork,
 HiveConf)  Method invoked at LocalHiveSparkClient.java:of addJars(String) in 
org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.refreshLocalResources(SparkWork,
 HiveConf)  Method invoked at LocalHiveSparkClient.java:[line 195] |
|  |  Null passed for non-null parameter of addJars(String) in 
org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.refreshLocalResources(SparkWork,
 HiveConf)  Method invoked at RemoteHiveSparkClient.java:of addJars(String) in 
org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.refreshLocalResources(SparkWork,
 HiveConf)  Method invoked at RemoteHiveSparkClient.java:[line 238] |
|  |  Null passed for non-null parameter of 
com.google.common.util.concurrent.SettableFuture.set(Object) in 
org.apache.hadoop.hive.ql.exec.tez.WorkloadManager.processCurrentEvents(WorkloadManager$EventState,
 WorkloadManager$WmThreadSyncWork)  At WorkloadManager.java:of 
com.google.common.util.concurrent.SettableFuture.set(Object) in 
org.apache.hadoop.hive.ql.exec.tez.WorkloadManager.processCurrentEvents(WorkloadManager$EventState,
 WorkloadManager$WmThreadSyncWork)  At WorkloadManager.java:[line 733] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional 

[jira] [Commented] (HIVE-21109) Stats replication for ACID tables.

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797681#comment-16797681
 ] 

Hive QA commented on HIVE-21109:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963122/HIVE-21109.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16597/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16597/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16597/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-03-20 23:50:49.859
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16597/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-03-20 23:50:49.863
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   64b8252..25b14be  master -> origin/master
   52aeb29..e9d1e03  branch-3   -> origin/branch-3
+ git reset --hard HEAD
HEAD is now at 64b8252 HIVE-21468: Case sensitivity in identifier names for 
JDBC storage handler (Jesus Camacho Rodriguez, reviewed by Daniel Dai)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 25b14be HIVE-21460: ACID: Load data followed by a select * query 
results in incorrect results (Vaibhav Gumashta, reviewed by Gopal V)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-03-20 23:50:51.937
+ rm -rf ../yetus_PreCommit-HIVE-Build-16597
+ mkdir ../yetus_PreCommit-HIVE-Build-16597
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16597
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16597/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosIncrementalLoadAcidTables.java:
 does not exist in index
error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestStatsReplicationScenarios.java:
 does not exist in index
error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestStatsReplicationScenariosNoAutogather.java:
 does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java: 
does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/FSTableEvent.java:
 does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DummyTxnManager.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java: does not 
exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java: does 
not exist in index
error: 

[jira] [Commented] (HIVE-21474) Bumping guava version

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797678#comment-16797678
 ] 

Hive QA commented on HIVE-21474:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963108/HIVE-21474.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 36 failed/errored test(s), 15795 tests 
executed
*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=267)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=275)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=267)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=275)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=195)

[druidmini_masking.q,druid_timestamptz2.q,druid_timestamptz.q,druidmini_floorTime.q,druid_timeseries.q,druid_topn.q,druidmini_dynamic_partition.q,druidmini_test_ts.q,druidmini_mv.q,druidmini_expressions.q,druidmini_test_alter.q,druidmini_test1.q,druidmini_test_insert.q,druidmini_extractTime.q,druidmini_joins.q]
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=275)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[multi_insert_with_join2] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=54)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_avro]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_csv]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_delimited]
 (batchId=275)
org.apache.hive.spark.client.TestSparkClient.testAddJarsAndFiles (batchId=331)
org.apache.hive.spark.client.TestSparkClient.testCounters (batchId=331)
org.apache.hive.spark.client.TestSparkClient.testErrorJob (batchId=331)
org.apache.hive.spark.client.TestSparkClient.testErrorJobNotSerializable 
(batchId=331)
org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=331)
org.apache.hive.spark.client.TestSparkClient.testMetricsCollection (batchId=331)
org.apache.hive.spark.client.TestSparkClient.testSimpleSparkJob (batchId=331)
org.apache.hive.spark.client.TestSparkClient.testSyncRpc (batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testAutoRegistration 
(batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testDecryptionOnly 
(batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testEmbeddedChannel 
(batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testEncryptDecrypt 
(batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testEncryptionOnly 
(batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testFragmentation 
(batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testKryoCodec 
(batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testMaxMessageSize 
(batchId=331)
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testNegativeMessageSize 
(batchId=331)
org.apache.hive.spark.client.rpc.TestRpc.testBadHello (batchId=331)
org.apache.hive.spark.client.rpc.TestRpc.testClientServer (batchId=331)
org.apache.hive.spark.client.rpc.TestRpc.testCloseListener (batchId=331)
org.apache.hive.spark.client.rpc.TestRpc.testEncryption (batchId=331)
org.apache.hive.spark.client.rpc.TestRpc.testNotDeserializableRpc (batchId=331)
org.apache.hive.spark.client.rpc.TestRpc.testRpcDispatcher (batchId=331)
org.apache.hive.spark.client.rpc.TestRpc.testRpcServerMultiThread (batchId=331)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16596/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16596/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16596/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 36 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963108 - PreCommit-HIVE-Build

> Bumping guava version
> -
>
> Key: HIVE-21474
> URL: https://issues.apache.org/jira/browse/HIVE-21474
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21474.2.patch, HIVE-21474.patch
>
>
> Bump guava to 

[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2019-03-20 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797647#comment-16797647
 ] 

Zoltan Haindrich commented on HIVE-21304:
-

storing the bucketingVersion in the desc made it reach places where it wasn't 
available before - right now I don't see the following as problematic:

* cp_sel: the resultset change is due to reading 3 rows from a table; it now 
reads a different 3 rows
* truncate_column_buckets: the table is most probably distributed by version 2, 
so the number of rows in the 2 files has changed a little

I do see some ways in which this bucketingVersion - or other info - may get 
lost; I would rather address that in a separate ticket.
The issue is with the "clone" methods:
* abstractDesc declares that it doesn't support clones - and by doing so it 
doesn't provide any facility to clone the "abstract" fields
* but there are a lot of "clone()" implementations in descendants...
* I think it might have been organized like this to try to force descendant 
descs to implement it?
* instead of relying on these clone methods, I would probably go for a 
kryo/unkryo round trip (rough sketch below) to also cover all the "other" 
fields which we might have forgotten along the way...
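
For reference, a rough sketch of the kryo/unkryo round trip mentioned above
(illustrative only; Hive's own serialization utilities and the real desc classes
are not used here, and the registration setting is an assumption):

{noformat}
import com.esotericsoftware.kryo.Kryo;
import java.util.HashMap;

public final class DescCloneSketch {
  // Deep-copy an object by routing it through Kryo, so fields that individual
  // clone() implementations forgot (e.g. bucketingVersion) still come along.
  public static <T> T deepCopy(Kryo kryo, T obj) {
    return kryo.copy(obj);
  }

  public static void main(String[] args) {
    Kryo kryo = new Kryo();
    kryo.setRegistrationRequired(false);  // keep the sketch self-contained
    HashMap<String, Integer> original = new HashMap<>();
    original.put("bucketingVersion", 2);
    HashMap<String, Integer> copy = deepCopy(kryo, original);
    System.out.println(copy.equals(original) && copy != original);  // true
  }
}
{noformat}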


> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21460) ACID: Load data followed by a select * query results in incorrect results

2019-03-20 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-21460:
---
Fix Version/s: 3.1.1

> ACID: Load data followed by a select * query results in incorrect results
> -
>
> Key: HIVE-21460
> URL: https://issues.apache.org/jira/browse/HIVE-21460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Blocker
> Fix For: 4.0.0, 3.1.1
>
> Attachments: HIVE-21460.1.patch
>
>
> This affects current master as well. Created an orc file such that it spans 
> multiple stripes and ran a simple select *, and got incorrect row counts 
> (when comparing with select count(*)). The problem seems to be that after 
> split generation and creating min/max rowId for each row (note that since the 
> loaded file is not written by Hive ACID, it does not have ROW__ID in the 
> file; but the ROW__ID is applied on read by discovering min/max bounds which 
> are used for calculating ROW__ID.rowId for each row of a split), Hive is only 
> reading the last split.
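
A schematic illustration of the rowId bookkeeping described above (the types and
field names are invented for the example; this is not Hive's actual split code):

{noformat}
import java.util.Arrays;
import java.util.List;

final class RowIdSketch {
  // Each split of the loaded file gets a contiguous synthetic rowId range; a
  // correct reader must then consume every split, not only the last one.
  static final class SplitRange {
    final long rowCount;
    long minRowId;
    long maxRowId;
    SplitRange(long rowCount) { this.rowCount = rowCount; }
  }

  static void assignRowIds(List<SplitRange> splits) {
    long next = 0;
    for (SplitRange s : splits) {
      s.minRowId = next;
      s.maxRowId = next + s.rowCount - 1;
      next += s.rowCount;
    }
  }

  public static void main(String[] args) {
    List<SplitRange> splits =
        Arrays.asList(new SplitRange(1000), new SplitRange(1000), new SplitRange(500));
    assignRowIds(splits);
    // All 2500 rows are covered only if every split is read.
    System.out.println(splits.get(splits.size() - 1).maxRowId + 1);  // 2500
  }
}
{noformat}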



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21460) ACID: Load data followed by a select * query results in incorrect results

2019-03-20 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-21460:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master, it applies cleanly on 3.x too (build + test kicked off)

> ACID: Load data followed by a select * query results in incorrect results
> -
>
> Key: HIVE-21460
> URL: https://issues.apache.org/jira/browse/HIVE-21460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Blocker
> Fix For: 4.0.0
>
> Attachments: HIVE-21460.1.patch
>
>
> This affects current master as well. Created an orc file such that it spans 
> multiple stripes and ran a simple select *, and got incorrect row counts 
> (when comparing with select count(*)). The problem seems to be that after 
> split generation and creating min/max rowId for each row (note that since the 
> loaded file is not written by Hive ACID, it does not have ROW__ID in the 
> file; but the ROW__ID is applied on read by discovering min/max bounds which 
> are used for calculating ROW__ID.rowId for each row of a split), Hive is only 
> reading the last split.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21460) ACID: Load data followed by a select * query results in incorrect results

2019-03-20 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-21460:
---
Fix Version/s: 4.0.0

> ACID: Load data followed by a select * query results in incorrect results
> -
>
> Key: HIVE-21460
> URL: https://issues.apache.org/jira/browse/HIVE-21460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Blocker
> Fix For: 4.0.0
>
> Attachments: HIVE-21460.1.patch
>
>
> This affects current master as well. Created an orc file such that it spans 
> multiple stripes and ran a simple select *, and got incorrect row counts 
> (when comparing with select count(*)). The problem seems to be that after 
> split generation and creating min/max rowId for each row (note that since the 
> loaded file is not written by Hive ACID, it does not have ROW__ID in the 
> file; but the ROW__ID is applied on read by discovering min/max bounds which 
> are used for calculating ROW__ID.rowId for each row of a split), Hive is only 
> reading the last split.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21482) Partition discovery table property is added to non-partitioned external tables

2019-03-20 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21482:
-
Attachment: HIVE-21482.2.patch

> Partition discovery table property is added to non-partitioned external tables
> --
>
> Key: HIVE-21482
> URL: https://issues.apache.org/jira/browse/HIVE-21482
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21482.1.patch, HIVE-21482.2.patch
>
>
> Automatic partition discovery is added to external tables by default. But it 
> doesn't check if the external table is partitioned or not.
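
A hedged sketch of the guard the description implies (the property key, helper
method and surrounding wiring are assumptions, not the actual patch):

{noformat}
import org.apache.hadoop.hive.metastore.TableType;
import org.apache.hadoop.hive.metastore.api.Table;

final class PartitionDiscoverySketch {
  // Only tag external tables that actually have partition columns.
  static void maybeEnableDiscovery(Table table) {
    boolean external =
        TableType.EXTERNAL_TABLE.toString().equalsIgnoreCase(table.getTableType());
    boolean partitioned = table.getPartitionKeysSize() > 0;
    if (external && partitioned) {
      table.putToParameters("discover.partitions", "true");
    }
  }
}
{noformat}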



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21473) Bumping jackson version

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797616#comment-16797616
 ] 

Hive QA commented on HIVE-21473:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963111/HIVE-21473.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 29 failed/errored test(s), 15833 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testMultipleTriggers1 
(batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testMultipleTriggers2 
(batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedFiles
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomNonExistent
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomReadOps 
(batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerDagRawInputSplitsKill
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerDagTotalTasks 
(batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerDefaultRawInputSplits
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead 
(batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesWrite
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerShortQueryElapsedTime
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerTotalTasks 
(batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerVertexRawInputSplitsKill
 (batchId=263)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=263)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16595/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16595/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16595/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 29 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963111 - PreCommit-HIVE-Build

> Bumping jackson version
> ---
>
> Key: HIVE-21473
> URL: https://issues.apache.org/jira/browse/HIVE-21473
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21473.2.patch, HIVE-21473.patch
>
>
> Bump jackson version to 2.9.8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21484) Metastore API getVersion() should return real version

2019-03-20 Thread Vihang Karajgaonkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-21484:
--


> Metastore API getVersion() should return real version
> -
>
> Key: HIVE-21484
> URL: https://issues.apache.org/jira/browse/HIVE-21484
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
>
> Currently the {{getVersion}} implementation in the metastore returns a 
> hard-coded "3.0". It would be good to return the real version of the 
> metastore server using {{HiveVersionInfo}} so that clients can take certain 
> actions based on the metastore server version.
> Possible use-cases are:
> 1. Client A can make use of new features introduced in a given Metastore 
> version, or else stick to the base functionality.
> 2. This version number can be used to do a version handshake between client 
> and server in the future to improve our cross-version compatibility story.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21473) Bumping jackson version

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797602#comment-16797602
 ] 

Hive QA commented on HIVE-21473:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16595/dev-support/hive-personality.sh
 |
| git revision | master / 64b8252 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16595/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore . testutils/ptest2 U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16595/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Bumping jackson version
> ---
>
> Key: HIVE-21473
> URL: https://issues.apache.org/jira/browse/HIVE-21473
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21473.2.patch, HIVE-21473.patch
>
>
> Bump jackson version to 2.9.8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21205) Tests for replace flag in insert event messages in Metastore notifications.

2019-03-20 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-21205:

Attachment: HIVE-21205.2.patch

> Tests for replace flag in insert event messages in Metastore notifications.
> ---
>
> Key: HIVE-21205
> URL: https://issues.apache.org/jira/browse/HIVE-21205
> Project: Hive
>  Issue Type: Test
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
> Attachments: HIVE-21205.1.patch, HIVE-21205.2.patch
>
>
> The replace flag is initially added in HIVE-16197. It would be good to have 
> some tests in TestDbNotificationListener to validate if the flag is set as 
> expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21430) INSERT into a dynamically partitioned table with hive.stats.autogather = false throws a MetaException

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797570#comment-16797570
 ] 

Hive QA commented on HIVE-21430:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963100/HIVE-21430.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15833 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16594/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16594/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16594/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963100 - PreCommit-HIVE-Build

> INSERT into a dynamically partitioned table with hive.stats.autogather = 
> false throws a MetaException
> -
>
> Key: HIVE-21430
> URL: https://issues.apache.org/jira/browse/HIVE-21430
> Project: Hive
>  Issue Type: Bug
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21430.01.patch, HIVE-21430.02.patch, 
> HIVE-21430.03.patch, HIVE-21430.04.patch, metaexception_repro.patch, 
> org.apache.hadoop.hive.ql.stats.TestStatsUpdaterThread-output.txt
>
>   Original Estimate: 48h
>  Time Spent: 50m
>  Remaining Estimate: 47h 10m
>
> When the test TestStatsUpdaterThread#testTxnDynamicPartitions added in the 
> attached patch is run, it throws an exception (full logs attached).
> org.apache.hadoop.hive.metastore.api.MetaException: Cannot change stats state 
> for a transactional table default.simple_stats without providing the 
> transactional write state for verification (new write ID 5, valid write IDs 
> null; current state \{"BASIC_STATS":"true","COLUMN_STATS":{"s":"true"}}; new 
> state null
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore.alterPartitionNoTxn(ObjectStore.java:4328)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21430) INSERT into a dynamically partitioned table with hive.stats.autogather = false throws a MetaException

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797513#comment-16797513
 ] 

Hive QA commented on HIVE-21430:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
11s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16594/dev-support/hive-personality.sh
 |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16594/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> INSERT into a dynamically partitioned table with hive.stats.autogather = 
> false throws a MetaException
> -
>
> Key: HIVE-21430
> URL: https://issues.apache.org/jira/browse/HIVE-21430
> Project: Hive
>  Issue Type: Bug
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21430.01.patch, HIVE-21430.02.patch, 
> HIVE-21430.03.patch, HIVE-21430.04.patch, metaexception_repro.patch, 
> org.apache.hadoop.hive.ql.stats.TestStatsUpdaterThread-output.txt
>
>   Original Estimate: 48h
>  Time Spent: 50m
>  Remaining Estimate: 47h 10m
>
> When the test TestStatsUpdaterThread#testTxnDynamicPartitions added in the 
> attached patch is run, it throws an exception (full logs attached).
> org.apache.hadoop.hive.metastore.api.MetaException: Cannot change stats state 
> for a transactional table default.simple_stats without providing the 
> transactional write state for verification (new write ID 5, valid write IDs 
> null; current state \{"BASIC_STATS":"true","COLUMN_STATS":{"s":"true"}}; new 
> state null
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore.alterPartitionNoTxn(ObjectStore.java:4328)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21468) Case sensitivity in identifier names for JDBC storage handler

2019-03-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21468:
---
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks [~daijy]

> Case sensitivity in identifier names for JDBC storage handler
> -
>
> Key: HIVE-21468
> URL: https://issues.apache.org/jira/browse/HIVE-21468
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21468.01.patch, HIVE-21468.02.patch, 
> HIVE-21468.patch
>
>
> Currently, when Calcite generates the SQL query for the JDBC storage handler, 
> it will ignore capitalization for the identifier names, which can lead to 
> errors at execution time (though the query is properly generated).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21468) Case sensitivity in identifier names for JDBC storage handler

2019-03-20 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797499#comment-16797499
 ] 

Daniel Dai commented on HIVE-21468:
---

+1

> Case sensitivity in identifier names for JDBC storage handler
> -
>
> Key: HIVE-21468
> URL: https://issues.apache.org/jira/browse/HIVE-21468
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21468.01.patch, HIVE-21468.02.patch, 
> HIVE-21468.patch
>
>
> Currently, when Calcite generates the SQL query for the JDBC storage handler, 
> it will ignore capitalization for the identifier names, which can lead to 
> errors at execution time (though the query is properly generated).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21474) Bumping guava version

2019-03-20 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797495#comment-16797495
 ] 

slim bouguerra commented on HIVE-21474:
---

Druid upstream is not compatible with this version, but within Hive we are 
using a subset of the Druid API, so it might or might not work; let's see what 
the IT tests do.
Also, in general Guava can break a bunch of other stuff within the Hadoop 
ecosystem; do we really need to go all the way to 24.1.1-jre?
I am afraid that in production systems TEZ/Hadoop may be using a different 
version that clashes with this one.

> Bumping guava version
> -
>
> Key: HIVE-21474
> URL: https://issues.apache.org/jira/browse/HIVE-21474
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21474.2.patch, HIVE-21474.patch
>
>
> Bump guava to 24.1.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21034) Add option to schematool to drop Hive databases

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797486#comment-16797486
 ] 

Hive QA commented on HIVE-21034:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963099/HIVE-21034.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15836 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning 
(batchId=337)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16593/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16593/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16593/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963099 - PreCommit-HIVE-Build

> Add option to schematool to drop Hive databases
> ---
>
> Key: HIVE-21034
> URL: https://issues.apache.org/jira/browse/HIVE-21034
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-21034.1.patch, HIVE-21034.2.patch, 
> HIVE-21034.2.patch, HIVE-21034.3.patch, HIVE-21034.4.patch, 
> HIVE-21034.5.patch, HIVE-21034.5.patch
>
>
> An option to remove all Hive managed data could be a useful addition to 
> {{schematool}}.
> I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all 
> databases with CASCADE* to remove all data of managed tables.
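
Roughly what such a flag would need to do, sketched against the public metastore
client API (the wiring into schematool and error handling are omitted, and the
helper name is an assumption):

{noformat}
import org.apache.hadoop.hive.metastore.IMetaStoreClient;

final class DropAllDatabasesSketch {
  // Drop every database with CASCADE so managed-table data is removed as well;
  // the "default" database cannot be dropped, so it is skipped.
  static void dropAll(IMetaStoreClient client) throws Exception {
    for (String db : client.getAllDatabases()) {
      if (!"default".equalsIgnoreCase(db)) {
        client.dropDatabase(db, true /* deleteData */, true /* ignoreUnknownDb */,
            true /* cascade */);
      }
    }
  }
}
{noformat}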



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21034) Add option to schematool to drop Hive databases

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797443#comment-16797443
 ] 

Hive QA commented on HIVE-21034:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
17s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16593/dev-support/hive-personality.sh
 |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16593/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add option to schematool to drop Hive databases
> ---
>
> Key: HIVE-21034
> URL: https://issues.apache.org/jira/browse/HIVE-21034
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-21034.1.patch, HIVE-21034.2.patch, 
> HIVE-21034.2.patch, HIVE-21034.3.patch, HIVE-21034.4.patch, 
> HIVE-21034.5.patch, HIVE-21034.5.patch
>
>
> An option to remove all Hive managed data could be a useful addition to 
> {{schematool}}.
> I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all 
> databases with CASCADE* to remove all data of managed tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2019-03-20 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797430#comment-16797430
 ] 

Vineet Garg commented on HIVE-21304:


[~kgyrtkirk] In the attached patch the results of the following tests have 
changed. Is that expected?

{noformat}
* cp_sel.q.out
* truncate_column_buckets.q.out
{noformat}

> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21205) Tests for replace flag in insert event messages in Metastore notifications.

2019-03-20 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797422#comment-16797422
 ] 

Vihang Karajgaonkar commented on HIVE-21205:


Thanks for the patch [~bharos92]. You can use {{assertTrue}} or {{assertFalse}} 
instead of {{assertEquals(false,value)}}. Also, do you need to add this check 
in the {{sqlInsertTable}} testcase as well? The {{sqlInsertPartition}} test does 
a {{insert into table }} sql as well. It would be good to verify that the 
replace flag is false in such a case too (an illustrative assertion is sketched 
below).
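
For illustration, the kind of assertion being suggested (the test name, message
and the boolean variable are placeholders, not the actual test code):

{noformat}
import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class ReplaceFlagAssertSketch {
  @Test
  public void plainInsertDoesNotReplace() {
    // Placeholder for the replace flag read from the insert event notification.
    boolean replace = false;
    // Prefer the dedicated boolean assert over assertEquals(false, replace).
    assertFalse("plain INSERT INTO should not set the replace flag", replace);
  }
}
{noformat}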

> Tests for replace flag in insert event messages in Metastore notifications.
> ---
>
> Key: HIVE-21205
> URL: https://issues.apache.org/jira/browse/HIVE-21205
> Project: Hive
>  Issue Type: Test
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
> Attachments: HIVE-21205.1.patch
>
>
> The replace flag is initially added in HIVE-16197. It would be good to have 
> some tests in TestDbNotificationListener to validate if the flag is set as 
> expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21462) Upgrading SQL server backed metastore when changing data type of a column with constraints

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797416#comment-16797416
 ] 

Hive QA commented on HIVE-21462:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963097/HIVE-21462.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15832 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16592/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16592/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16592/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963097 - PreCommit-HIVE-Build

> Upgrading SQL server backed metastore when changing data type of a column 
> with constraints
> --
>
> Key: HIVE-21462
> URL: https://issues.apache.org/jira/browse/HIVE-21462
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21462.01.patch, HIVE-21462.02.patch
>
>   Original Estimate: 24h
>  Time Spent: 10m
>  Remaining Estimate: 23h 50m
>
> SQL server does not allow changing data type of a column which has a 
> constraint or an index on it. The constraint or the index needs to be dropped 
> before changing the data type and needs to be recreated after that. Metastore 
> upgrade scripts aren't doing this and thus upgrade fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21462) Upgrading SQL server backed metastore when changing data type of a column with constraints

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797388#comment-16797388
 ] 

Hive QA commented on HIVE-21462:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 21 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16592/dev-support/hive-personality.sh
 |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16592/yetus/whitespace-tabs.txt
 |
| modules | C: standalone-metastore/metastore-server . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16592/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrading SQL server backed metastore when changing data type of a column 
> with constraints
> --
>
> Key: HIVE-21462
> URL: https://issues.apache.org/jira/browse/HIVE-21462
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21462.01.patch, HIVE-21462.02.patch
>
>   Original Estimate: 24h
>  Time Spent: 10m
>  Remaining Estimate: 23h 50m
>
> SQL server does not allow changing data type of a column which has a 
> constraint or an index on it. The constraint or the index needs to be dropped 
> before changing the data type and needs to be recreated after that. Metastore 
> upgrade scripts aren't doing this and thus upgrade fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21471) Replicating conversion of managed to external table leaks HDFS files at target.

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797364#comment-16797364
 ] 

Hive QA commented on HIVE-21471:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963088/HIVE-21471.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15833 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16591/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16591/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16591/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963088 - PreCommit-HIVE-Build

> Replicating conversion of managed to external table leaks HDFS files at 
> target.
> ---
>
> Key: HIVE-21471
> URL: https://issues.apache.org/jira/browse/HIVE-21471
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, replication
> Attachments: HIVE-21471.01.patch
>
>
> While replicating the ALTER event that converts a managed table to an 
> external table, the data location for the table is changed to fall under the 
> input base directory used for external tables replication. But the old 
> location remains there and would be leaked forever.
> ALTER TABLE T1 SET TBLPROPERTIES('EXTERNAL'='true');



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21480) Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup

2019-03-20 Thread Morio Ramdenbourg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Morio Ramdenbourg updated HIVE-21480:
-
Attachment: HIVE-21480.patch
Status: Patch Available  (was: In Progress)

> Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup
> ---
>
> Key: HIVE-21480
> URL: https://issues.apache.org/jira/browse/HIVE-21480
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 4.0.0
>Reporter: Morio Ramdenbourg
>Assignee: Morio Ramdenbourg
>Priority: Major
> Attachments: HIVE-21480.patch
>
>
> [TestHiveMetaStore#testJDOPersistanceManagerCleanup|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java#L3140-L3162]
>  tests whether the JDO persistence manager cache is cleaned up correctly when a 
> HiveMetaStoreClient executes an API call and then closes. It does this by 
> ensuring that the cache object count before the API call and after closing are 
> the same. However, the test makes some assumptions that are not always correct 
> and can cause flakiness.
> For example, lingering resources can be left over from previous tests or from 
> setup, depending on how PTest runs it, and can make the object count incorrect. 
> We should rewrite this test to account for this flakiness.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21471) Replicating conversion of managed to external table leaks HDFS files at target.

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797331#comment-16797331
 ] 

Hive QA commented on HIVE-21471:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
29s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
32s{color} | {color:red} ql generated 1 new + 2255 unchanged - 0 fixed = 2256 
total (was 2255) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.hive.ql.ddl.table.CreateTableOperation.deleteOldDataLocation(String,
 String, Path, boolean)  At CreateTableOperation.java:is not thrown in 
org.apache.hadoop.hive.ql.ddl.table.CreateTableOperation.deleteOldDataLocation(String,
 String, Path, boolean)  At CreateTableOperation.java:[line 166] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16591/dev-support/hive-personality.sh
 |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16591/yetus/new-findbugs-ql.html
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16591/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Replicating conversion of managed to external table leaks HDFS files at 
> target.
> ---
>
> Key: HIVE-21471
> URL: https://issues.apache.org/jira/browse/HIVE-21471
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, replication
> Attachments: HIVE-21471.01.patch
>
>
> While replicating the ALTER event to convert managed table to external table, 
> the data 

[jira] [Commented] (HIVE-21446) Hive Server going OOM during hive external table replications

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797289#comment-16797289
 ] 

Hive QA commented on HIVE-21446:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963063/HIVE-21446.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15832 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testComplexQuery (batchId=263)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=263)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16590/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16590/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16590/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963063 - PreCommit-HIVE-Build

> Hive Server going OOM during hive external table replications
> -
>
> Key: HIVE-21446
> URL: https://issues.apache.org/jira/browse/HIVE-21446
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, 
> HIVE-21446.03.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The file system objects opened using proxy users are not closed.
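As an illustration of the leak described above, here is a minimal sketch (not the actual replication code) of accessing HDFS as a proxy user and then releasing the cached FileSystem instances tied to that UGI. The helper name and the exists() call are hypothetical; the point is the closeAllForUGI cleanup in the finally block.

{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserFsCleanup {
  static boolean existsAsUser(Configuration conf, Path path, String user)
      throws IOException, InterruptedException {
    UserGroupInformation proxy =
        UserGroupInformation.createProxyUser(user, UserGroupInformation.getLoginUser());
    try {
      return proxy.doAs((PrivilegedExceptionAction<Boolean>) () ->
          FileSystem.get(path.toUri(), conf).exists(path));
    } finally {
      // Without this, the per-UGI FileSystem instances accumulate in the cache and leak memory.
      FileSystem.closeAllForUGI(proxy);
    }
  }
}
{code}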



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21446) Hive Server going OOM during hive external table replications

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797259#comment-16797259
 ] 

Hive QA commented on HIVE-21446:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
20s{color} | {color:blue} shims/common in master has 6 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} shims/0.23 in master has 7 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
30s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} shims/common: The patch generated 0 new + 94 
unchanged - 1 fixed = 94 total (was 95) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch 0.23 passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 2 new + 19 unchanged - 2 fixed 
= 21 total (was 21) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16590/dev-support/hive-personality.sh
 |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16590/yetus/diff-checkstyle-ql.txt
 |
| modules | C: shims/common shims/0.23 common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16590/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive Server going OOM during hive external table replications
> -
>
> Key: HIVE-21446
> URL: https://issues.apache.org/jira/browse/HIVE-21446
>

[jira] [Commented] (HIVE-19261) Avro SerDe's InstanceCache should not be synchronized on retrieve

2019-03-20 Thread Alexey Diomin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797257#comment-16797257
 ] 

Alexey Diomin commented on HIVE-19261:
--

The Hive master branch has already migrated to a Java 8-only baseline.

Can we update and merge this fix now?

> Avro SerDe's InstanceCache should not be synchronized on retrieve
> -
>
> Key: HIVE-19261
> URL: https://issues.apache.org/jira/browse/HIVE-19261
> Project: Hive
>  Issue Type: Improvement
>Reporter: Fangshi Li
>Assignee: Fangshi Li
>Priority: Major
> Attachments: HIVE-19261.1.patch
>
>
> In HIVE-16175, upstream made a patch to fix the thread-safety issue in 
> AvroSerDe's InstanceCache. This fix made the retrieve method in InstanceCache 
> synchronized. While it does make InstanceCache thread-safe, synchronizing 
> retrieve can be expensive in a highly concurrent environment like Spark, as 
> multiple threads need to synchronize on entering the entire retrieve method.
> We propose another way to fix this thread-safety issue: make the underlying 
> map of InstanceCache a ConcurrentHashMap. Ideally, we could use the atomic 
> computeIfAbsent in the retrieve method to avoid synchronizing the entire 
> method.
> While computeIfAbsent is only available on Java 8 and Java 7 is still 
> supported in Hive, we use a pattern to simulate the behavior of 
> computeIfAbsent. In the future, we should move to computeIfAbsent when Hive 
> requires Java 8.
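Since the comment above notes that master is now Java 8-only, the proposal boils down to something like the following self-contained sketch of the computeIfAbsent approach. This is an illustration only, not the actual AvroSerDe InstanceCache class or its schema-key handling.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

/** Thread-safe cache: only the creation of a missing entry is serialized, not every lookup. */
public class SimpleInstanceCache<K, V> {
  private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();
  private final Function<K, V> maker;

  public SimpleInstanceCache(Function<K, V> maker) {
    this.maker = maker;
  }

  public V retrieve(K key) {
    // Lock-free for keys already present; atomic per-key creation for absent ones,
    // instead of synchronizing the whole retrieve method.
    return cache.computeIfAbsent(key, maker);
  }
}
{code}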



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21482) Partition discovery table property is added to non-partitioned external tables

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797215#comment-16797215
 ] 

Hive QA commented on HIVE-21482:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963048/HIVE-21482.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15832 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[external_table_purge]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid]
 (batchId=171)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16589/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16589/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16589/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963048 - PreCommit-HIVE-Build

> Partition discovery table property is added to non-partitioned external tables
> --
>
> Key: HIVE-21482
> URL: https://issues.apache.org/jira/browse/HIVE-21482
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21482.1.patch
>
>
> Automatic partition discovery is added to external tables by default. But it 
> doesn't check if the external table is partitioned or not.
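A hedged sketch of the missing guard, using the thrift metastore Table API: only tag a table for discovery if it is external and actually has partition columns. The helper and the property name are illustrative assumptions, not necessarily the exact code path the patch changes.

{code:java}
import org.apache.hadoop.hive.metastore.TableType;
import org.apache.hadoop.hive.metastore.api.Table;

public class PartitionDiscoveryGuard {
  // Assumed name of the partition-discovery table property, for illustration only.
  private static final String DISCOVER_PARTITIONS = "discover.partitions";

  /** Enable discovery only for external tables that actually have partition columns. */
  static void maybeEnableDiscovery(Table table) {
    boolean isExternal =
        TableType.EXTERNAL_TABLE.toString().equalsIgnoreCase(table.getTableType());
    boolean isPartitioned = table.getPartitionKeysSize() > 0;
    if (isExternal && isPartitioned) {
      table.putToParameters(DISCOVER_PARTITIONS, "true");
    }
  }
}
{code}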



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-9995) ACID compaction tries to compact a single file

2019-03-20 Thread Denys Kuzmenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-9995:
-
Attachment: HIVE-9995.02.patch

> ACID compaction tries to compact a single file
> --
>
> Key: HIVE-9995
> URL: https://issues.apache.org/jira/browse/HIVE-9995
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-9995.01.patch, HIVE-9995.02.patch, 
> HIVE-9995.WIP.patch
>
>
> Consider TestWorker.minorWithOpenInMiddle()
> since there is an open txnId=23, this doesn't have any meaningful minor 
> compaction work to do.  The system still tries to compact a single delta file 
> for 21-22 id range, and effectively copies the file onto itself.
> This is 1. inefficient and 2. can potentially affect a reader.
> (from a real cluster)
> Suppose we start with 
> {noformat}
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:03 
> /user/hive/warehouse/t/base_016
> -rw-r--r--   1 ekoifman staff602 2016-06-09 16:03 
> /user/hive/warehouse/t/base_016/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017
> -rw-r--r--   1 ekoifman staff588 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_017_017_
> -rw-r--r--   1 ekoifman staff514 2016-06-09 16:06 
> /user/hive/warehouse/t/delta_017_017_/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_018_018_
> -rw-r--r--   1 ekoifman staff612 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_018_018_/bucket_0
> {noformat}
> then do _alter table T compact 'minor';_
> then we end up with 
> {noformat}
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017
> -rw-r--r--   1 ekoifman staff588 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:11 
> /user/hive/warehouse/t/delta_018_018
> -rw-r--r--   1 ekoifman staff500 2016-06-09 16:11 
> /user/hive/warehouse/t/delta_018_018/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_018_018_
> -rw-r--r--   1 ekoifman staff612 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_018_018_/bucket_0
> {noformat}
> So compaction created a new dir _/user/hive/warehouse/t/delta_018_018_
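One possible shape of the missing check, sketched below: if the eligible range contains only a single delta and nothing to merge it with, the minor compaction is a no-op and should be skipped. This is an illustration of the intended behavior, not the actual Worker/compactor logic.

{code:java}
import java.util.List;

public class MinorCompactionGuard {
  /**
   * A minor compaction that would read exactly one delta directory and write the
   * same data back out again (as in the listing above) copies the file onto itself,
   * so there is no meaningful work to do.
   */
  static boolean hasWorkToDo(List<String> deltaDirsInRange) {
    return deltaDirsInRange.size() > 1;
  }
}
{code}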



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-9995) ACID compaction tries to compact a single file

2019-03-20 Thread Denys Kuzmenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko reassigned HIVE-9995:


Assignee: Denys Kuzmenko  (was: Eugene Koifman)

> ACID compaction tries to compact a single file
> --
>
> Key: HIVE-9995
> URL: https://issues.apache.org/jira/browse/HIVE-9995
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-9995.01.patch, HIVE-9995.02.patch, 
> HIVE-9995.WIP.patch
>
>
> Consider TestWorker.minorWithOpenInMiddle()
> since there is an open txnId=23, this doesn't have any meaningful minor 
> compaction work to do.  The system still tries to compact a single delta file 
> for 21-22 id range, and effectively copies the file onto itself.
> This is 1. inefficient and 2. can potentially affect a reader.
> (from a real cluster)
> Suppose we start with 
> {noformat}
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:03 
> /user/hive/warehouse/t/base_016
> -rw-r--r--   1 ekoifman staff602 2016-06-09 16:03 
> /user/hive/warehouse/t/base_016/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017
> -rw-r--r--   1 ekoifman staff588 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_017_017_
> -rw-r--r--   1 ekoifman staff514 2016-06-09 16:06 
> /user/hive/warehouse/t/delta_017_017_/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_018_018_
> -rw-r--r--   1 ekoifman staff612 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_018_018_/bucket_0
> {noformat}
> then do _alter table T compact 'minor';_
> then we end up with 
> {noformat}
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017
> -rw-r--r--   1 ekoifman staff588 2016-06-09 16:07 
> /user/hive/warehouse/t/base_017/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:11 
> /user/hive/warehouse/t/delta_018_018
> -rw-r--r--   1 ekoifman staff500 2016-06-09 16:11 
> /user/hive/warehouse/t/delta_018_018/bucket_0
> drwxr-xr-x   - ekoifman staff  0 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_018_018_
> -rw-r--r--   1 ekoifman staff612 2016-06-09 16:07 
> /user/hive/warehouse/t/delta_018_018_/bucket_0
> {noformat}
> So compaction created a new dir _/user/hive/warehouse/t/delta_018_018_



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21234) Enforce timestamp range

2019-03-20 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21234:
-
Status: Open  (was: Patch Available)

> Enforce timestamp range
> ---
>
> Key: HIVE-21234
> URL: https://issues.apache.org/jira/browse/HIVE-21234
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: TODOC, backwards-compatibility
> Attachments: HIVE-21234.1.patch, HIVE-21234.2.patch, 
> HIVE-21234.3.patch, HIVE-21234.4.patch
>
>
> Our Wiki specifies a range for DATE, but not for TIMESTAMP (well, there's a 
> specified format () but no explicitly specified range). [1]
> TIMESTAMP used to have inner representation of java.sql.Timestamp which 
> couldn't handle timestamps outside of the range of years -. ( 
> converted to 0001)
> Since the inner representation was changed to LocalDateTime (HIVE-20007), 
> negative timestamps overflow because of a formatting error.
> I propose simply disabling negative timestamps, and timestamps beyond year 
> . No data is much better than bad data.
> See [2] for more details.
> [1] 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-TimestampstimestampTimestamps
> [2] 
> https://docs.google.com/document/d/1y-GcyzzALXM2AJB3bFuyTAEq5fq6p41gu5eH1pF8I7o/edit?usp=sharing
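As a rough illustration of the proposed enforcement, here is a small java.time sketch that rejects out-of-range values instead of letting them overflow. The year values in the quoted description did not survive formatting, so the bounds below (year 1 through year 9999) are assumptions for illustration only.

{code:java}
import java.time.LocalDateTime;

public class TimestampRangeCheck {
  // Illustrative bounds only; the exact limits proposed in the JIRA are not reproduced here.
  private static final LocalDateTime MIN_TS = LocalDateTime.of(1, 1, 1, 0, 0, 0);
  private static final LocalDateTime MAX_TS = LocalDateTime.of(9999, 12, 31, 23, 59, 59);

  /** Fail fast on unsupported timestamps instead of silently producing bad data. */
  static void enforceRange(LocalDateTime ts) {
    if (ts.isBefore(MIN_TS) || ts.isAfter(MAX_TS)) {
      throw new IllegalArgumentException("Timestamp out of supported range: " + ts);
    }
  }
}
{code}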



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21333) [trivial] Fix argument order in TestDateWritableV2#setupDateStrings

2019-03-20 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21333:
-
Labels:   (was: pull-request-available)

> [trivial] Fix argument order in TestDateWritableV2#setupDateStrings
> ---
>
> Key: HIVE-21333
> URL: https://issues.apache.org/jira/browse/HIVE-21333
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-21333.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Calendar#add(int field, int amount) is given the parameters (1, 
> Calendar.DAY_OF_YEAR), which I presume is backwards, especially since this 
> method is called 365 times.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21482) Partition discovery table property is added to non-partitioned external tables

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797193#comment-16797193
 ] 

Hive QA commented on HIVE-21482:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
16s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
12s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 2 new + 47 unchanged - 2 fixed = 49 total (was 49) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16589/dev-support/hive-personality.sh
 |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16589/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server ql hbase-handler U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16589/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Partition discovery table property is added to non-partitioned external tables
> --
>
> Key: HIVE-21482
> URL: https://issues.apache.org/jira/browse/HIVE-21482
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21482.1.patch
>
>
> Automatic partition discovery is added to external tables by default. But it 
> doesn't check if the external table is partitioned or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2019-03-20 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797162#comment-16797162
 ] 

Zoltan Haindrich commented on HIVE-21304:
-

Moving this info to the Desc had a small side effect: after a "distribute by 
key2", the created table doesn't report key as a bucket column.
The qtest is: infer_bucket_sort_num_buckets.q
https://github.com/kgyrtkirk/hive/commit/c8e4f71ce0a35e2d1285588c85f690a810bd0b88#diff-1c745e5be3badca08bc591744ed87fc0L410
I'm not sure how serious this is. May I file a follow-up for it, or is it a big 
issue?

> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21290) Restore historical way of handling timestamps in Parquet while keeping the new semantics at the same time

2019-03-20 Thread Karen Coppage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797158#comment-16797158
 ] 

Karen Coppage commented on HIVE-21290:
--

Patch 1 notes:
* Timestamps are converted from the JVM time zone, not the session ("set time 
zone...") time zone; this is for backwards-compatibility reasons.
* The writer time zone has to be passed through all the vectorized readers so 
that 
org.apache.hadoop.hive.ql.io.parquet.vector.ParquetDataColumnReaderFactory.TypesFromInt96PageReader#convert
 can correctly convert int96 to Timestamp.
* ^ It might be a better idea to pass the entire reader metadata (a Map with ~5 
elements) instead of extracting skipConversion (boolean) and writerTimezone 
(ZoneId) and passing them through all those constructors. Any input is welcome.


> Restore historical way of handling timestamps in Parquet while keeping the 
> new semantics at the same time
> -
>
> Key: HIVE-21290
> URL: https://issues.apache.org/jira/browse/HIVE-21290
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Ivanfi
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21290.1.patch
>
>
> This sub-task is for implementing the Parquet-specific parts of the following 
> plan:
> h1. Problem
> Historically, the semantics of the TIMESTAMP type in Hive depended on the 
> file format. Timestamps in Avro, Parquet and RCFiles with a binary SerDe had 
> _Instant_ semantics, while timestamps in ORC, textfiles and RCFiles with a 
> text SerDe had _LocalDateTime_ semantics.
> The Hive community wanted to get rid of this inconsistency and have 
> _LocalDateTime_ semantics in Avro, Parquet and RCFiles with a binary SerDe as 
> well. *Hive 3.1 turned off normalization to UTC* to achieve this. While this 
> leads to the desired new semantics, it also leads to incorrect results when 
> new Hive versions read timestamps written by old Hive versions or when old 
> Hive versions or any other component not aware of this change (including 
> legacy Impala and Spark versions) read timestamps written by new Hive 
> versions.
> h1. Solution
> To work around this issue, Hive *should restore the practice of normalizing 
> to UTC* when writing timestamps to Avro, Parquet and RCFiles with a binary 
> SerDe. In itself, this would restore the historical _Instant_ semantics, 
> which is undesirable. In order to achieve the desired _LocalDateTime_ 
> semantics in spite of normalizing to UTC, newer Hive versions should record 
> the session-local local time zone in the file metadata fields serving 
> arbitrary key-value storage purposes.
> When reading back files with this time zone metadata, newer Hive versions (or 
> any other new component aware of this extra metadata) can achieve 
> _LocalDateTime_ semantics by *converting from UTC to the saved time zone 
> (instead of to the local time zone)*. Legacy components that are unaware of 
> the new metadata can read the files without any problem and the timestamps 
> will show the historical Instant behaviour to them.
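The conversion the solution describes can be illustrated with plain java.time; the sketch below is not Hive's actual reader/writer code, only the UTC-plus-writer-zone idea stated above.

{code:java}
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class WriterZoneConversion {
  /**
   * Read path: interpret the UTC-normalized value using the time zone recorded in the
   * file metadata, not the reader's local zone, to recover LocalDateTime semantics.
   */
  static LocalDateTime toLocalDateTime(Instant utcInstant, ZoneId writerZone) {
    return utcInstant.atZone(writerZone).toLocalDateTime();
  }

  /** Write path: normalize the session-local value to a UTC instant before writing. */
  static Instant toUtcInstant(LocalDateTime localValue, ZoneId writerZone) {
    return localValue.atZone(writerZone).toInstant();
  }
}
{code}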



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2019-03-20 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21304:

Attachment: HIVE-21304.02.patch

> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21456) Hive Metastore HTTP Thrift

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797149#comment-16797149
 ] 

Hive QA commented on HIVE-21456:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963046/HIVE-21456.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15832 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16588/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16588/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16588/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963046 - PreCommit-HIVE-Build

> Hive Metastore HTTP Thrift
> --
>
> Key: HIVE-21456
> URL: https://issues.apache.org/jira/browse/HIVE-21456
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Standalone Metastore
>Reporter: Amit Khanna
>Assignee: Amit Khanna
>Priority: Major
> Attachments: HIVE-21456.2.patch, HIVE-21456.3.patch, 
> HIVE-21456.4.patch, HIVE-21456.patch
>
>
> Hive Metastore currently does not support HTTP transport, so it is not 
> possible to access it via Knox. Adding support for Thrift over HTTP transport 
> will allow clients to access the Metastore via Knox.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21456) Hive Metastore HTTP Thrift

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797148#comment-16797148
 ] 

Hive QA commented on HIVE-21456:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
38s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
19s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16588/dev-support/hive-personality.sh
 |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16588/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive Metastore HTTP Thrift
> --
>
> Key: HIVE-21456
> URL: https://issues.apache.org/jira/browse/HIVE-21456
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Standalone Metastore
>Reporter: Amit Khanna
>Assignee: Amit Khanna
>Priority: Major
> Attachments: HIVE-21456.2.patch, HIVE-21456.3.patch, 
> HIVE-21456.4.patch, HIVE-21456.patch
>
>
> Hive Metastore currently does not support HTTP transport, so it is not 
> possible to access it via Knox. Adding support for Thrift over HTTP transport 
> will allow clients to access the Metastore via Knox.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21290) Restore historical way of handling timestamps in Parquet while keeping the new semantics at the same time

2019-03-20 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21290:
-
Attachment: HIVE-21290.1.patch
Status: Patch Available  (was: Open)

> Restore historical way of handling timestamps in Parquet while keeping the 
> new semantics at the same time
> -
>
> Key: HIVE-21290
> URL: https://issues.apache.org/jira/browse/HIVE-21290
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Ivanfi
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21290.1.patch
>
>
> This sub-task is for implementing the Parquet-specific parts of the following 
> plan:
> h1. Problem
> Historically, the semantics of the TIMESTAMP type in Hive depended on the 
> file format. Timestamps in Avro, Parquet and RCFiles with a binary SerDe had 
> _Instant_ semantics, while timestamps in ORC, textfiles and RCFiles with a 
> text SerDe had _LocalDateTime_ semantics.
> The Hive community wanted to get rid of this inconsistency and have 
> _LocalDateTime_ semantics in Avro, Parquet and RCFiles with a binary SerDe as 
> well. *Hive 3.1 turned off normalization to UTC* to achieve this. While this 
> leads to the desired new semantics, it also leads to incorrect results when 
> new Hive versions read timestamps written by old Hive versions or when old 
> Hive versions or any other component not aware of this change (including 
> legacy Impala and Spark versions) read timestamps written by new Hive 
> versions.
> h1. Solution
> To work around this issue, Hive *should restore the practice of normalizing 
> to UTC* when writing timestamps to Avro, Parquet and RCFiles with a binary 
> SerDe. In itself, this would restore the historical _Instant_ semantics, 
> which is undesirable. In order to achieve the desired _LocalDateTime_ 
> semantics in spite of normalizing to UTC, newer Hive versions should record 
> the session-local local time zone in the file metadata fields serving 
> arbitrary key-value storage purposes.
> When reading back files with this time zone metadata, newer Hive versions (or 
> any other new component aware of this extra metadata) can achieve 
> _LocalDateTime_ semantics by *converting from UTC to the saved time zone 
> (instead of to the local time zone)*. Legacy components that are unaware of 
> the new metadata can read the files without any problem and the timestamps 
> will show the historical Instant behaviour to them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17404) Orc split generation cache does not handle files without file tail

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797052#comment-16797052
 ] 

Hive QA commented on HIVE-17404:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
15s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16587/dev-support/hive-personality.sh
 |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16587/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Orc split generation cache does not handle files without file tail
> --
>
> Key: HIVE-17404
> URL: https://issues.apache.org/jira/browse/HIVE-17404
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Prasanth Jayachandran
>Assignee: Aditya Shah
>Priority: Critical
> Attachments: HIVE-17404.2.patch, HIVE-17404.patch
>
>
> Some old files do not have an ORC FileTail. If the file tail does not exist, 
> split generation should fall back to the old way of storing footers. 
> This can result in exceptions like below
> {code}
> ORC split generation failed with exception: Malformed ORC file. Invalid 
> postscript length 9
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1735)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1822)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:450)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:569)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   

[jira] [Commented] (HIVE-21466) Increase Default Size of SPLIT_MAXSIZE

2019-03-20 Thread David Mollitor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797136#comment-16797136
 ] 

David Mollitor commented on HIVE-21466:
---

For additional context, this proposed value is still less than what is 
recommended for HoS.

https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

{code}
mapreduce.input.fileinputformat.split.maxsize=75000
{code}

> Increase Default Size of SPLIT_MAXSIZE
> --
>
> Key: HIVE-21466
> URL: https://issues.apache.org/jira/browse/HIVE-21466
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 4.0.0, 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-21466.1.patch, HIVE-21466.2.patch
>
>
> {code:java}
>  MAPREDMAXSPLITSIZE(FileInputFormat.SPLIT_MAXSIZE, 25600L, "", true),
> {code}
> [https://github.com/apache/hive/blob/8d4300a02691777fc96f33861ed27e64fed72f2c/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java#L682]
> This field specifies a maximum size for each MR (and maybe other?) split.
> This number should be a multiple of the HDFS block size. The way this maximum 
> is implemented is that each block is added to the split, and if the split 
> grows larger than the maximum allowed, the split is submitted to the cluster 
> and a new split is opened.
> So, imagine the following scenario:
>  * HDFS block size of 16 bytes
>  * Maximum size of 40 bytes
> This will produce a split with 3 blocks: (2x16) = 32 bytes, so another block 
> is still inserted, giving (3x16) = 48 bytes in the split. So, while many 
> operators would assume a split of 2 blocks, the actual split is 3 blocks. 
> Setting the maximum split size to a multiple of the HDFS block size will make 
> this behavior less confusing.
> The current setting is ~256MB, and when it was introduced, the default HDFS 
> block size was 64MB. That is a factor of 4x. However, HDFS block sizes are now 
> 128MB by default, so I propose setting this to 4x128MB. The larger splits 
> (fewer tasks) should give a nice performance boost on modern hardware.
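To make the accumulation behavior above concrete, here is a small standalone sketch (not the actual FileInputFormat code) of greedy block packing. With 16-byte blocks and a 40-byte maximum, each emitted split ends up holding three blocks (48 bytes), not two (32 bytes).

{code:java}
import java.util.ArrayList;
import java.util.List;

public class SplitSizeDemo {
  /** Blocks are added until the split exceeds maxSize; the crossing block stays in the split. */
  static List<Long> splitSizes(long blockSize, long blockCount, long maxSize) {
    List<Long> splits = new ArrayList<>();
    long current = 0;
    for (long i = 0; i < blockCount; i++) {
      current += blockSize;
      if (current > maxSize) {
        splits.add(current);  // submit this split and open a new one
        current = 0;
      }
    }
    if (current > 0) {
      splits.add(current);
    }
    return splits;
  }

  public static void main(String[] args) {
    // 16-byte blocks, 40-byte maximum, 6 blocks total: prints [48, 48].
    System.out.println(splitSizes(16, 6, 40));
  }
}
{code}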



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21467) Remove deprecated junit.framework.Assert imports

2019-03-20 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797105#comment-16797105
 ] 

Laszlo Bodor commented on HIVE-21467:
-

02.patch handles double<->double comparison errors in test classes: 
TestVectorMathFunctions, TestVectorTypeCasts
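Presumably this is because org.junit.Assert deprecates the two-argument double comparison, so those assertions need an explicit tolerance after the import swap. A minimal illustration (not the actual patched test code) follows.

{code:java}
import org.junit.Assert;
import org.junit.Test;

public class AssertMigrationExample {
  @Test
  public void doubleComparisonNeedsDelta() {
    double expected = 0.1 + 0.2;
    double actual = 0.3;
    // With org.junit.Assert, assertEquals(double, double) without a delta is deprecated
    // and fails at runtime, so the comparison needs an explicit tolerance.
    Assert.assertEquals(expected, actual, 1e-9);
  }
}
{code}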

> Remove deprecated junit.framework.Assert imports
> 
>
> Key: HIVE-21467
> URL: https://issues.apache.org/jira/browse/HIVE-21467
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
>  Labels: newbie
> Attachments: HIVE-21467.01.patch, HIVE-21467.02.patch
>
>
> These imports trigger many warnings in the IDE, which can be annoying. They 
> can easily be replaced with org.junit.Assert; the signature and behavior are 
> the same, so the tests should still pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21467) Remove deprecated junit.framework.Assert imports

2019-03-20 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21467:

Attachment: HIVE-21467.02.patch

> Remove deprecated junit.framework.Assert imports
> 
>
> Key: HIVE-21467
> URL: https://issues.apache.org/jira/browse/HIVE-21467
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
>  Labels: newbie
> Attachments: HIVE-21467.01.patch, HIVE-21467.02.patch
>
>
> These imports trigger many warnings in the IDE, which can be annoying. They 
> can easily be replaced with org.junit.Assert; the signature and behavior are 
> the same, so the tests should still pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15406) Consider vectorizing the new 'trunc' function

2019-03-20 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797100#comment-16797100
 ] 

Zoltan Haindrich commented on HIVE-15406:
-

I'm not sure; but for:
{code}
trunc(CAST (c AS DECIMAL(10,5))) 
{code}
shouldn't we truncate to 5? Right now it adds ~15 zeros, but it adds them 
consistently, so I think that is also fine, since there is a way to specify it 
explicitly. Does this align with the behaviour of the non-vectorized trunc 
method?

> Consider vectorizing the new 'trunc' function
> -
>
> Key: HIVE-15406
> URL: https://issues.apache.org/jira/browse/HIVE-15406
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Matt McCline
>Assignee: Laszlo Bodor
>Priority: Critical
> Attachments: HIVE-15406.01.patch, HIVE-15406.02.patch, 
> HIVE-15406.03.patch, HIVE-15406.04.patch, HIVE-15406.05.patch
>
>
> Rounding function 'trunc' added by HIVE-14582.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-21483) Fix HoS when scratch_dir is using remote HDFS

2019-03-20 Thread Dapeng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun resolved HIVE-21483.
---
Resolution: Duplicate

> Fix HoS when scratch_dir is using remote HDFS
> -
>
> Key: HIVE-21483
> URL: https://issues.apache.org/jira/browse/HIVE-21483
> Project: Hive
>  Issue Type: Bug
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Major
>
> HoS would fail when scratch dir is using remote HDFS:
> {noformat}
>   public static URI uploadToHDFS(URI source, HiveConf conf) throws 
> IOException {
> Path localFile = new Path(source.getPath());
> Path remoteFile = new 
> Path(SessionState.get().getSparkSession().getHDFSSessionDir(),
> getFileName(source));
> -FileSystem fileSystem = FileSystem.get(conf);
> +FileSystem fileSystem = remoteFile.getFileSystem(conf);
> // Overwrite if the remote file already exists. Whether the file can be 
> added
> // on executor is up to spark, i.e. spark.files.overwrite
> fileSystem.copyFromLocalFile(false, true, localFile, remoteFile);
> Path fullPath = fileSystem.getFileStatus(remoteFile).getPath();
> {noformat}
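The one-line change above works because FileSystem.get(conf) always returns the filesystem for fs.defaultFS, while Path.getFileSystem(conf) resolves the filesystem from the path's own scheme and authority. A small sketch of the difference, with a hypothetical remote path:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteFsResolution {
  static void show(Configuration conf) throws IOException {
    // Hypothetical file on a remote HDFS, different from fs.defaultFS.
    Path remoteFile = new Path("hdfs://remote-nn:8020/tmp/session/job.jar");
    // Bound to fs.defaultFS, regardless of where remoteFile actually lives:
    FileSystem defaultFs = FileSystem.get(conf);
    // Resolved from the path's scheme and authority, so it points at the remote HDFS:
    FileSystem remoteFs = remoteFile.getFileSystem(conf);
    System.out.println(defaultFs.getUri() + " vs " + remoteFs.getUri());
  }
}
{code}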



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21483) Fix HoS when scratch_dir is using remote HDFS

2019-03-20 Thread Dapeng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HIVE-21483:
--
Summary: Fix HoS when scratch_dir is using remote HDFS  (was: Fix HoS when 
scratch dir is using remote HDFS)

> Fix HoS when scratch_dir is using remote HDFS
> -
>
> Key: HIVE-21483
> URL: https://issues.apache.org/jira/browse/HIVE-21483
> Project: Hive
>  Issue Type: Bug
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Major
>
> HoS would fail when scratch dir is using remote HDFS:
> {noformat}
>   public static URI uploadToHDFS(URI source, HiveConf conf) throws 
> IOException {
> Path localFile = new Path(source.getPath());
> Path remoteFile = new 
> Path(SessionState.get().getSparkSession().getHDFSSessionDir(),
> getFileName(source));
> -FileSystem fileSystem = FileSystem.get(conf);
> +FileSystem fileSystem = remoteFile.getFileSystem(conf);
> // Overwrite if the remote file already exists. Whether the file can be 
> added
> // on executor is up to spark, i.e. spark.files.overwrite
> fileSystem.copyFromLocalFile(false, true, localFile, remoteFile);
> Path fullPath = fileSystem.getFileStatus(remoteFile).getPath();
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21483) Fix HoS when scratch dir is using remote HDFS

2019-03-20 Thread Dapeng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HIVE-21483:
--
Summary: Fix HoS when scratch dir is using remote HDFS  (was: HoS would 
fail when scratch dir is using remote HDFS)

> Fix HoS when scratch dir is using remote HDFS
> -
>
> Key: HIVE-21483
> URL: https://issues.apache.org/jira/browse/HIVE-21483
> Project: Hive
>  Issue Type: Bug
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Major
>
> HoS would fail when scratch dir is using remote HDFS:
> {noformat}
>   public static URI uploadToHDFS(URI source, HiveConf conf) throws 
> IOException {
> Path localFile = new Path(source.getPath());
> Path remoteFile = new 
> Path(SessionState.get().getSparkSession().getHDFSSessionDir(),
> getFileName(source));
> -FileSystem fileSystem = FileSystem.get(conf);
> +FileSystem fileSystem = remoteFile.getFileSystem(conf);
> // Overwrite if the remote file already exists. Whether the file can be 
> added
> // on executor is up to spark, i.e. spark.files.overwrite
> fileSystem.copyFromLocalFile(false, true, localFile, remoteFile);
> Path fullPath = fileSystem.getFileStatus(remoteFile).getPath();
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21483) HoS would fail when scratch dir is using remote HDFS

2019-03-20 Thread Dapeng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun reassigned HIVE-21483:
-


> HoS would fail when scratch dir is using remote HDFS
> 
>
> Key: HIVE-21483
> URL: https://issues.apache.org/jira/browse/HIVE-21483
> Project: Hive
>  Issue Type: Bug
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Major
>
> HoS would fail when scratch dir is using remote HDFS:
>   public static URI uploadToHDFS(URI source, HiveConf conf) throws 
> IOException {
> Path localFile = new Path(source.getPath());
> Path remoteFile = new 
> Path(SessionState.get().getSparkSession().getHDFSSessionDir(),
> getFileName(source));
> -FileSystem fileSystem = FileSystem.get(conf);
> +FileSystem fileSystem = remoteFile.getFileSystem(conf);
> // Overwrite if the remote file already exists. Whether the file can be 
> added
> // on executor is up to spark, i.e. spark.files.overwrite
> fileSystem.copyFromLocalFile(false, true, localFile, remoteFile);
> Path fullPath = fileSystem.getFileStatus(remoteFile).getPath();
> r



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21483) HoS would fail when scratch dir is using remote HDFS

2019-03-20 Thread Dapeng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HIVE-21483:
--
Description: 
HoS would fail when scratch dir is using remote HDFS:

{noformat}
  public static URI uploadToHDFS(URI source, HiveConf conf) throws IOException {
Path localFile = new Path(source.getPath());
Path remoteFile = new 
Path(SessionState.get().getSparkSession().getHDFSSessionDir(),
getFileName(source));
-FileSystem fileSystem = FileSystem.get(conf);
+FileSystem fileSystem = remoteFile.getFileSystem(conf);
// Overwrite if the remote file already exists. Whether the file can be 
added
// on executor is up to spark, i.e. spark.files.overwrite
fileSystem.copyFromLocalFile(false, true, localFile, remoteFile);
Path fullPath = fileSystem.getFileStatus(remoteFile).getPath();
{noformat}

  was:
HoS would fail when scratch dir is using remote HDFS:

  public static URI uploadToHDFS(URI source, HiveConf conf) throws IOException {
Path localFile = new Path(source.getPath());
Path remoteFile = new 
Path(SessionState.get().getSparkSession().getHDFSSessionDir(),
getFileName(source));
-FileSystem fileSystem = FileSystem.get(conf);
+FileSystem fileSystem = remoteFile.getFileSystem(conf);
// Overwrite if the remote file already exists. Whether the file can be 
added
// on executor is up to spark, i.e. spark.files.overwrite
fileSystem.copyFromLocalFile(false, true, localFile, remoteFile);
Path fullPath = fileSystem.getFileStatus(remoteFile).getPath();
r


> HoS would fail when scratch dir is using remote HDFS
> 
>
> Key: HIVE-21483
> URL: https://issues.apache.org/jira/browse/HIVE-21483
> Project: Hive
>  Issue Type: Bug
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Major
>
> HoS would fail when scratch dir is using remote HDFS:
> {noformat}
>   public static URI uploadToHDFS(URI source, HiveConf conf) throws 
> IOException {
> Path localFile = new Path(source.getPath());
> Path remoteFile = new 
> Path(SessionState.get().getSparkSession().getHDFSSessionDir(),
> getFileName(source));
> -FileSystem fileSystem = FileSystem.get(conf);
> +FileSystem fileSystem = remoteFile.getFileSystem(conf);
> // Overwrite if the remote file already exists. Whether the file can be 
> added
> // on executor is up to spark, i.e. spark.files.overwrite
> fileSystem.copyFromLocalFile(false, true, localFile, remoteFile);
> Path fullPath = fileSystem.getFileStatus(remoteFile).getPath();
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17404) Orc split generation cache does not handle files without file tail

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797089#comment-16797089
 ] 

Hive QA commented on HIVE-17404:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963020/HIVE-17404.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15832 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMergePartitioned01 
(batchId=326)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16587/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16587/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16587/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963020 - PreCommit-HIVE-Build

> Orc split generation cache does not handle files without file tail
> --
>
> Key: HIVE-17404
> URL: https://issues.apache.org/jira/browse/HIVE-17404
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Prasanth Jayachandran
>Assignee: Aditya Shah
>Priority: Critical
> Attachments: HIVE-17404.2.patch, HIVE-17404.patch
>
>
> Some old files do not have an ORC FileTail. If the file tail does not exist, split 
> generation should fall back to the old way of storing footers (a hedged sketch of 
> such a fallback follows the stack trace below).
> This can result in exceptions like the one below:
> {code}
> ORC split generation failed with exception: Malformed ORC file. Invalid 
> postscript length 9
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1735)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1822)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:450)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:569)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.orc.FileFormatException: Malformed ORC file. Invalid 
> postscript length 9
>   at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:297)
>   at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:470)
>   at 
> org.apache.hadoop.hive.ql.io.orc.LocalCache.getAndValidate(LocalCache.java:103)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.getSplits(OrcInputFormat.java:804)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.runGetSplitsSync(OrcInputFormat.java:922)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.generateSplitWork(OrcInputFormat.java:891)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.scheduleSplits(OrcInputFormat.java:1763)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1707)
>   ... 15 more
> {code}
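
To make the intended fallback concrete, here is a hedged sketch only; the class and helper methods are hypothetical stand-ins, not the actual Hive/ORC code touched by the patch, and a plain IOException is caught to keep the sketch self-contained:

{code}
import java.io.IOException;

// Hypothetical sketch of the fallback idea; the stubs below stand in for the
// real footer-reading code.
final class SplitFooterSketch {

  byte[] readFooter(String file) throws IOException {
    try {
      // Fast path: newer ORC files carry a serialized FileTail that can be cached.
      return readFromFileTail(file);
    } catch (IOException malformedOrMissingTail) {
      // Old files have no FileTail (or the tail is malformed, e.g. "Invalid
      // postscript length"); fall back to reading the footer the old way.
      return readFooterDirectly(file);
    }
  }

  private byte[] readFromFileTail(String file) throws IOException {
    throw new IOException("no FileTail in " + file);  // stub for illustration
  }

  private byte[] readFooterDirectly(String file) throws IOException {
    return new byte[0];                               // stub for illustration
  }
}
{code}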



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15406) Consider vectorizing the new 'trunc' function

2019-03-20 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797085#comment-16797085
 ] 

Laszlo Bodor commented on HIVE-15406:
-

Could someone review this patch?
cc: [~ashutoshc]
Thanks in advance.

> Consider vectorizing the new 'trunc' function
> -
>
> Key: HIVE-15406
> URL: https://issues.apache.org/jira/browse/HIVE-15406
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Matt McCline
>Assignee: Laszlo Bodor
>Priority: Critical
> Attachments: HIVE-15406.01.patch, HIVE-15406.02.patch, 
> HIVE-15406.03.patch, HIVE-15406.04.patch, HIVE-15406.05.patch
>
>
> Rounding function 'trunc' added by HIVE-14582.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21474) Bumping guava version

2019-03-20 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797072#comment-16797072
 ] 

Zoltan Haindrich commented on HIVE-21474:
-

[~bslim] is it OK to change "druid.guava.version"? If yes, can we remove it 
and use the standard guava.version instead?

> Bumping guava version
> -
>
> Key: HIVE-21474
> URL: https://issues.apache.org/jira/browse/HIVE-21474
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21474.2.patch, HIVE-21474.patch
>
>
> Bump guava to 24.1.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21109) Stats replication for ACID tables.

2019-03-20 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21109:
--
Attachment: HIVE-21109.02.patch
Status: Patch Available  (was: Open)

Completely removed the need to create a CreateTable task for handling a 
CommitTxnEvent. Attaching a patch to trigger ptests.

> Stats replication for ACID tables.
> --
>
> Key: HIVE-21109
> URL: https://issues.apache.org/jira/browse/HIVE-21109
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21109.01.patch, HIVE-21109.02.patch
>
>
> Transactional tables require a writeID associated with the stats update. This 
> writeId needs to be in sync with the writeId on the source and hence needs to 
> be replicated from the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21109) Stats replication for ACID tables.

2019-03-20 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21109:
--
Attachment: (was: HIVE-21109.02.patch)

> Stats replication for ACID tables.
> --
>
> Key: HIVE-21109
> URL: https://issues.apache.org/jira/browse/HIVE-21109
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21109.01.patch
>
>
> Transactional tables require a writeID associated with the stats update. This 
> writeId needs to be in sync with the writeId on the source and hence needs to 
> be replicated from the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21109) Stats replication for ACID tables.

2019-03-20 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21109:
--
Status: Open  (was: Patch Available)

> Stats replication for ACID tables.
> --
>
> Key: HIVE-21109
> URL: https://issues.apache.org/jira/browse/HIVE-21109
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21109.01.patch
>
>
> Transactional tables require a writeID associated with the stats update. This 
> writeId needs to be in sync with the writeId on the source and hence needs to 
> be replicated from the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21048) Remove needless org.mortbay.jetty from hadoop exclusions

2019-03-20 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797064#comment-16797064
 ] 

Zoltan Haindrich commented on HIVE-21048:
-

+1

> Remove needless org.mortbay.jetty from hadoop exclusions
> 
>
> Key: HIVE-21048
> URL: https://issues.apache.org/jira/browse/HIVE-21048
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-21048.01.patch, HIVE-21048.02.patch, 
> HIVE-21048.03.patch, HIVE-21048.04.patch, HIVE-21048.05.patch, 
> HIVE-21048.06.patch, HIVE-21048.07.patch, HIVE-21048.08.patch, 
> HIVE-21048.08.patch, HIVE-21048.09.patch, HIVE-21048.10.patch, 
> HIVE-21048.11.patch, dep.out
>
>
> During HIVE-20638 I found that the org.mortbay.jetty exclusions from e.g. hadoop 
> don't take effect, since the actual groupId of jetty is org.eclipse.jetty for 
> most current projects; please see the attachment (an example for the hive 
> commons project).
> https://en.wikipedia.org/wiki/Jetty_(web_server)#History



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21406) Add .factorypath files to .gitignore

2019-03-20 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21406:

Attachment: (was: HIVE-21406.01.patch)

> Add .factorypath files to .gitignore
> 
>
> Key: HIVE-21406
> URL: https://issues.apache.org/jira/browse/HIVE-21406
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
> Attachments: HIVE-21406.01.patch, Screen Shot 2019-03-07 at 2.02.10 
> PM.png
>
>
> .factorypath files are generated by eclipse and should be ignored



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21406) Add .factorypath files to .gitignore

2019-03-20 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21406:

Attachment: HIVE-21406.01.patch

> Add .factorypath files to .gitignore
> 
>
> Key: HIVE-21406
> URL: https://issues.apache.org/jira/browse/HIVE-21406
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Minor
> Attachments: HIVE-21406.01.patch, Screen Shot 2019-03-07 at 2.02.10 
> PM.png
>
>
> .factorypath files are generated by eclipse and should be ignored



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21479) NPE during metastore cache update

2019-03-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797022#comment-16797022
 ] 

Hive QA commented on HIVE-21479:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963016/HIVE-21479.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15832 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16586/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16586/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16586/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963016 - PreCommit-HIVE-Build

> NPE during metastore cache update
> -
>
> Key: HIVE-21479
> URL: https://issues.apache.org/jira/browse/HIVE-21479
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21479.1.patch
>
>
> Saw the following stack during a long periodical update:
> {code}
> 2019-03-12T10:01:43,015 ERROR [CachedStore-CacheUpdateService: Thread-36] 
> cache.CachedStore: Update failure:java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.updateTableColStats(CachedStore.java:508)
>   at 
> org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.update(CachedStore.java:461)
>   at 
> org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.run(CachedStore.java:396)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The reason is that we get the table list at a very early stage and then refresh 
> the tables one by one. It is likely that a table gets dropped in the interim. We 
> need to handle this case during the cache update.
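
To illustrate the kind of guard this implies, a hedged sketch only, using a plain Map as a stand-in for the metastore; none of the names below are from the actual patch:

{code}
import java.util.List;
import java.util.Map;

final class CacheUpdateSketch {
  // Refresh cached stats for a table list that was fetched earlier; any table
  // dropped in the meantime simply resolves to null and is skipped instead of
  // triggering a NullPointerException.
  static void refreshAll(List<String> tableNames, Map<String, Object> store) {
    for (String tblName : tableNames) {
      Object table = store.get(tblName);
      if (table == null) {
        continue;  // dropped after the list was taken; nothing to update
      }
      // ... refresh column statistics for 'table' here ...
    }
  }
}
{code}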



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21283) Create Synonym mid for substr, position for locate

2019-03-20 Thread Mani M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani M updated HIVE-21283:
--
Attachment: HIVE.21283.10.PATCH

> Create Synonym mid for  substr, position for  locate
> 
>
> Key: HIVE-21283
> URL: https://issues.apache.org/jira/browse/HIVE-21283
> Project: Hive
>  Issue Type: New Feature
>Reporter: Mani M
>Assignee: Mani M
>Priority: Minor
>  Labels: UDF, pull-request-available, todoc4.0
> Fix For: 4.0.0
>
> Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, 
> HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, 
> HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.10.PATCH, 
> HIVE.21283.2.PATCH, HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, 
> image-2019-03-16-21-33-18-898.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Create new synonyms for the existing functions:
>  
> mid for substr
> position for locate



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21283) Create Synonym mid for substr, position for locate

2019-03-20 Thread Mani M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani M updated HIVE-21283:
--
Status: Patch Available  (was: In Progress)

Resubmitting the patch to clear the flaky test.

> Create Synonym mid for  substr, position for  locate
> 
>
> Key: HIVE-21283
> URL: https://issues.apache.org/jira/browse/HIVE-21283
> Project: Hive
>  Issue Type: New Feature
>Reporter: Mani M
>Assignee: Mani M
>Priority: Minor
>  Labels: UDF, pull-request-available, todoc4.0
> Fix For: 4.0.0
>
> Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, 
> HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, 
> HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.10.PATCH, 
> HIVE.21283.2.PATCH, HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, 
> image-2019-03-16-21-33-18-898.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Create new synonyms for the existing functions:
>  
> mid for substr
> position for locate



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21283) Create Synonym mid for substr, position for locate

2019-03-20 Thread Mani M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani M updated HIVE-21283:
--
Status: In Progress  (was: Patch Available)

> Create Synonym mid for  substr, position for  locate
> 
>
> Key: HIVE-21283
> URL: https://issues.apache.org/jira/browse/HIVE-21283
> Project: Hive
>  Issue Type: New Feature
>Reporter: Mani M
>Assignee: Mani M
>Priority: Minor
>  Labels: UDF, pull-request-available, todoc4.0
> Fix For: 4.0.0
>
> Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, 
> HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, 
> HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.2.PATCH, 
> HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, 
> image-2019-03-16-21-33-18-898.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Create new synonyms for the existing functions:
>  
> mid for substr
> position for locate



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21109) Stats replication for ACID tables.

2019-03-20 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21109:
--
Attachment: HIVE-21109.02.patch
Status: Patch Available  (was: In Progress)

Fixed some of the failures from the previous ptest run. Attaching the patch to 
trigger a second ptest run.

> Stats replication for ACID tables.
> --
>
> Key: HIVE-21109
> URL: https://issues.apache.org/jira/browse/HIVE-21109
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21109.01.patch, HIVE-21109.02.patch
>
>
> Transactional tables require a writeID associated with the stats update. This 
> writeId needs to be in sync with the writeId on the source and hence needs to 
> be replicated from the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21109) Stats replication for ACID tables.

2019-03-20 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21109:
--
Status: In Progress  (was: Patch Available)

> Stats replication for ACID tables.
> --
>
> Key: HIVE-21109
> URL: https://issues.apache.org/jira/browse/HIVE-21109
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21109.01.patch
>
>
> Transactional tables require a writeID associated with the stats update. This 
> writeId needs to be in sync with the writeId on the source and hence needs to 
> be replicated from the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2019-03-20 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796997#comment-16796997
 ] 

Zoltan Haindrich commented on HIVE-21304:
-

[~djaiswal]: I think this was missed in the review process around the time the 
feature was done...

Moving it to the desc has already started fixing some issues (originally the value 
was -1, and now it uses version 2 somehow). However, since a qtest seemingly 
regressed (infer_bucket_sort_num_buckets), I'll take a closer look - but we most 
probably have other issues as well, because there are a bunch of clone() methods 
in some Desc-s which may or may not work as we expect.

I'm not sure it's best to populate this version info during "construction" - can't 
we do the bucketVersion population on the operator tree at some point instead? 
If someone later operates on the operator tree, the bucketing version will 
probably not be set to a correct value.
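
As an illustration of populating the bucketing version on the operator tree after construction, here is a hedged sketch; the OpNode interface and method names are made-up stand-ins for Hive's operator classes, not the real API:

{code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Made-up stand-in for Hive's operator hierarchy, for illustration only.
interface OpNode {
  List<OpNode> getChildren();
  void setBucketingVersion(int version);
}

final class BucketingVersionStamper {
  // Walk the finished operator tree and stamp the bucketing version in one
  // place, instead of relying on every Desc constructor (or clone()) to carry
  // it correctly through later tree rewrites.
  static void stamp(OpNode root, int version) {
    Deque<OpNode> stack = new ArrayDeque<>();
    stack.push(root);
    while (!stack.isEmpty()) {
      OpNode op = stack.pop();
      op.setBucketingVersion(version);
      for (OpNode child : op.getChildren()) {
        stack.push(child);
      }
    }
  }
}
{code}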

> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21473) Bumping jackson version

2019-03-20 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21473:
--
Attachment: (was: HIVE-21474.2.patch)

> Bumping jackson version
> ---
>
> Key: HIVE-21473
> URL: https://issues.apache.org/jira/browse/HIVE-21473
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21473.2.patch, HIVE-21473.patch
>
>
> Bump jackson version to 2.9.8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21474) Bumping guava version

2019-03-20 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796984#comment-16796984
 ] 

Peter Vary commented on HIVE-21474:
---

Found several other places with guava references that I missed before...

Let's try now

> Bumping guava version
> -
>
> Key: HIVE-21474
> URL: https://issues.apache.org/jira/browse/HIVE-21474
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21474.2.patch, HIVE-21474.patch
>
>
> Bump guava to 24.1.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21473) Bumping jackson version

2019-03-20 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796977#comment-16796977
 ] 

Peter Vary commented on HIVE-21473:
---

Found 2 new occurrences.

> Bumping jackson version
> ---
>
> Key: HIVE-21473
> URL: https://issues.apache.org/jira/browse/HIVE-21473
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21473.2.patch, HIVE-21473.patch
>
>
> Bump jackson version to 2.9.8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21473) Bumping jackson version

2019-03-20 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21473:
--
Attachment: HIVE-21473.2.patch

> Bumping jackson version
> ---
>
> Key: HIVE-21473
> URL: https://issues.apache.org/jira/browse/HIVE-21473
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21473.2.patch, HIVE-21473.patch
>
>
> Bump jackson version to 2.9.8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21473) Bumping jackson version

2019-03-20 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21473:
--
Attachment: (was: HIVE-21473.2.patch)

> Bumping jackson version
> ---
>
> Key: HIVE-21473
> URL: https://issues.apache.org/jira/browse/HIVE-21473
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21473.2.patch, HIVE-21473.patch
>
>
> Bump jackson version to 2.9.8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21473) Bumping jackson version

2019-03-20 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21473:
--
Attachment: HIVE-21474.2.patch

> Bumping jackson version
> ---
>
> Key: HIVE-21473
> URL: https://issues.apache.org/jira/browse/HIVE-21473
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21473.2.patch, HIVE-21473.patch
>
>
> Bump jackson version to 2.9.8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21474) Bumping guava version

2019-03-20 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21474:
--
Attachment: HIVE-21474.2.patch

> Bumping guava version
> -
>
> Key: HIVE-21474
> URL: https://issues.apache.org/jira/browse/HIVE-21474
> Project: Hive
>  Issue Type: Task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21474.2.patch, HIVE-21474.patch
>
>
> Bump guava to 24.1.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

