[jira] [Commented] (HIVE-21167) Bucketing: Bucketing version 1 is incorrectly partitioning data

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773778#comment-16773778
 ] 

Hive QA commented on HIVE-21167:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} ql: The patch generated 0 new + 37 unchanged - 2 
fixed = 37 total (was 39) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16175/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16175/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Bucketing: Bucketing version 1 is incorrectly partitioning data
> ---
>
> Key: HIVE-21167
> URL: https://issues.apache.org/jira/browse/HIVE-21167
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-21167.1.patch, HIVE-21167.2.patch
>
>
> Using a murmur hash for bucketing columns was introduced in HIVE-18910, 
> following which {{'bucketing_version'='1'}} stands for the old behaviour 
> (where, for example, integer columns were partitioned based on mod values). 
> It looks like we have a bug in the old bucketing scheme now. I could repro it 
> after modifying the existing schema using an alter table add column and adding 
> new data. Repro:
> {code}
> 0: jdbc:hive2://localhost:10010> create transactional table acid_ptn_bucket1 
> (a int, b int) partitioned by(ds string) clustered by (a) into 2 buckets 
> stored as ORC TBLPROPERTIES('bucketing_version'='1', 'transactional'='true', 
> 'transactional_properties'='default');
> No rows affected (0.418 seconds)
> 0: jdbc:hive2://localhost:10010> insert into acid_ptn_bucket1 partition (ds) 
> values(1,2,'today'),(1,3,'today'),(1,4,'yesterday'),(2,2,'yesterday'),(2,3,'today'),(2,4,'today');
> 6 rows affected (3.695 seconds)
> {code}
> Data from ORC file (data as expected):
> {code}
> /apps/hive/warehouse/acid_ptn_bucket1/ds=today/delta_001_001_/bucket_0
> {"operation": 0, "originalTransaction": 1, "bucket": 536870912,
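> The description above says the bug reproduces after modifying the schema with an 
> alter table add column and inserting new data; a hedged sketch of that step (the 
> added column name and the inserted values are illustrative assumptions, not taken 
> from the original repro):
> {code}
> 0: jdbc:hive2://localhost:10010> alter table acid_ptn_bucket1 add columns (c int);
> 0: jdbc:hive2://localhost:10010> insert into acid_ptn_bucket1 partition (ds) 
> values(3,2,10,'today'),(3,3,11,'today'),(4,4,12,'yesterday');
> {code}
> Under bucketing_version 1 the new rows should still be placed by the old 
> mod-based scheme on column a; the reported bug is that they are not partitioned 
> as expected.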

[jira] [Updated] (HIVE-20079) Populate more accurate rawDataSize for parquet format

2019-02-20 Thread Antal Sinkovits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Sinkovits updated HIVE-20079:
---
Attachment: HIVE-20079.4.patch

> Populate more accurate rawDataSize for parquet format
> -
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch, 
> HIVE-20079.3.patch, HIVE-20079.4.patch
>
>
> Run the following queries and you will see that the rawDataSize for the table is 
> incorrectly reported as 4 (that is, the number of fields). We need to populate 
> the correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
>   COLUMN_STATS_ACCURATE   true
>   numFiles1
>   numRows 2
>   rawDataSize 4
>   totalSize   373
>   transient_lastDdlTime   1530660523
> {noformat}
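> A hedged sketch of re-checking the statistics once the rawDataSize computation is 
> fixed (this assumes the standard ANALYZE TABLE syntax; the expected value is not 
> asserted here):
> {noformat}
> ANALYZE TABLE parquet_stats COMPUTE STATISTICS;
> DESC FORMATTED parquet_stats;
> {noformat}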



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21295) StorageHandler shall convert date to string using Hive convention

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773751#comment-16773751
 ] 

Hive QA commented on HIVE-21295:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959513/HIVE-21295.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15809 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16174/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16174/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16174/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959513 - PreCommit-HIVE-Build

> StorageHandler shall convert date to string using Hive convention
> -
>
> Key: HIVE-21295
> URL: https://issues.apache.org/jira/browse/HIVE-21295
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21295.1.patch, HIVE-21295.2.patch
>
>
> If we have a date datatype in mysql and a string datatype defined in hive, 
> JdbcStorageHandler will translate the date to a string with the format 
> yyyy-MM-dd HH:mm:ss. However, the Hive convention is yyyy-MM-dd, so we shall 
> follow the Hive convention. Eg:
> mysql: CREATE TABLE test ("datekey" DATE);
> hive: CREATE TABLE test (datekey string) STORED BY 
> 'org.apache.hive.storage.jdbc.JdbcStorageHandler' TBLPROPERTIES 
> (.."hive.sql.table" = "test"..);
> Then in hive, do: select datekey from test;
> We get: 1999-03-24 00:00:00
> But should be: 1999-03-24
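> For illustration, a hedged, more complete version of the hive DDL above; the 
> connection properties and their values are illustrative assumptions (the original 
> report elides them):
> {code}
> -- the connection properties below are illustrative assumptions
> CREATE TABLE test (datekey string)
> STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
> TBLPROPERTIES (
>   "hive.sql.database.type" = "MYSQL",
>   "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
>   "hive.sql.jdbc.url" = "jdbc:mysql://localhost/testdb",
>   "hive.sql.dbcp.username" = "hiveuser",
>   "hive.sql.dbcp.password" = "hivepass",
>   "hive.sql.table" = "test"
> );
> -- select datekey from test;   expected after the fix: 1999-03-24
> {code}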



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21278) Fix ambiguity in grammar warnings at compilation time

2019-02-20 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773736#comment-16773736
 ] 

Ashutosh Chauhan commented on HIVE-21278:
-

[~jcamachorodriguez] Can you create a follow-up jira for the ambiguity due to 
unknown? I am wondering if we shall revert that feature.

> Fix ambiguity in grammar warnings at compilation time
> -
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21278.01.patch, HIVE-21278.02.patch, 
> HIVE-21278.patch
>
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21294) Vectorization: 1-reducer Shuffle can skip the object hash functions

2019-02-20 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773717#comment-16773717
 ] 

Gopal V commented on HIVE-21294:


[~teddy.choi]: you're right, that's the right operator name - it currently 
computes a hashcode and then does % 1 when there is 1 downstream reducer.
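For context, a hedged illustration of a plan that shuffles to a single reducer and 
would therefore benefit (the table is assumed to exist; a global ORDER BY in Hive is 
typically planned with a single reducer):
{code}
set hive.vectorized.execution.enabled=true;
-- with one downstream reducer, the hash computed in the reduce sink is
-- effectively unused, since hash % 1 is always 0
explain select key, value from src order by key;
{code}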

> Vectorization: 1-reducer Shuffle can skip the object hash functions
> ---
>
> Key: HIVE-21294
> URL: https://issues.apache.org/jira/browse/HIVE-21294
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal V
>Assignee: Teddy Choi
>Priority: Major
>
> VectorObjectSinkHashOperator can skip the object hashing entirely if the 
> reducer count = 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21295) StorageHandler shall convert date to string using Hive convention

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773716#comment-16773716
 ] 

Hive QA commented on HIVE-21295:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
53s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} jdbc-handler in master has 12 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16174/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql jdbc-handler U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16174/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> StorageHandler shall convert date to string using Hive convention
> -
>
> Key: HIVE-21295
> URL: https://issues.apache.org/jira/browse/HIVE-21295
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21295.1.patch, HIVE-21295.2.patch
>
>
> If we have a date datatype in mysql and a string datatype defined in hive, 
> JdbcStorageHandler will translate the date to a string with the format 
> yyyy-MM-dd HH:mm:ss. However, the Hive convention is yyyy-MM-dd, so we shall 
> follow the Hive convention. Eg:
> mysql: CREATE TABLE test ("datekey" DATE);
> hive: CREATE TABLE test (datekey string) STORED BY 
> 'org.apache.hive.storage.jdbc.JdbcStorageHandler' TBLPROPERTIES 
> (.."hive.sql.table" = "test"..);
> Then in hive, do: select datekey from test;
> We get: 1999-03-24 00:00:00
> But should be: 1999-03-24



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21197) Hive replication can add duplicate data during migration to a target with hive.strict.managed.tables enabled

2019-02-20 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773713#comment-16773713
 ] 

mahesh kumar behera commented on HIVE-21197:


[https://github.com/apache/hive/pull/541] – pull request

> Hive replication can add duplicate data during migration to a target with 
> hive.strict.managed.tables enabled
> 
>
> Key: HIVE-21197
> URL: https://issues.apache.org/jira/browse/HIVE-21197
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-21197.01.patch, HIVE-21197.02.patch
>
>
> During the bootstrap phase it may happen that the files copied to the target are 
> created by events which are not part of the bootstrap. This is because bootstrap 
> first gets the last event id and then the file list. If some events are added 
> during this period, then bootstrap will also include the files created by these 
> events. The same files will be copied again during the first incremental 
> replication just after the bootstrap. In a normal scenario, the duplicate copy 
> does not cause any issue, as hive allows the use of the target database only 
> after the first incremental. But in the case of migration, the files at source 
> and target are copied to different locations (based on the write id at the 
> target), and thus this may lead to duplicate data at the target. This can be 
> avoided by having a check at load time for duplicate files. This check needs to 
> be done only for the first incremental, and the search can be done in the 
> bootstrap directory (with write id 1). If the file is already present, then 
> just ignore the copy.
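> A hedged sketch of the bootstrap-then-first-incremental sequence described above 
> (the database name, event id and staging directory are illustrative assumptions):
> {code}
> -- on the source: bootstrap dump, then a later incremental dump
> REPL DUMP repl_db;
> REPL DUMP repl_db FROM 100;
> -- on the target: the first incremental load is where the duplicate-file check
> -- against the bootstrap directory (write id 1) would apply
> REPL LOAD repl_db FROM '/apps/hive/repl/staging/dump_dir';
> {code}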



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21197) Hive replication can add duplicate data during migration to a target with hive.strict.managed.tables enabled

2019-02-20 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21197:

Summary: Hive replication can add duplicate data during migration to a 
target with hive.strict.managed.tables enabled  (was: Hive Replication can add 
duplicate data during migration to a target with hive.strict.managed.tables 
enabled)

> Hive replication can add duplicate data during migration to a target with 
> hive.strict.managed.tables enabled
> 
>
> Key: HIVE-21197
> URL: https://issues.apache.org/jira/browse/HIVE-21197
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-21197.01.patch, HIVE-21197.02.patch
>
>
> During the bootstrap phase it may happen that the files copied to the target are 
> created by events which are not part of the bootstrap. This is because bootstrap 
> first gets the last event id and then the file list. If some events are added 
> during this period, then bootstrap will also include the files created by these 
> events. The same files will be copied again during the first incremental 
> replication just after the bootstrap. In a normal scenario, the duplicate copy 
> does not cause any issue, as hive allows the use of the target database only 
> after the first incremental. But in the case of migration, the files at source 
> and target are copied to different locations (based on the write id at the 
> target), and thus this may lead to duplicate data at the target. This can be 
> avoided by having a check at load time for duplicate files. This check needs to 
> be done only for the first incremental, and the search can be done in the 
> bootstrap directory (with write id 1). If the file is already present, then 
> just ignore the copy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21294) Vectorization: 1-reducer Shuffle can skip the object hash functions

2019-02-20 Thread Teddy Choi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi reassigned HIVE-21294:
-

Assignee: Teddy Choi

> Vectorization: 1-reducer Shuffle can skip the object hash functions
> ---
>
> Key: HIVE-21294
> URL: https://issues.apache.org/jira/browse/HIVE-21294
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal V
>Assignee: Teddy Choi
>Priority: Major
>
> VectorObjectSinkHashOperator can skip the object hashing entirely if the 
> reducer count = 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21294) Vectorization: 1-reducer Shuffle can skip the object hash functions

2019-02-20 Thread Teddy Choi (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773693#comment-16773693
 ] 

Teddy Choi commented on HIVE-21294:
---

I guess you meant VectorReduceSinkObjectHashOperator.

> Vectorization: 1-reducer Shuffle can skip the object hash functions
> ---
>
> Key: HIVE-21294
> URL: https://issues.apache.org/jira/browse/HIVE-21294
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal V
>Assignee: Teddy Choi
>Priority: Major
>
> VectorObjectSinkHashOperator can skip the object hashing entirely if the 
> reducer count = 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773692#comment-16773692
 ] 

Hive QA commented on HIVE-20376:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959512/HIVE-20376.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15810 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
 (batchId=275)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16173/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16173/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16173/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959512 - PreCommit-HIVE-Build

> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch, HIVE-20376.2.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser to handle the 
> following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]
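> A hedged illustration of the two string forms (this assumes casting to the 
> {{timestamp with local time zone}} type, which uses TimestampTZUtil#parse; only the 
> space-separated form is expected to work before the fix):
> {code}
> -- ISO 8601 form that the parser should also accept
> SELECT CAST('2013-08-31T01:02:33Z' AS timestamp with local time zone);
> -- space-separated form
> SELECT CAST('2013-08-31 01:02:33 UTC' AS timestamp with local time zone);
> {code}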



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773667#comment-16773667
 ] 

Hive QA commented on HIVE-20376:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} common: The patch generated 4 new + 6 unchanged - 0 
fixed = 10 total (was 6) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16173/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16173/yetus/diff-checkstyle-common.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16173/yetus/whitespace-eol.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16173/yetus/whitespace-tabs.txt
 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16173/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch, HIVE-20376.2.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser to handle the 
> following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21197) Hive Replication can add duplicate data during migration to a target with hive.strict.managed.tables enabled

2019-02-20 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21197:
---
Status: Open  (was: Patch Available)

> Hive Replication can add duplicate data during migration to a target with 
> hive.strict.managed.tables enabled
> 
>
> Key: HIVE-21197
> URL: https://issues.apache.org/jira/browse/HIVE-21197
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-21197.01.patch
>
>
> During the bootstrap phase it may happen that the files copied to the target are 
> created by events which are not part of the bootstrap. This is because bootstrap 
> first gets the last event id and then the file list. If some events are added 
> during this period, then bootstrap will also include the files created by these 
> events. The same files will be copied again during the first incremental 
> replication just after the bootstrap. In a normal scenario, the duplicate copy 
> does not cause any issue, as hive allows the use of the target database only 
> after the first incremental. But in the case of migration, the files at source 
> and target are copied to different locations (based on the write id at the 
> target), and thus this may lead to duplicate data at the target. This can be 
> avoided by having a check at load time for duplicate files. This check needs to 
> be done only for the first incremental, and the search can be done in the 
> bootstrap directory (with write id 1). If the file is already present, then 
> just ignore the copy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21197) Hive Replication can add duplicate data during migration to a target with hive.strict.managed.tables enabled

2019-02-20 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21197:
---
Attachment: HIVE-21197.02.patch

> Hive Replication can add duplicate data during migration to a target with 
> hive.strict.managed.tables enabled
> 
>
> Key: HIVE-21197
> URL: https://issues.apache.org/jira/browse/HIVE-21197
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-21197.01.patch, HIVE-21197.02.patch
>
>
> During the bootstrap phase it may happen that the files copied to the target are 
> created by events which are not part of the bootstrap. This is because bootstrap 
> first gets the last event id and then the file list. If some events are added 
> during this period, then bootstrap will also include the files created by these 
> events. The same files will be copied again during the first incremental 
> replication just after the bootstrap. In a normal scenario, the duplicate copy 
> does not cause any issue, as hive allows the use of the target database only 
> after the first incremental. But in the case of migration, the files at source 
> and target are copied to different locations (based on the write id at the 
> target), and thus this may lead to duplicate data at the target. This can be 
> avoided by having a check at load time for duplicate files. This check needs to 
> be done only for the first incremental, and the search can be done in the 
> bootstrap directory (with write id 1). If the file is already present, then 
> just ignore the copy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773650#comment-16773650
 ] 

Hive QA commented on HIVE-21292:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959510/HIVE-21292.05.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15809 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16172/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16172/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16172/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959510 - PreCommit-HIVE-Build

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch, HIVE-21292.04.patch, HIVE-21292.05.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc.), so 
> the number of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now, ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the 
> code base, the new ones in the new package are called DDLTask2 and DDLWork2, 
> thus avoiding the use of fully qualified class names where both the old and 
> the new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21197) Hive Replication can add duplicate data during migration to a target with hive.strict.managed.tables enabled

2019-02-20 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21197:
---
Status: Patch Available  (was: Open)

> Hive Replication can add duplicate data during migration to a target with 
> hive.strict.managed.tables enabled
> 
>
> Key: HIVE-21197
> URL: https://issues.apache.org/jira/browse/HIVE-21197
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-21197.01.patch, HIVE-21197.02.patch
>
>
> During the bootstrap phase it may happen that the files copied to the target are 
> created by events which are not part of the bootstrap. This is because bootstrap 
> first gets the last event id and then the file list. If some events are added 
> during this period, then bootstrap will also include the files created by these 
> events. The same files will be copied again during the first incremental 
> replication just after the bootstrap. In a normal scenario, the duplicate copy 
> does not cause any issue, as hive allows the use of the target database only 
> after the first incremental. But in the case of migration, the files at source 
> and target are copied to different locations (based on the write id at the 
> target), and thus this may lead to duplicate data at the target. This can be 
> avoided by having a check at load time for duplicate files. This check needs to 
> be done only for the first incremental, and the search can be done in the 
> bootstrap directory (with write id 1). If the file is already present, then 
> just ignore the copy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773644#comment-16773644
 ] 

Hive QA commented on HIVE-21292:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
50s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} ql: The patch generated 0 new + 507 unchanged - 25 
fixed = 507 total (was 532) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hcatalog/core: The patch generated 0 new + 40 
unchanged - 2 fixed = 40 total (was 42) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} ql generated 0 new + 2259 unchanged - 1 fixed = 2259 
total (was 2260) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16172/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql hcatalog/core itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16172/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement

[jira] [Updated] (HIVE-21280) Null pointer exception on running compaction against a MM table.

2019-02-20 Thread Aditya Shah (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Shah updated HIVE-21280:
---
Attachment: (was: HIVE-21280.patch)

> Null pointer exception on running compaction against a MM table.
> 
>
> Key: HIVE-21280
> URL: https://issues.apache.org/jira/browse/HIVE-21280
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.1
>Reporter: Aditya Shah
>Priority: Major
>
> On running compaction on an MM table, I got a null pointer exception while getting 
> the HDFS session path. The error suggested to me that the session state was not 
> started for these queries. Even after making it start, it further fails in 
> running a Tez task for the insert overwrite on a temp table with the contents of 
> the original table. The cause for this is that the Tez session state is not able 
> to initialize, due to an IllegalArgumentException being thrown at the time of 
> setting up the caller context in the Tez task, because the caller id, which uses 
> the query id, is an empty string. 
> I do think the session state needs to be started and each of the queries running 
> for compaction (I'm also doubtful about the stats updater thread's queries) should 
> have a query id. Some details are as follows:
> Steps to reproduce:
> 1) Using beeline with HS2 and HMS
> 2) create an MM table
> 3) Insert a few values in the table
> 4) alter table mm_table compact 'major'; 
> Stack trace on HMS:
> {code:java}
> compactor.Worker: Caught exception while trying to compact 
> id:8,dbname:default,tableName:acid_mm_orc,partName:null,state:^@,type:MAJOR,properties:null,runAs:null,tooManyAborts:false,highestWriteId:0.
>  Marking failed to avoid repeated failures, java.io.IOException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run create 
> temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` int, `b` 
> string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH 
> SERDEPROPERTIES (
> 'serialization.format'='1')STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 
> 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base'
>  TBLPROPERTIES ('transactional'='false')
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:373)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:241)
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run 
> create temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` 
> int, `b` string) ROW FORMAT SERDE 
> 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH SERDEPROPERTIES (
> 'serialization.format'='1')STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 
> 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base'
>  TBLPROPERTIES ('transactional'='false')
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:525)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:365)
> ... 2 more
> Caused by: java.lang.NullPointerException: Non-local session path expected to 
> be non-null
> at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:228)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:815)
> at org.apache.hadoop.hive.ql.Context.<init>(Context.java:309)
> at org.apache.hadoop.hive.ql.Context.<init>(Context.java:295)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:591)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1684)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1807)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1567)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1556)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:522)
> ... 3 more
> {code}
> cc: [~ekoifman] [~vgumashta] [~sershe]
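> A hedged SQL sketch of the four repro steps listed above (the column types and 
> inserted values are illustrative; an MM table here means an insert-only 
> transactional table):
> {code}
> create table acid_mm_orc (a int, b string) stored as orc
>   tblproperties ('transactional'='true', 'transactional_properties'='insert_only');
> insert into acid_mm_orc values (1, 'one'), (2, 'two');
> alter table acid_mm_orc compact 'major';
> {code}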



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21280) Null pointer exception on running compaction against a MM table.

2019-02-20 Thread Aditya Shah (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Shah updated HIVE-21280:
---
 Assignee: Aditya Shah
Fix Version/s: 4.0.0
   Attachment: HIVE-21280.patch
   Status: Patch Available  (was: Open)

> Null pointer exception on running compaction against a MM table.
> 
>
> Key: HIVE-21280
> URL: https://issues.apache.org/jira/browse/HIVE-21280
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.1, 3.0.0
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21280.patch
>
>
> On running compaction on an MM table, I got a null pointer exception while getting 
> the HDFS session path. The error suggested to me that the session state was not 
> started for these queries. Even after making it start, it further fails in 
> running a Tez task for the insert overwrite on a temp table with the contents of 
> the original table. The cause for this is that the Tez session state is not able 
> to initialize, due to an IllegalArgumentException being thrown at the time of 
> setting up the caller context in the Tez task, because the caller id, which uses 
> the query id, is an empty string. 
> I do think the session state needs to be started and each of the queries running 
> for compaction (I'm also doubtful about the stats updater thread's queries) should 
> have a query id. Some details are as follows:
> Steps to reproduce:
> 1) Using beeline with HS2 and HMS
> 2) create an MM table
> 3) Insert a few values in the table
> 4) alter table mm_table compact 'major'; 
> Stack trace on HMS:
> {code:java}
> compactor.Worker: Caught exception while trying to compact 
> id:8,dbname:default,tableName:acid_mm_orc,partName:null,state:^@,type:MAJOR,properties:null,runAs:null,tooManyAborts:false,highestWriteId:0.
>  Marking failed to avoid repeated failures, java.io.IOException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run create 
> temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` int, `b` 
> string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH 
> SERDEPROPERTIES (
> 'serialization.format'='1')STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 
> 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base'
>  TBLPROPERTIES ('transactional'='false')
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:373)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:241)
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run 
> create temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` 
> int, `b` string) ROW FORMAT SERDE 
> 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH SERDEPROPERTIES (
> 'serialization.format'='1')STORED AS INPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 
> 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 
> 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base'
>  TBLPROPERTIES ('transactional'='false')
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:525)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:365)
> ... 2 more
> Caused by: java.lang.NullPointerException: Non-local session path expected to 
> be non-null
> at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:228)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:815)
> at org.apache.hadoop.hive.ql.Context.<init>(Context.java:309)
> at org.apache.hadoop.hive.ql.Context.<init>(Context.java:295)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:591)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1684)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1807)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1567)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1556)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:522)
> ... 3 more
> {code}
> cc: [~ekoifman] [~vgumashta] [~sershe]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773634#comment-16773634
 ] 

Hive QA commented on HIVE-21298:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959495/HIVE-21298.02.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15809 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16171/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16171/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16171/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959495 - PreCommit-HIVE-Build

> Move Hive Schema Tool classes to their own package to have cleaner structure
> -
>
> Key: HIVE-21298
> URL: https://issues.apache.org/jira/browse/HIVE-21298
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-21298.01.patch, HIVE-21298.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773619#comment-16773619
 ] 

Hive QA commented on HIVE-21298:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
4s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} beeline in master has 45 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 179 unchanged - 1 fixed = 179 total (was 180) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch beeline passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 180 unchanged - 1 fixed = 181 total (was 181) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} standalone-metastore_metastore-server generated 0 
new + 40 unchanged - 8 fixed = 40 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} beeline in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} hive-unit in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  
org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.getMetaStoreSchemaVersion(HiveSchemaHelper$MetaStoreConnectionInfo)
 may fail to clean up java.sql.ResultSet; the obligation to clean up the 
resource created at MetaStoreSchemaInfo.java:[line 233] is not discharged |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16171/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Ja

[jira] [Updated] (HIVE-21301) Show tables statement to include views and materialized views

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21301:
---
Attachment: HIVE-21301.patch

> Show tables statement to include views and materialized views
> -
>
> Key: HIVE-21301
> URL: https://issues.apache.org/jira/browse/HIVE-21301
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21301.patch
>
>
> HIVE-19974 introduced a backwards incompatible change, with the {{SHOW TABLES}} 
> statement showing only managed/external tables in the system.
> This issue will restore the old behavior, with {{SHOW TABLES}} showing all 
> queryable entities, including views and materialized views.
> Instead, to provide information about table types, a {{SHOW EXTENDED TABLES}} 
> statement is introduced, which includes an additional column with the table 
> type for each of the tables listed.
> In addition, the possibility to filter the show tables statement with a {{WHERE 
> `table_type` = 'ANY_TYPE'}} clause is introduced.
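A minimal usage sketch of the statements described above, assuming the syntax
lands exactly as written in the description:
{code:sql}
-- list every queryable entity (tables, views, materialized views)
SHOW TABLES;

-- same listing plus an extra column with the table type
SHOW EXTENDED TABLES;

-- filter on the type column; 'ANY_TYPE' is the placeholder used in the
-- description above, to be replaced with a concrete table type
SHOW EXTENDED TABLES WHERE `table_type` = 'ANY_TYPE';
{code}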



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-21301) Show tables statement to include views and materialized views

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21301 started by Jesus Camacho Rodriguez.
--
> Show tables statement to include views and materialized views
> -
>
> Key: HIVE-21301
> URL: https://issues.apache.org/jira/browse/HIVE-21301
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> HIVE-19974 introduced a backwards incompatible change, with the {{SHOW TABLES}} 
> statement showing only managed/external tables in the system.
> This issue will restore the old behavior, with {{SHOW TABLES}} showing all 
> queryable entities, including views and materialized views.
> Instead, to provide information about table types, a {{SHOW EXTENDED TABLES}} 
> statement is introduced, which includes an additional column with the table 
> type for each of the tables listed.
> In addition, the possibility to filter the show tables statement with a {{WHERE 
> `table_type` = 'ANY_TYPE'}} clause is introduced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21301) Show tables statement to include views and materialized views

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21301:
---
Status: Patch Available  (was: In Progress)

> Show tables statement to include views and materialized views
> -
>
> Key: HIVE-21301
> URL: https://issues.apache.org/jira/browse/HIVE-21301
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> HIVE-19974 introduced a backwards incompatible change, with the {{SHOW TABLES}} 
> statement showing only managed/external tables in the system.
> This issue will restore the old behavior, with {{SHOW TABLES}} showing all 
> queryable entities, including views and materialized views.
> Instead, to provide information about table types, a {{SHOW EXTENDED TABLES}} 
> statement is introduced, which includes an additional column with the table 
> type for each of the tables listed.
> In addition, the possibility to filter the show tables statement with a {{WHERE 
> `table_type` = 'ANY_TYPE'}} clause is introduced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21301) Show tables statement to include views and materialized views

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21301:
---
Description: 
HIVE-19974 introduced a backwards incompatible change, with the {{SHOW TABLES}} 
statement showing only managed/external tables in the system.
This issue will restore the old behavior, with {{SHOW TABLES}} showing all 
queryable entities, including views and materialized views.
Instead, to provide information about table types, a {{SHOW EXTENDED TABLES}} 
statement is introduced, which includes an additional column with the table 
type for each of the tables listed.
In addition, the possibility to filter the show tables statement with a {{WHERE 
`table_type` = 'ANY_TYPE'}} clause is introduced.

  was:
HIVE-19974 introduced a backwards incompatible change, with the {{SHOW TABLES}} 
statement showing only tables in the system.
This issue will restore the old behavior, with {{SHOW TABLES}} showing all 
queryable entities, including views and materialized views.
In addition, a {{SHOW EXTENDED TABLES}} statement is introduced, which includes 
an additional column with the table type for each of the tables listed.
Finally, the possibility to filter the show tables statement with a {{WHERE 
`table_type` = 'ANY_TYPE'}} clause is introduced.


> Show tables statement to include views and materialized views
> -
>
> Key: HIVE-21301
> URL: https://issues.apache.org/jira/browse/HIVE-21301
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> HIVE-19974 introduced a backwards incompatible change, with the {{SHOW TABLES}} 
> statement showing only managed/external tables in the system.
> This issue will restore the old behavior, with {{SHOW TABLES}} showing all 
> queryable entities, including views and materialized views.
> Instead, to provide information about table types, a {{SHOW EXTENDED TABLES}} 
> statement is introduced, which includes an additional column with the table 
> type for each of the tables listed.
> In addition, the possibility to filter the show tables statement with a {{WHERE 
> `table_type` = 'ANY_TYPE'}} clause is introduced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21301) Show tables statement to include views and materialized views

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-21301:
--


> Show tables statement to include views and materialized views
> -
>
> Key: HIVE-21301
> URL: https://issues.apache.org/jira/browse/HIVE-21301
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> HIVE-19974 introduced a backwards incompatible change, with the {{SHOW TABLES}} 
> statement showing only tables in the system.
> This issue will restore the old behavior, with {{SHOW TABLES}} showing all 
> queryable entities, including views and materialized views.
> In addition, a {{SHOW EXTENDED TABLES}} statement is introduced, which includes 
> an additional column with the table type for each of the tables listed.
> Finally, the possibility to filter the show tables statement with a {{WHERE 
> `table_type` = 'ANY_TYPE'}} clause is introduced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19974) Show tables statement includes views and materialized views

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19974:
---
Fix Version/s: 3.2.0
   4.0.0

> Show tables statement includes views and materialized views
> ---
>
> Key: HIVE-19974
> URL: https://issues.apache.org/jira/browse/HIVE-19974
> Project: Hive
>  Issue Type: Bug
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
>
> Probably it would be more logical to show only the tables, since there exist 
> 'show views' and 'show materialized views' statements.
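For reference, the dedicated listing statements the description refers to (both
already exist in Hive; shown here only as a usage sketch):
{code:sql}
-- list only views
SHOW VIEWS;

-- list only materialized views
SHOW MATERIALIZED VIEWS;
{code}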



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773569#comment-16773569
 ] 

Hive QA commented on HIVE-21279:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959478/HIVE-21279.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 15809 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_15] 
(batchId=93)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_16] 
(batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_18] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_25] 
(batchId=96)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[results_cache_diff_fs]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_2]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_capacity]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_lifetime]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_quoted_identifiers]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_temptable]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_transactional]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_with_masking]
 (batchId=178)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnMR (batchId=241)
org.apache.hive.hcatalog.streaming.TestStreaming.testStreamBucketingMatchesRegularBucketing
 (batchId=216)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16170/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16170/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16170/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959478 - PreCommit-HIVE-Build

> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch, HIVE-21279.2.patch, 
> HIVE-21279.3.patch, HIVE-21279.4.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches the result. This 
> is done to avoid fetching potentially partial/invalid files written by 
> failed/runaway tasks. This operation is expensive for cloud storage. It could 
> be avoided if FetchTask were passed the set of files to read from instead of 
> the whole directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773548#comment-16773548
 ] 

Hive QA commented on HIVE-21279:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
54s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 44 new + 787 unchanged - 2 
fixed = 831 total (was 789) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16170/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16170/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16170/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch, HIVE-21279.2.patch, 
> HIVE-21279.3.patch, HIVE-21279.4.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches the result. This 
> is done to avoid fetching potentially partial/invalid files written by 
> failed/runaway tasks. This operation is expensive for cloud storage. It could 
> be avoided if FetchTask were passed the set of files to read from instead of 
> the whole directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21167) Bucketing: Bucketing version 1 is incorrectly partitioning data

2019-02-20 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-21167:
--
Attachment: HIVE-21167.2.patch

> Bucketing: Bucketing version 1 is incorrectly partitioning data
> ---
>
> Key: HIVE-21167
> URL: https://issues.apache.org/jira/browse/HIVE-21167
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-21167.1.patch, HIVE-21167.2.patch
>
>
> Using murmur hash for bucketing columns was introduced in HIVE-18910, 
> following which {{'bucketing_version'='1'}} stands for the old behaviour 
> (where, for example, integer columns were partitioned based on mod values). 
> Looks like we have a bug in the old bucketing scheme now. I could repro it 
> when I modified the existing schema using an ALTER TABLE ADD COLUMNS statement 
> and added new data. Repro:
> {code}
> 0: jdbc:hive2://localhost:10010> create transactional table acid_ptn_bucket1 
> (a int, b int) partitioned by(ds string) clustered by (a) into 2 buckets 
> stored as ORC TBLPROPERTIES('bucketing_version'='1', 'transactional'='true', 
> 'transactional_properties'='default');
> No rows affected (0.418 seconds)
> 0: jdbc:hive2://localhost:10010> insert into acid_ptn_bucket1 partition (ds) 
> values(1,2,'today'),(1,3,'today'),(1,4,'yesterday'),(2,2,'yesterday'),(2,3,'today'),(2,4,'today');
> 6 rows affected (3.695 seconds)
> {code}
> Data from ORC file (data as expected):
> {code}
> /apps/hive/warehouse/acid_ptn_bucket1/ds=today/delta_001_001_/bucket_0
> {"operation": 0, "originalTransaction": 1, "bucket": 536870912, "rowId": 0, 
> "currentTransaction": 1, "row": {"a": 2, "b": 4}}
> {"operation": 0, "originalTransaction": 1, "bucket": 536870912, "rowId": 1, 
> "currentTransaction": 1, "row": {"a": 2, "b": 3}}
> /apps/hive/warehouse/acid_ptn_bucket1/ds=today/delta_001_001_/bucket_1
> {"operation": 0, "originalTransaction": 1, "bucket": 536936448, "rowId": 0, 
> "currentTransaction": 1, "row": {"a": 1, "b": 3}}
> {"operation": 0, "originalTransaction": 1, "bucket": 536936448, "rowId": 1, 
> "currentTransaction": 1, "row": {"a": 1, "b": 2}}
> {code}
> Modifying table schema and inserting new data:
> {code}
> 0: jdbc:hive2://localhost:10010> alter table acid_ptn_bucket1 add columns(c 
> int);
> No rows affected (0.541 seconds)
> 0: jdbc:hive2://localhost:10010> insert into acid_ptn_bucket1 partition (ds) 
> values(3,2,1000,'yesterday'),(3,3,1001,'today'),(3,4,1002,'yesterday'),(4,2,1003,'today'),
>  (4,3,1004,'yesterday'),(4,4,1005,'today');
> 6 rows affected (3.699 seconds)
> {code}
> Data from ORC file (wrong partitioning):
> {code}
> /apps/hive/warehouse/acid_ptn_bucket1/ds=today/delta_003_003_/bucket_0
> {"operation": 0, "originalTransaction": 3, "bucket": 536870912, "rowId": 0, 
> "currentTransaction": 3, "row": {"a": 3, "b": 3, "c": 1001}}
> /apps/hive/warehouse/acid_ptn_bucket1/ds=today/delta_003_003_/bucket_1
> {"operation": 0, "originalTransaction": 3, "bucket": 536936448, "rowId": 0, 
> "currentTransaction": 3, "row": {"a": 4, "b": 4, "c": 1005}}
> {"operation": 0, "originalTransaction": 3, "bucket": 536936448, "rowId": 1, 
> "currentTransaction": 3, "row": {"a": 4, "b": 2, "c": 1003}}
> {code}
> As seen above, the expected behaviour is that new data with column 'a' being 
> 3 should go to bucket1 and column 'a' being 4 should go to bucket0, but the 
> partitioning is wrong.
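A small sketch of the expected version 1 assignment for the integer bucketing
column, assuming two buckets and the mod-based scheme described above ({{pmod}}
is used here purely to illustrate the expectation, not as the engine's actual
code path):
{code:sql}
-- expected bucket under bucketing_version=1 with 2 buckets:
-- a = 3 -> 3 mod 2 = 1 -> bucket_1,  a = 4 -> 4 mod 2 = 0 -> bucket_0
SELECT a, pmod(a, 2) AS expected_bucket
FROM acid_ptn_bucket1;
{code}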



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21296) Dropping varchar partition throws exception

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773516#comment-16773516
 ] 

Hive QA commented on HIVE-21296:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959474/HIVE-21296.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15809 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
 (batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16169/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16169/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16169/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959474 - PreCommit-HIVE-Build

> Dropping varchar partition throws exception
> --
>
> Key: HIVE-21296
> URL: https://issues.apache.org/jira/browse/HIVE-21296
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21296.1.patch, HIVE-21296.2.patch
>
>
> Drop partition fails if the partition column is varchar. For example:
> {code:java}
> create external table BS_TAB_0_211494(c_date_SAD_29630 date) PARTITIONED BY 
> (part_varchar_37229 varchar(56)) STORED AS orc;
> INSERT INTO BS_TAB_0_211494 values('4740-04-04','BrNTRsv3c');
> ALTER TABLE BS_TAB_0_211494 DROP PARTITION 
> (part_varchar_37229='BrNTRsv3c');{code}
> Exception:
> {code}
> 2019-02-19T22:12:55,843  WARN [HiveServer2-Handler-Pool: Thread-42] 
> thrift.ThriftCLIService: Error executing statement: 
> org.apache.hive.service.cli.HiveSQLException: Error while compiling 
> statement: FAILED: SemanticException [Error 10006]: Partition not found 
> (part_varchar_37229 = 'BrNTRsv3c')
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:356)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:206)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:269)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:268) 
> ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:576)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:561)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_202]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_202]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_202]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_202]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_202]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  ~[hadoop-common-3.1.0.jar:?]
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy43.executeStatementAsync(Unknown Source) ~[?:?]
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIServ

[jira] [Commented] (HIVE-21295) StorageHandler shall convert date to string using Hive convention

2019-02-20 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773491#comment-16773491
 ] 

Daniel Dai commented on HIVE-21295:
---

Yes, it's absolutely better. New patch attached.

> StorageHandler shall convert date to string using Hive convention
> -
>
> Key: HIVE-21295
> URL: https://issues.apache.org/jira/browse/HIVE-21295
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21295.1.patch, HIVE-21295.2.patch
>
>
> If we have a date datatype in MySQL and a string datatype defined in Hive, 
> JdbcStorageHandler will translate the date to a string with the format 
> yyyy-MM-dd HH:mm:ss. However, the Hive convention is yyyy-MM-dd, so we shall 
> follow the Hive convention. E.g.:
> mysql: CREATE TABLE test ("datekey" DATE);
> hive: CREATE TABLE test (datekey string) STORED BY 
> 'org.apache.hive.storage.jdbc.JdbcStorageHandler' TBLPROPERTIES 
> (.."hive.sql.table" = "test"..);
> Then in hive, do: select datekey from test;
> We get: 1999-03-24 00:00:00
> But should be: 1999-03-24



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21295) StorageHandler shall convert date to string using Hive convention

2019-02-20 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-21295:
--
Attachment: HIVE-21295.2.patch

> StorageHandler shall convert date to string using Hive convention
> -
>
> Key: HIVE-21295
> URL: https://issues.apache.org/jira/browse/HIVE-21295
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21295.1.patch, HIVE-21295.2.patch
>
>
> If we have a date datatype in MySQL and a string datatype defined in Hive, 
> JdbcStorageHandler will translate the date to a string with the format 
> yyyy-MM-dd HH:mm:ss. However, the Hive convention is yyyy-MM-dd, so we shall 
> follow the Hive convention. E.g.:
> mysql: CREATE TABLE test ("datekey" DATE);
> hive: CREATE TABLE test (datekey string) STORED BY 
> 'org.apache.hive.storage.jdbc.JdbcStorageHandler' TBLPROPERTIES 
> (.."hive.sql.table" = "test"..);
> Then in hive, do: select datekey from test;
> We get: 1999-03-24 00:00:00
> But should be: 1999-03-24



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21296) Dropping varchar partition throws exception

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773484#comment-16773484
 ] 

Hive QA commented on HIVE-21296:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
50s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16169/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16169/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Dropping varchar partition throws exception
> --
>
> Key: HIVE-21296
> URL: https://issues.apache.org/jira/browse/HIVE-21296
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21296.1.patch, HIVE-21296.2.patch
>
>
> Drop partition fails if the partition column is varchar. For example:
> {code:java}
> create external table BS_TAB_0_211494(c_date_SAD_29630 date) PARTITIONED BY 
> (part_varchar_37229 varchar(56)) STORED AS orc;
> INSERT INTO BS_TAB_0_211494 values('4740-04-04','BrNTRsv3c');
> ALTER TABLE BS_TAB_0_211494 DROP PARTITION 
> (part_varchar_37229='BrNTRsv3c');{code}
> Exception:
> {code}
> 2019-02-19T22:12:55,843  WARN [HiveServer2-Handler-Pool: Thread-42] 
> thrift.ThriftCLIService: Error executing statement: 
> org.apache.hive.service.cli.HiveSQLException: Error while compiling 
> statement: FAILED: SemanticException [Error 10006]: Partition not found 
> (part_varchar_37229 = 'BrNTRsv3c')
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:356)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:206)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:269)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:268) 
> ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   

[jira] [Updated] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-20376:

Status: Patch Available  (was: Open)

> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch, HIVE-20376.2.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser to handle the 
> following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]
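A hedged illustration of the kind of literal involved, assuming (not confirmed
by this issue) that casting a string to the timestamp-with-local-time-zone type
goes through TimestampTZUtil#parse:
{code:sql}
-- ISO-8601 style literal from the description above
SELECT CAST('2013-08-31T01:02:33Z' AS timestamp with local time zone);
{code}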



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-20376:

Status: Open  (was: Patch Available)

> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch, HIVE-20376.2.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser to handle the 
> following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-20376:

Attachment: HIVE-20376.2.patch

> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch, HIVE-20376.2.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser to handle the 
> following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Attachment: HIVE-21292.05.patch

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch, HIVE-21292.04.patch, HIVE-21292.05.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the usage of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Status: Open  (was: Patch Available)

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch, HIVE-21292.04.patch, HIVE-21292.05.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the usage of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Status: Patch Available  (was: Open)

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch, HIVE-21292.04.patch, HIVE-21292.05.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the usage of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773450#comment-16773450
 ] 

Hive QA commented on HIVE-21292:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959460/HIVE-21292.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 15809 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16168/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16168/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16168/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959460 - PreCommit-HIVE-Build

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch, HIVE-21292.04.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the usage of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773428#comment-16773428
 ] 

Hive QA commented on HIVE-21292:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} ql: The patch generated 0 new + 507 unchanged - 25 
fixed = 507 total (was 532) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hcatalog/core: The patch generated 0 new + 40 
unchanged - 2 fixed = 40 total (was 42) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} ql generated 0 new + 2259 unchanged - 1 fixed = 2259 
total (was 2260) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16168/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql hcatalog/core itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16168/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Imp

[jira] [Commented] (HIVE-21224) Upgrade tests JUnit3 to JUnit4

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773400#comment-16773400
 ] 

Hive QA commented on HIVE-21224:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959453/HIVE-21224.8.patch

{color:green}SUCCESS:{color} +1 due to 153 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 15805 tests 
executed
*Failed tests:*
{noformat}
TestGenericUDFConcat - did not produce a TEST-*.xml file (likely timed out) 
(batchId=287)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=254)
TestMTQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16167/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16167/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16167/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959453 - PreCommit-HIVE-Build

> Upgrade tests JUnit3 to JUnit4
> --
>
> Key: HIVE-21224
> URL: https://issues.apache.org/jira/browse/HIVE-21224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Bruno Pusztahazi
>Assignee: Bruno Pusztahazi
>Priority: Major
> Attachments: HIVE-21224.1.patch, HIVE-21224.2.patch, 
> HIVE-21224.3.patch, HIVE-21224.4.patch, HIVE-21224.5.patch, 
> HIVE-21224.6.patch, HIVE-21224.7.patch, HIVE-21224.8.patch
>
>
> Old JUnit3 tests should be upgraded to JUnit4



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21224) Upgrade tests JUnit3 to JUnit4

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773401#comment-16773401
 ] 

Hive QA commented on HIVE-21224:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} cli in master has 13 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} contrib in master has 10 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} druid-handler in master has 3 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} hcatalog/webhcat/java-client in master has 3 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} hcatalog/webhcat/svr in master has 96 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} common: The patch generated 0 new + 9 unchanged - 3 
fixed = 9 total (was 12) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} serde: The patch generated 270 new + 265 unchanged - 
43 fixed = 535 total (was 308) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 1 new + 916 unchanged - 94 
fixed = 917 total (was 1010) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} service: The patch generated 0 new + 17 unchanged - 
6 fixed = 17 total (was 23) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} cli: The patch generated 0 new + 9 unchanged - 1 
fixed = 9 total (was 10) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch contrib passed checkstyle {color} |

[jira] [Updated] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21298:
--
Attachment: HIVE-21298.02.patch

> Move Hive Schema Tool classes to their own package to have cleaner structure
> -
>
> Key: HIVE-21298
> URL: https://issues.apache.org/jira/browse/HIVE-21298
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-21298.01.patch, HIVE-21298.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-9004) Reset doesn't work for the default empty value entry

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773355#comment-16773355
 ] 

Hive QA commented on HIVE-9004:
---



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12687952/HIVE-9004.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16166/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16166/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16166/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-20 20:04:36.101
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16166/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-20 20:04:36.105
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 89b9f12 HIVE-21278: Fix ambiguity in grammar warnings at 
compilation time (Jesus Camacho Rodriguez, reviewed by Vineet Garg)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 89b9f12 HIVE-21278: Fix ambiguity in grammar warnings at 
compilation time (Jesus Camacho Rodriguez, reviewed by Vineet Garg)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-20 20:04:36.866
+ rm -rf ../yetus_PreCommit-HIVE-Build-16166
+ mkdir ../yetus_PreCommit-HIVE-Build-16166
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16166
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16166/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java:50
error: repository lacks the necessary blob to fall back on 3-way merge.
error: ql/src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java: 
patch does not apply
error: src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java: does 
not exist in index
error: java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java: does not 
exist in index
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-16166
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12687952 - PreCommit-HIVE-Build

> Reset doesn't work for the default empty value entry
> 
>
> Key: HIVE-9004
> URL: https://issues.apache.org/jira/browse/HIVE-9004
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Cheng Hao
>Assignee: Cheng Hao
>Priority: Major
> Attachments: HIVE-9004.patch
>
>
> To illustrate that:
> In hive cli:
> hive> set hive.table.parameters.default;
> hive.table.parameters.default is undefined
> hive> set hive.table.parameters.default=key1=value1;
> hive> reset;
> hive> set hive.table.parameters.default;
> hive.table.parameters.default=key1=value1
> I think we expect the last output to be "hive.table.parameters.default is 
> undefined"
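> A minimal sketch of the expected reset semantics (illustrative only, not the 
> actual ResetProcessor code): for a key whose default is empty/undefined, reset 
> should remove the session override instead of leaving the user-set value in place.
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> public class ResetSketch {
>   // if the key has no (non-empty) default, drop the override entirely,
>   // otherwise restore the documented default value
>   static void reset(Map<String, String> sessionConf,
>                     Map<String, String> defaults, String key) {
>     String def = defaults.get(key);
>     if (def == null || def.isEmpty()) {
>       sessionConf.remove(key);
>     } else {
>       sessionConf.put(key, def);
>     }
>   }
>
>   public static void main(String[] args) {
>     Map<String, String> defaults = new HashMap<>(); // no default for the key
>     Map<String, String> session = new HashMap<>();
>     session.put("hive.table.parameters.default", "key1=value1");
>     reset(session, defaults, "hive.table.parameters.default");
>     // prints "undefined", i.e. the behaviour expected after reset
>     System.out.println(
>         session.getOrDefault("hive.table.parameters.default", "undefined"));
>   }
> }
> {code}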



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21298:
--
Status: Open  (was: Patch Available)

> Move Hive Schema Tool classes to their own package to have cleaner structure
> -
>
> Key: HIVE-21298
> URL: https://issues.apache.org/jira/browse/HIVE-21298
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-21298.01.patch, HIVE-21298.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21298:
--
Status: Patch Available  (was: Open)

> Move Hive Schema Tool classes to their own package to have cleaner structure
> -
>
> Key: HIVE-21298
> URL: https://issues.apache.org/jira/browse/HIVE-21298
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-21298.01.patch, HIVE-21298.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773352#comment-16773352
 ] 

Hive QA commented on HIVE-21298:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959451/HIVE-21298.01.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15809 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16165/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16165/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16165/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959451 - PreCommit-HIVE-Build

> Move Hive Schema Tool classes to their own package to have cleaner structure
> -
>
> Key: HIVE-21298
> URL: https://issues.apache.org/jira/browse/HIVE-21298
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-21298.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773305#comment-16773305
 ] 

Hive QA commented on HIVE-21298:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
11s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} beeline in master has 45 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 8 new + 179 unchanged - 1 fixed = 187 total (was 180) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} itests/hive-unit: The patch generated 8 new + 88 
unchanged - 0 fixed = 96 total (was 88) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 180 unchanged - 1 fixed = 181 total (was 181) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} standalone-metastore_metastore-server generated 8 new 
+ 40 unchanged - 8 fixed = 48 total (was 48) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  
org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.getMetaStoreSchemaVersion(HiveSchemaHelper$MetaStoreConnectionInfo)
 may fail to clean up java.sql.ResultSet  Obligation to clean up resource 
created at MetaStoreSchemaInfo.java:up java.sql.ResultSet  Obligation to clean 
up resource created at MetaStoreSchemaInfo.java:[line 233] is not discharged |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16165/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16165/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16165/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16165/yetus/new-findbugs-standalone-metastore_metastore-

[jira] [Commented] (HIVE-20079) Populate more accurate rawDataSize for parquet format

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773273#comment-16773273
 ] 

Hive QA commented on HIVE-20079:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959444/HIVE-20079.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 15790 tests 
executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=191)

[infer_bucket_sort_num_buckets.q,gen_udf_example_add10.q,spark_explainuser_1.q,spark_use_ts_stats_for_mapjoin.q,orc_merge6.q,orc_merge5.q,bucketmapjoin6.q,spark_opt_shuffle_serde.q,temp_table_external.q,spark_dynamic_partition_pruning_6.q,dynamic_rdd_cache.q,auto_sortmerge_join_16.q,vector_outer_join3.q,spark_dynamic_partition_pruning_7.q,schemeAuthority.q,parallel_orderby.q,vector_outer_join1.q,load_hdfs_file_with_space_in_the_name.q,spark_dynamic_partition_pruning_recursive_mapjoin.q,spark_dynamic_partition_pruning_mapjoin_only.q]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_stats] 
(batchId=48)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16164/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16164/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16164/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959444 - PreCommit-HIVE-Build

> Populate more accurate rawDataSize for parquet format
> -
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch, 
> HIVE-20079.3.patch
>
>
> Run the following queries and you will see that the rawDataSize for the table 
> is incorrectly reported as 4 (that is, the number of fields). We need to 
> populate a correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
>   COLUMN_STATS_ACCURATE   true
>   numFiles1
>   numRows 2
>   rawDataSize 4
>   totalSize   373
>   transient_lastDdlTime   1530660523
> {noformat}
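> As a rough illustration of the direction described above (a sketch only; the 
> per-type sizes and helper below are assumptions, not the actual patch), the 
> rawDataSize could be derived from per-row field sizes rather than from the 
> column count:
> {code:java}
> import java.util.Arrays;
> import java.util.List;
>
> public class RawDataSizeSketch {
>   // assumed per-field in-memory sizes; Hive's real estimation is more detailed
>   static long sizeOf(Object value) {
>     if (value == null) return 0L;
>     if (value instanceof Integer) return 4L;
>     if (value instanceof Long) return 8L;
>     if (value instanceof String) return ((String) value).length();
>     return 8L; // fallback assumption for other types
>   }
>
>   static long rawDataSize(List<Object[]> rows) {
>     long total = 0L;
>     for (Object[] row : rows) {
>       for (Object field : row) {
>         total += sizeOf(field);
>       }
>     }
>     return total;
>   }
>
>   public static void main(String[] args) {
>     // the two rows inserted in the reproduction above
>     List<Object[]> rows = Arrays.asList(
>         new Object[] {0, "this is string 0"},
>         new Object[] {1, "string 1"});
>     System.out.println(rawDataSize(rows)); // 32, instead of the reported 4
>   }
> }
> {code}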



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-20 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21279:
---
Attachment: HIVE-21279.4.patch

> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch, HIVE-21279.2.patch, 
> HIVE-21279.3.patch, HIVE-21279.4.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches the result. This 
> is done to avoid fetching potentially partial/invalid files written by 
> failed/runaway tasks. This operation is expensive on cloud storage. It could 
> be avoided if FetchTask were passed the set of files to read instead of a 
> whole directory.
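> A minimal sketch of that idea (names are illustrative, not Hive's actual API): 
> the writer records only the files it fully committed, and the fetch side is 
> handed that explicit list instead of a directory to rename and scan.
> {code:java}
> import java.nio.file.Path;
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
>
> public class FetchManifestSketch {
>   private final List<Path> committedFiles = new ArrayList<>();
>
>   // called by the file sink once a task's output file is fully written
>   public synchronized void commitFile(Path file) {
>     committedFiles.add(file);
>   }
>
>   // handed to the fetch side instead of a whole directory, so partial
>   // files left behind by failed/runaway tasks are never read
>   public synchronized List<Path> filesToFetch() {
>     return Collections.unmodifiableList(new ArrayList<>(committedFiles));
>   }
> }
> {code}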



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-20 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21279:
---
Status: Patch Available  (was: Open)

> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch, HIVE-21279.2.patch, 
> HIVE-21279.3.patch, HIVE-21279.4.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches the result. This 
> is done to avoid fetching potentially partial/invalid files written by 
> failed/runaway tasks. This operation is expensive on cloud storage. It could 
> be avoided if FetchTask were passed the set of files to read instead of a 
> whole directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-20 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21279:
---
Status: Open  (was: Patch Available)

> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch, HIVE-21279.2.patch, 
> HIVE-21279.3.patch, HIVE-21279.4.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches the result. This 
> is done to avoid fetching potentially partial/invalid files written by 
> failed/runaway tasks. This operation is expensive on cloud storage. It could 
> be avoided if FetchTask were passed the set of files to read instead of a 
> whole directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20079) Populate more accurate rawDataSize for parquet format

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773250#comment-16773250
 ] 

Hive QA commented on HIVE-20079:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 3 new + 13 unchanged - 6 fixed 
= 16 total (was 19) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16164/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16164/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16164/yetus/whitespace-eol.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16164/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Populate more accurate rawDataSize for parquet format
> -
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch, 
> HIVE-20079.3.patch
>
>
> Run the following queries and you will see that the rawDataSize for the table 
> is incorrectly reported as 4 (that is, the number of fields). We need to 
> populate a correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
>   COLUMN_STATS_ACCURATE   true
>   numFiles1
>   numRows 2
>   rawDataSize 4
>   totalSize   373
>   transient_lastDdlTime   1530660523
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21296) Dropping varchar partition throw exception

2019-02-20 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-21296:
--
Attachment: HIVE-21296.2.patch

> Dropping varchar partition throw exception
> --
>
> Key: HIVE-21296
> URL: https://issues.apache.org/jira/browse/HIVE-21296
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21296.1.patch, HIVE-21296.2.patch
>
>
> Drop partition fails if the partition column is varchar. For example:
> {code:java}
> create external table BS_TAB_0_211494(c_date_SAD_29630 date) PARTITIONED BY 
> (part_varchar_37229 varchar(56)) STORED AS orc;
> INSERT INTO BS_TAB_0_211494 values('4740-04-04','BrNTRsv3c');
> ALTER TABLE BS_TAB_0_211494 DROP PARTITION 
> (part_varchar_37229='BrNTRsv3c');{code}
> Exception:
> {code}
> 2019-02-19T22:12:55,843  WARN [HiveServer2-Handler-Pool: Thread-42] 
> thrift.ThriftCLIService: Error executing statement: 
> org.apache.hive.service.cli.HiveSQLException: Error while compiling 
> statement: FAILED: SemanticException [Error 10006]: Partition not found 
> (part_varchar_37229 = 'BrNTRsv3c')
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:356)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:206)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:269)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:268) 
> ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:576)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:561)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_202]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_202]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_202]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_202]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_202]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>  ~[hadoop-common-3.1.0.jar:?]
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy43.executeStatementAsync(Unknown Source) ~[?:?]
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:568)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1557)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1542)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>  ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_202]
>   at 
> jav

[jira] [Commented] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773233#comment-16773233
 ] 

Hive QA commented on HIVE-20376:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959432/HIVE-20376.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15810 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=109)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16163/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16163/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16163/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959432 - PreCommit-HIVE-Build

> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser to handle the 
> following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]
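> A minimal sketch of one way to accept the ISO form alongside the usual 
> "yyyy-MM-dd HH:mm:ss" form (illustrative only, not the actual TimestampTZUtil 
> change):
> {code:java}
> import java.time.ZoneId;
> import java.time.ZonedDateTime;
> import java.time.format.DateTimeFormatter;
> import java.time.format.DateTimeParseException;
>
> public class IsoTimestampSketch {
>   private static final DateTimeFormatter HIVE_FORMAT =
>       DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
>
>   public static ZonedDateTime parse(String text, ZoneId defaultZone) {
>     try {
>       // handles "2013-08-31T01:02:33Z" and other ISO offset/zone forms
>       return ZonedDateTime.parse(text, DateTimeFormatter.ISO_DATE_TIME);
>     } catch (DateTimeParseException e) {
>       // fall back to the plain local form, interpreted in the default zone
>       return java.time.LocalDateTime.parse(text, HIVE_FORMAT).atZone(defaultZone);
>     }
>   }
>
>   public static void main(String[] args) {
>     System.out.println(parse("2013-08-31T01:02:33Z", ZoneId.of("UTC")));
>     System.out.println(parse("2013-08-31 01:02:33", ZoneId.of("UTC")));
>   }
> }
> {code}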



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773180#comment-16773180
 ] 

Hive QA commented on HIVE-20376:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} common: The patch generated 11 new + 6 unchanged - 0 
fixed = 17 total (was 6) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 15 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16163/dev-support/hive-personality.sh
 |
| git revision | master / 89b9f12 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16163/yetus/diff-checkstyle-common.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16163/yetus/whitespace-tabs.txt
 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16163/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser to handle the 
> following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Attachment: HIVE-21292.04.patch

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch, HIVE-21292.04.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the number of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the use of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.
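> A small structural sketch of that layout (class names here are illustrative, 
> not the exact ones introduced by the patch): each operation gets an immutable 
> descriptor and its own operation class, and the task merely dispatches.
> {code:java}
> interface DdlOperation<T> {
>   int execute(T desc); // 0 on success, mirroring Hive task return codes
> }
>
> final class CreateDatabaseDesc { // immutable request object
>   final String name;
>   final String comment;
>   CreateDatabaseDesc(String name, String comment) {
>     this.name = name;
>     this.comment = comment;
>   }
> }
>
> final class CreateDatabaseOperation implements DdlOperation<CreateDatabaseDesc> {
>   @Override
>   public int execute(CreateDatabaseDesc desc) {
>     // the metastore call would go here; printing stands in for it
>     System.out.println("CREATE DATABASE " + desc.name);
>     return 0;
>   }
> }
>
> public class DdlSketch {
>   public static void main(String[] args) {
>     // the "agnostic" task only knows it has an operation to run
>     DdlOperation<CreateDatabaseDesc> op = new CreateDatabaseOperation();
>     int rc = op.execute(new CreateDatabaseDesc("demo_db", "sketch"));
>     System.out.println("return code: " + rc);
>   }
> }
> {code}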



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Status: Patch Available  (was: Open)

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch, HIVE-21292.04.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the number of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the use of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Status: Open  (was: Patch Available)

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch, HIVE-21292.04.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the number of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the use of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773167#comment-16773167
 ] 

Hive QA commented on HIVE-21292:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959427/HIVE-21292.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15809 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16162/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16162/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16162/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959427 - PreCommit-HIVE-Build

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the number of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the use of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21297) Replace all occurrences of new Long, Boolean, Double etc with the corresponding .valueOf

2019-02-20 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773171#comment-16773171
 ] 

slim bouguerra commented on HIVE-21297:
---

[~isuller] any chance you can report on the improvement of this patch 
before/after?

> Replace all occurrences of new Long, Boolean, Double etc with the 
> corresponding .valueOf
> ---
>
> Key: HIVE-21297
> URL: https://issues.apache.org/jira/browse/HIVE-21297
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Trivial
> Fix For: 4.0.0
>
> Attachments: HIVE-21297.01.patch
>
>
> Calling new Long(...), new Boolean(...), etc. always creates a new object, 
> while Long.valueOf(...), Boolean.valueOf(...) can return a cached instance 
> (and actually does in most if not all JVMs), thus reducing GC overhead. I 
> already had two similar tickets (HIVE-21228, HIVE-21199) - this one finishes 
> the job.
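> A tiny illustration of the difference (the constructor forms are deprecated 
> since Java 9):
> {code:java}
> public class ValueOfSketch {
>   public static void main(String[] args) {
>     Long constructed1 = new Long(42L); // always a fresh allocation
>     Long constructed2 = new Long(42L);
>     Long cached1 = Long.valueOf(42L);  // small values come from a shared cache
>     Long cached2 = Long.valueOf(42L);
>
>     System.out.println(constructed1 == constructed2); // false: two distinct objects
>     System.out.println(cached1 == cached2);           // true for values in [-128, 127]
>     System.out.println(cached1.equals(constructed1)); // value equality still holds
>   }
> }
> {code}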



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21295) StorageHandler shall convert date to string using Hive convention

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773133#comment-16773133
 ] 

Jesus Camacho Rodriguez commented on HIVE-21295:


[~daijy], does it make sense to use {{DateUtils.getDateFormat()}}? Other than 
that, LGTM, +1 (pending tests)

> StorageHandler shall convert date to string using Hive convention
> -
>
> Key: HIVE-21295
> URL: https://issues.apache.org/jira/browse/HIVE-21295
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21295.1.patch
>
>
> If we have a date datatype in MySQL and a string datatype defined in Hive, 
> JdbcStorageHandler will translate the date to a string with the format 
> yyyy-MM-dd HH:mm:ss. However, the Hive convention is yyyy-MM-dd, and we shall 
> follow the Hive convention. Eg:
> mysql: CREATE TABLE test ("datekey" DATE);
> hive: CREATE TABLE test (datekey string) STORED BY 
> 'org.apache.hive.storage.jdbc.JdbcStorageHandler' TBLPROPERTIES 
> (.."hive.sql.table" = "test"..);
> Then in hive, do: select datekey from test;
> We get: 1999-03-24 00:00:00
> But should be: 1999-03-24
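> A minimal sketch of the conversion being asked for (illustrative only, not the 
> actual JdbcStorageHandler change):
> {code:java}
> import java.sql.Date;
> import java.text.SimpleDateFormat;
>
> public class DateToStringSketch {
>   public static void main(String[] args) {
>     Date datekey = Date.valueOf("1999-03-24");
>
>     String timestampStyle = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(datekey);
>     String hiveStyle = new SimpleDateFormat("yyyy-MM-dd").format(datekey);
>
>     System.out.println(timestampStyle); // 1999-03-24 00:00:00 (what the issue reports)
>     System.out.println(hiveStyle);      // 1999-03-24          (Hive's date convention)
>   }
> }
> {code}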



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773130#comment-16773130
 ] 

Hive QA commented on HIVE-21292:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
15s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} ql: The patch generated 0 new + 507 unchanged - 25 
fixed = 507 total (was 532) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} hcatalog/core: The patch generated 2 new + 40 
unchanged - 2 fixed = 42 total (was 42) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} ql generated 0 new + 2261 unchanged - 1 fixed = 2261 
total (was 2262) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16162/dev-support/hive-personality.sh
 |
| git revision | master / e5a35e7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16162/yetus/diff-checkstyle-hcatalog_core.txt
 |
| modules | C: ql hcatalog/core itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16162/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL

[jira] [Updated] (HIVE-21278) Fix ambiguity in grammar warnings at compilation time

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21278:
---
Fix Version/s: 3.2.0

> Fix ambiguity in grammar warnings at compilation time
> -
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21278.01.patch, HIVE-21278.02.patch, 
> HIVE-21278.patch
>
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21278) Fix ambiguity in grammar warnings at compilation time

2019-02-20 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21278:
---
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks for reviewing [~vgarg]!

> Fix ambiguity in grammar warnings at compilation time
> -
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21278.01.patch, HIVE-21278.02.patch, 
> HIVE-21278.patch
>
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21224) Upgrade tests JUnit3 to JUnit4

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-21224:

Attachment: HIVE-21224.8.patch

> Upgrade tests JUnit3 to JUnit4
> --
>
> Key: HIVE-21224
> URL: https://issues.apache.org/jira/browse/HIVE-21224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Bruno Pusztahazi
>Assignee: Bruno Pusztahazi
>Priority: Major
> Attachments: HIVE-21224.1.patch, HIVE-21224.2.patch, 
> HIVE-21224.3.patch, HIVE-21224.4.patch, HIVE-21224.5.patch, 
> HIVE-21224.6.patch, HIVE-21224.7.patch, HIVE-21224.8.patch
>
>
> Old JUnit3 tests should be upgraded to JUnit4
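As a rough illustration of the conversion (the class and test names below are hypothetical, not taken from the attached patches), a JUnit3-style test rewritten for JUnit4 looks roughly like this:

{code:java}
// Before (JUnit3): the class extended junit.framework.TestCase and test
// methods were discovered by the "test" name prefix:
//
//   public class TestFooUtils extends TestCase {
//     public void testAddition() { assertEquals(4, 2 + 2); }
//   }
//
// After (JUnit4): no TestCase superclass; methods are marked with @Test and
// assertions come from org.junit.Assert.
import org.junit.Assert;
import org.junit.Test;

public class TestFooUtils {

  @Test
  public void testAddition() {
    Assert.assertEquals(4, 2 + 2);
  }
}
{code}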



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21224) Upgrade tests JUnit3 to JUnit4

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-21224:

Status: Patch Available  (was: Open)

> Upgrade tests JUnit3 to JUnit4
> --
>
> Key: HIVE-21224
> URL: https://issues.apache.org/jira/browse/HIVE-21224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Bruno Pusztahazi
>Assignee: Bruno Pusztahazi
>Priority: Major
> Attachments: HIVE-21224.1.patch, HIVE-21224.2.patch, 
> HIVE-21224.3.patch, HIVE-21224.4.patch, HIVE-21224.5.patch, 
> HIVE-21224.6.patch, HIVE-21224.7.patch, HIVE-21224.8.patch
>
>
> Old JUnit3 tests should be upgraded to JUnit4



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21224) Upgrade tests JUnit3 to JUnit4

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-21224:

Status: Open  (was: Patch Available)

> Upgrade tests JUnit3 to JUnit4
> --
>
> Key: HIVE-21224
> URL: https://issues.apache.org/jira/browse/HIVE-21224
> Project: Hive
>  Issue Type: Improvement
>Reporter: Bruno Pusztahazi
>Assignee: Bruno Pusztahazi
>Priority: Major
> Attachments: HIVE-21224.1.patch, HIVE-21224.2.patch, 
> HIVE-21224.3.patch, HIVE-21224.4.patch, HIVE-21224.5.patch, 
> HIVE-21224.6.patch, HIVE-21224.7.patch
>
>
> Old JUnit3 tests should be upgraded to JUnit4



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20079) Populate more accurate rawDataSize for parquet format

2019-02-20 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773098#comment-16773098
 ] 

Sahil Takiar commented on HIVE-20079:
-

How does ORC handle this? Is there a fundamental reason we can't mimic the same 
thing they are doing? Getting things to be consistent with how ORC handles this 
makes more sense to me than implementing two different approaches for ORC vs. 
Parquet and ending up with an inconsistent definition of {{rawDataSize}} 
depending on the file format. Sure, this patch is probably a better estimation, 
so I see no reason not to proceed with it.

> Populate more accurate rawDataSize for parquet format
> -
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch, 
> HIVE-20079.3.patch
>
>
> Run the following queries and you will see that the raw data size for the 
> table is incorrectly reported as 4 (the number of fields). We need to 
> populate the correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
>   COLUMN_STATS_ACCURATE   true
>   numFiles1
>   numRows 2
>   rawDataSize 4
>   totalSize   373
>   transient_lastDdlTime   1530660523
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16716) Clean up javadoc from errors in module ql

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773097#comment-16773097
 ] 

Hive QA commented on HIVE-16716:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959410/HIVE-16716.6.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 15809 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testComplexQuery (batchId=261)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testDataTypes (batchId=261)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testEscapedStrings (batchId=261)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=261)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testLlapInputFormatEndToEnd 
(batchId=261)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testNonAsciiStrings (batchId=261)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16161/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16161/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16161/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959410 - PreCommit-HIVE-Build

> Clean up javadoc from errors in module ql
> -
>
> Key: HIVE-16716
> URL: https://issues.apache.org/jira/browse/HIVE-16716
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Janos Gub
>Assignee: Robert Kucsora
>Priority: Major
> Attachments: HIVE-16716-v2.patch, HIVE-16716.2.patch, 
> HIVE-16716.3.patch, HIVE-16716.4.patch, HIVE-16716.5.patch, 
> HIVE-16716.6.patch, HIVE-16716.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-9004) Reset doesn't work for the default empty value entry

2019-02-20 Thread Shawn Weeks (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773084#comment-16773084
 ] 

Shawn Weeks commented on HIVE-9004:
---

Any reason why this never got done? Currently working on an issue where the reset 
command doesn't reset Tez defaults like tez.runtime.io.sort.mb if you change 
them at the session level.

> Reset doesn't work for the default empty value entry
> 
>
> Key: HIVE-9004
> URL: https://issues.apache.org/jira/browse/HIVE-9004
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Cheng Hao
>Assignee: Cheng Hao
>Priority: Major
> Attachments: HIVE-9004.patch
>
>
> To illustrate that:
> In hive cli:
> hive> set hive.table.parameters.default;
> hive.table.parameters.default is undefined
> hive> set hive.table.parameters.default=key1=value1;
> hive> reset;
> hive> set hive.table.parameters.default;
> hive.table.parameters.default=key1=value1
> I think we expect the last output to be "hive.table.parameters.default is 
> undefined"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21298:
--
Status: Patch Available  (was: Open)

> Move Hive Schema Tool classes to their own package to have  cleaner structure
> -
>
> Key: HIVE-21298
> URL: https://issues.apache.org/jira/browse/HIVE-21298
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-21298.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21298:
--
Attachment: HIVE-21298.01.patch

> Move Hive Schema Tool classes to their own package to have  cleaner structure
> -
>
> Key: HIVE-21298
> URL: https://issues.apache.org/jira/browse/HIVE-21298
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-21298.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16716) Clean up javadoc from errors in module ql

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773070#comment-16773070
 ] 

Hive QA commented on HIVE-16716:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
14s{color} | {color:red} ql: The patch generated 5 new + 2523 unchanged - 7 
fixed = 2528 total (was 2530) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
57s{color} | {color:red} ql generated 42 new + 58 unchanged - 42 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16161/dev-support/hive-personality.sh
 |
| git revision | master / e5a35e7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16161/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16161/yetus/whitespace-eol.txt
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16161/yetus/diff-javadoc-javadoc-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16161/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Clean up javadoc from errors in module ql
> -
>
> Key: HIVE-16716
> URL: https://issues.apache.org/jira/browse/HIVE-16716
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Janos Gub
>Assignee: Robert Kucsora
>Priority: Major
> Attachments: HIVE-16716-v2.patch, HIVE-16716.2.patch, 
> HIVE-16716.3.patch, HIVE-16716.4.patch, HIVE-16716.5.patch, 
> HIVE-16716.6.patch, HIVE-16716.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-20079) Populate more accurate rawDataSize for parquet format

2019-02-20 Thread Antal Sinkovits (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773048#comment-16773048
 ] 

Antal Sinkovits edited comment on HIVE-20079 at 2/20/19 2:35 PM:
-

[~stakiar] I'm afraid that's not an option, as it will cause a discrepancy 
between the calculations.
Check my comment here:
https://issues.apache.org/jira/browse/HIVE-20523

From the Parquet docs:
TotalByteSize: Total byte size of all the uncompressed column data in this row 
group

There might be some overhead when it's loaded into the hash table, but it's 
still a better estimate than the current one, which estimates 1 byte per 
column. And more importantly, the estimation is consistent.
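A minimal sketch of summing that per-row-group TotalByteSize, assuming the parquet-mr footer API (illustrative only, not the attached patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

public class ParquetRawSizeEstimate {

  /** Sums the uncompressed column data size over all row groups of one file. */
  public static long estimateRawDataSize(Configuration conf, Path file) throws IOException {
    ParquetMetadata footer = ParquetFileReader.readFooter(conf, file);
    long total = 0L;
    for (BlockMetaData rowGroup : footer.getBlocks()) {
      total += rowGroup.getTotalByteSize();
    }
    return total;
  }
}
{code}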


was (Author: asinkovits):
[~stakiar] I'm afraid that's not an option, as it will cause a discrepancy 
between the calculations.
Check my comment here:
https://issues.apache.org/jira/browse/HIVE-20523



> Populate more accurate rawDataSize for parquet format
> -
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch, 
> HIVE-20079.3.patch
>
>
> Run the following queries and you will see that the raw data size for the 
> table is incorrectly reported as 4 (the number of fields). We need to 
> populate the correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
>   COLUMN_STATS_ACCURATE   true
>   numFiles1
>   numRows 2
>   rawDataSize 4
>   totalSize   373
>   transient_lastDdlTime   1530660523
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21298) Move Hive Schema Tool classes to their own package to have cleaner structure

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely reassigned HIVE-21298:
-


> Move Hive Schema Tool classes to their own package to have  cleaner structure
> -
>
> Key: HIVE-21298
> URL: https://issues.apache.org/jira/browse/HIVE-21298
> Project: Hive
>  Issue Type: Improvement
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21224) Upgrade tests JUnit3 to JUnit4

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773039#comment-16773039
 ] 

Hive QA commented on HIVE-21224:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959411/HIVE-21224.7.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16160/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16160/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16160/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Closeable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/AutoCloseable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Flushable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(javax/xml/bind/annotation/XmlRootElement.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/commons/commons-exec/1.1/commons-exec-1.1.jar(org/apache/commons/exec/ExecuteException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/security/PrivilegedExceptionAction.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/ExecutionException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/TimeoutException.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar(org/apache/hadoop/fs/FileSystem.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-4.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShimsSecure.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-4.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/ShimLoader.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-4.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShims.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-4.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShims$WebHCatJTShim.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar(org/apache/hadoop/util/ToolRunner.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/CancellationException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/RejectedExecutionException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/SynchronousQueue.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/ThreadPoolExecutor.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/TimeUnit.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/Future.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar(org/apache/hadoop/conf/Configured.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar(org/apache/hadoop/io/NullWritable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar(org/apache/hadoop/io/Text.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-mapreduce-client-core/3.1.0/hadoop-mapreduce-client-core-3.1.0.jar(org/apache/hadoop/mapred/JobClient.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-mapreduce-client-core/3.1.0/hadoop-mapreduce-client-core-3.1.0.jar(org/apache/hadoop/mapred/JobConf.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-mapreduce-client-core/3.1.0/hadoop-mapreduce-client-core-3.1.0.jar(org/apache/hadoop/mapreduce/Job.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-mapreduce-client-core/3.1.0/hadoop-mapreduce-client-core-3.1.0.jar(org/apache/hadoop/mapreduce/JobID.class)]]
[loading 
ZipFileIndexFileObject[/dat

[jira] [Commented] (HIVE-20079) Populate more accurate rawDataSize for parquet format

2019-02-20 Thread Antal Sinkovits (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773048#comment-16773048
 ] 

Antal Sinkovits commented on HIVE-20079:


[~stakiar] I'm afraid that's not an option, as it will cause a discrepancy 
between the calculations.
Check my comment here:
https://issues.apache.org/jira/browse/HIVE-20523



> Populate more accurate rawDataSize for parquet format
> -
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch, 
> HIVE-20079.3.patch
>
>
> Run the following queries and you will see that the raw data size for the 
> table is incorrectly reported as 4 (the number of fields). We need to 
> populate the correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
>   COLUMN_STATS_ACCURATE   true
>   numFiles1
>   numRows 2
>   rawDataSize 4
>   totalSize   373
>   transient_lastDdlTime   1530660523
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20079) Populate more accurate rawDataSize for parquet format

2019-02-20 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773046#comment-16773046
 ] 

Sahil Takiar commented on HIVE-20079:
-

FYI, I don't think {{block.getTotalByteSize}} provides the size of the data when 
loaded into memory. From talking to a few Parquet folks, no such method to get 
the raw data size exists. If we want to implement this patch, we will have to do 
something similar to what ORC does - 
https://github.com/apache/orc/blob/master/java/core/src/java/org/apache/orc/impl/ReaderImpl.java#L601
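For reference, the ORC reader exposes this estimate directly; a minimal sketch of reading it (illustrative only, assuming the org.apache.orc reader API):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;

public class OrcRawSize {

  /** ORC derives an in-memory size estimate from the per-column statistics in its footer. */
  public static long rawDataSize(Configuration conf, Path file) throws IOException {
    Reader reader = OrcFile.createReader(file, OrcFile.readerOptions(conf));
    return reader.getRawDataSize();
  }
}
{code}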

> Populate more accurate rawDataSize for parquet format
> -
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch, 
> HIVE-20079.3.patch
>
>
> Run the following queries and you will see that the raw data size for the 
> table is incorrectly reported as 4 (the number of fields). We need to 
> populate the correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
>   COLUMN_STATS_ACCURATE   true
>   numFiles1
>   numRows 2
>   rawDataSize 4
>   totalSize   373
>   transient_lastDdlTime   1530660523
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20079) Populate more accurate rawDataSize for parquet format

2019-02-20 Thread Antal Sinkovits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Sinkovits updated HIVE-20079:
---
Attachment: HIVE-20079.3.patch

> Populate more accurate rawDataSize for parquet format
> -
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.0.0
>Reporter: Aihua Xu
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch, 
> HIVE-20079.3.patch
>
>
> Run the following queries and you will see that the raw data size for the 
> table is incorrectly reported as 4 (the number of fields). We need to 
> populate the correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
>   COLUMN_STATS_ACCURATE   true
>   numFiles1
>   numRows 2
>   rawDataSize 4
>   totalSize   373
>   transient_lastDdlTime   1530660523
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21197) Hive Replication can add duplicate data during migration to a target with hive.strict.managed.tables enabled

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773035#comment-16773035
 ] 

Hive QA commented on HIVE-21197:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959405/HIVE-21197.01.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15813 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testDisableCompactionDuringReplLoad
 (batchId=242)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16159/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16159/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16159/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959405 - PreCommit-HIVE-Build

> Hive Replication can add duplicate data during migration to a target with 
> hive.strict.managed.tables enabled
> 
>
> Key: HIVE-21197
> URL: https://issues.apache.org/jira/browse/HIVE-21197
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-21197.01.patch
>
>
> During the bootstrap phase it may happen that the files copied to the target 
> were created by events which are not part of the bootstrap. This is because 
> bootstrap first gets the last event id and then the file list. If some events 
> are added during this period, bootstrap will also include the files created 
> by those events. The same files will be copied again during the first 
> incremental replication just after the bootstrap. In the normal scenario the 
> duplicate copy does not cause any issue, as Hive allows the use of the target 
> database only after the first incremental. But in the case of migration, the 
> files at source and target are copied to different locations (based on the 
> write id at the target), and thus this may lead to duplicate data at the 
> target. This can be avoided by adding a check at load time for duplicate 
> files. This check needs to be done only for the first incremental, and the 
> search can be done in the bootstrap directory (with write id 1): if the file 
> is already present, just ignore the copy.
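A very rough sketch of such a load-time check, assuming a plain FileSystem lookup against the bootstrap (write id 1) directory; the class and method names below are hypothetical, not the attached patch:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplDuplicateFileCheck {

  /**
   * During the first incremental load only: skip copying a file if a file with
   * the same name is already present under the bootstrap directory (write id 1).
   */
  public static boolean shouldSkipCopy(Configuration conf, Path bootstrapWriteIdDir,
      String fileName) throws IOException {
    FileSystem fs = bootstrapWriteIdDir.getFileSystem(conf);
    return fs.exists(new Path(bootstrapWriteIdDir, fileName));
  }
}
{code}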



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21197) Hive Replication can add duplicate data during migration to a target with hive.strict.managed.tables enabled

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773012#comment-16773012
 ] 

Hive QA commented on HIVE-21197:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
14s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
0s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 4 new + 310 unchanged - 0 
fixed = 314 total (was 310) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
20s{color} | {color:red} itests/hive-unit: The patch generated 81 new + 272 
unchanged - 0 fixed = 353 total (was 272) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16159/dev-support/hive-personality.sh
 |
| git revision | master / e5a35e7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16159/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16159/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: standalone-metastore/metastore-server ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16159/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive Replication can add duplicate data during migration to a target with 
> hive.strict.managed.tables enabled
> 
>
> Key: HIVE-21197
> URL: https://issues.apache.org/jira/browse/HIVE-21197
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Prio

[jira] [Updated] (HIVE-21297) Replace all occurences of new Long, Boolean, Double etc with the corresponding .valueOf

2019-02-20 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller updated HIVE-21297:
---
Attachment: HIVE-21297.01.patch

> Replace all occurences of new Long, Boolean, Double etc with the 
> corresponding .valueOf
> ---
>
> Key: HIVE-21297
> URL: https://issues.apache.org/jira/browse/HIVE-21297
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Trivial
> Fix For: 4.0.0
>
> Attachments: HIVE-21297.01.patch
>
>
> Creating objects with new Long(...), new Boolean(...), etc. always allocates 
> a new object, while Long.valueOf(...), Boolean.valueOf(...) can return cached 
> instances (and actually do in most if not all JVMs), thus reducing GC 
> overhead. I already had two similar tickets (HIVE-21228, HIVE-21199) - this 
> one finishes the job.
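A quick standalone illustration of the difference (not a call site from the patch):

{code:java}
public class BoxingExample {

  public static void main(String[] args) {
    Long a = new Long(127L);       // always allocates a fresh object
    Long b = new Long(127L);
    System.out.println(a == b);    // false - two distinct objects

    Long c = Long.valueOf(127L);   // may return a cached instance
    Long d = Long.valueOf(127L);
    System.out.println(c == d);    // true - values in -128..127 are cached

    Boolean e = Boolean.valueOf(true);
    Boolean f = Boolean.valueOf(true);
    System.out.println(e == f);    // true - always the canonical Boolean.TRUE
  }
}
{code}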



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-21297) Replace all occurences of new Long, Boolean, Double etc with the corresponding .valueOf

2019-02-20 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21297 started by Ivan Suller.
--
> Replace all occurences of new Long, Boolean, Double etc with the 
> corresponding .valueOf
> ---
>
> Key: HIVE-21297
> URL: https://issues.apache.org/jira/browse/HIVE-21297
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Trivial
> Fix For: 4.0.0
>
> Attachments: HIVE-21297.01.patch
>
>
> Creating objects with new Long(...), new Boolean(...), etc. always allocates 
> a new object, while Long.valueOf(...), Boolean.valueOf(...) can return cached 
> instances (and actually do in most if not all JVMs), thus reducing GC 
> overhead. I already had two similar tickets (HIVE-21228, HIVE-21199) - this 
> one finishes the job.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-20376:

Attachment: HIVE-20376.1.patch

> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser so it can 
> handle the following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]
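A small sketch of parsing that ISO form with java.time, illustrating the desired behaviour rather than the TimestampTZUtil change itself:

{code:java}
import java.time.Instant;
import java.time.ZonedDateTime;

public class IsoTimestampParse {

  public static void main(String[] args) {
    // Both accept the "T" separator and the trailing "Z" (UTC offset) that the
    // current Hive parser rejects.
    Instant instant = Instant.parse("2013-08-31T01:02:33Z");
    ZonedDateTime zoned = ZonedDateTime.parse("2013-08-31T01:02:33Z");
    System.out.println(instant + " / " + zoned);
  }
}
{code}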



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-19580) Hive 2.3.2 with ORC files & stored on S3 are case sensitive on EMR

2019-02-20 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HIVE-19580.
---
Resolution: Not A Problem

OK, closing.

Trying hard to think of the best way to classify this, e.g. cannot reproduce, 
invalid, etc. Closing as "not a problem" as it's not an ASF issue.

> Hive 2.3.2 with ORC files & stored on S3 are case sensitive on EMR
> --
>
> Key: HIVE-19580
> URL: https://issues.apache.org/jira/browse/HIVE-19580
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.2
> Environment: EMR s3:// connector
> Spark 2.3 but also true for lower versions
> Hive 2.3.2
>Reporter: Arthur Baudry
>Priority: Major
> Fix For: 2.3.2
>
>
> Original file is csv:
> COL1,COL2
>  1,2
> ORC files are created with Spark 2.3:
> scala> val df = spark.read.option("header","true").csv("/user/hadoop/file")
> scala> df.printSchema
>  root
> |– COL1: string (nullable = true)|
> |– COL2: string (nullable = true)|
> scala> df.write.orc("s3://bucket/prefix")
> In Hive:
> hive> CREATE EXTERNAL TABLE test_orc(COL1 STRING, COL2 STRING) STORED AS ORC 
> LOCATION ("s3://bucket/prefix");
> hive> SELECT * FROM test_orc;
>  OK
>  NULL NULL
> *Every field is null. However, if the fields are generated using lower case 
> in the Spark schemas, then everything works.*
> The reason why I'm raising this bug is that we have customers using Hive 
> 2.3.2 to read files we generate through Spark and all our code base is 
> addressing fields using upper case while this is incompatible with their Hive 
> instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19580) Hive 2.3.2 with ORC files & stored on S3 are case sensitive on EMR

2019-02-20 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HIVE-19580:
--
Summary: Hive 2.3.2 with ORC files & stored on S3 are case sensitive on EMR 
 (was: Hive 2.3.2 with ORC files stored on S3 are case sensitive)

> Hive 2.3.2 with ORC files & stored on S3 are case sensitive on EMR
> --
>
> Key: HIVE-19580
> URL: https://issues.apache.org/jira/browse/HIVE-19580
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.2
> Environment: EMR s3:// connector
> Spark 2.3 but also true for lower versions
> Hive 2.3.2
>Reporter: Arthur Baudry
>Priority: Major
> Fix For: 2.3.2
>
>
> Original file is csv:
> COL1,COL2
>  1,2
> ORC files are created with Spark 2.3:
> scala> val df = spark.read.option("header","true").csv("/user/hadoop/file")
> scala> df.printSchema
>  root
> |– COL1: string (nullable = true)|
> |– COL2: string (nullable = true)|
> scala> df.write.orc("s3://bucket/prefix")
> In Hive:
> hive> CREATE EXTERNAL TABLE test_orc(COL1 STRING, COL2 STRING) STORED AS ORC 
> LOCATION ("s3://bucket/prefix");
> hive> SELECT * FROM test_orc;
>  OK
>  NULL NULL
> *Every field is null. However, if the fields are generated using lower case 
> in the Spark schemas, then everything works.*
> The reason why I'm raising this bug is that we have customers using Hive 
> 2.3.2 to read files we generate through Spark and all our code base is 
> addressing fields using upper case while this is incompatible with their Hive 
> instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-20376:

Status: Patch Available  (was: Open)

> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20376.1.patch
>
>
> It would be nice to add ISO formats to the timezone utils parser so it can 
> handle the following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21000) Upgrade thrift to at least 0.10.0

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772957#comment-16772957
 ] 

Hive QA commented on HIVE-21000:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
25s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} accumulo-handler in master has 21 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
6s{color} | {color:red} The patch has 286 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} metastore-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} serde generated 0 new + 196 unchanged - 1 fixed = 
196 total (was 197) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} accumulo-handler in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16158/dev-support/hive-personality.sh
 |
| git revision | master / a8bd1eb |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16158/yetus/whitespace-eol.txt
 |
| modules | C: standalone-metastore standalone-metastore/metastore-common 
service-rpc serde ql accumulo-handler . itests/qtest-accumulo U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16158/yetus.txt |
| Powered by | Apache Yetus http:/

[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Status: Patch Available  (was: Open)

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code 
> base, the new ones in the new package are called DDLTask2 and DDLWork2, thus 
> avoiding the use of fully qualified class names where both the old and the 
> new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask, 
> and move them under the new package. Also create the new internal framework.
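To make the target shape concrete, a rough sketch with hypothetical names (the actual classes are in the attached patches):

{code:java}
// Hypothetical shape of the new framework under org.apache.hadoop.hive.ql.exec.ddl:
// one immutable *Desc per operation, one small operation class that executes it,
// and a thin dispatcher task that is agnostic to the concrete operations.

/** Immutable request object describing a single DDL operation. */
class CreateDatabaseDesc {
  private final String databaseName;
  private final String comment;

  CreateDatabaseDesc(String databaseName, String comment) {
    this.databaseName = databaseName;
    this.comment = comment;
  }

  String getDatabaseName() { return databaseName; }
  String getComment() { return comment; }
}

/** One subclass per DDL operation; the dispatcher just runs the right one. */
abstract class DDLOperation<T> {
  protected final T desc;

  DDLOperation(T desc) {
    this.desc = desc;
  }

  abstract int execute() throws Exception;
}

class CreateDatabaseOperation extends DDLOperation<CreateDatabaseDesc> {
  CreateDatabaseOperation(CreateDatabaseDesc desc) {
    super(desc);
  }

  @Override
  int execute() {
    // A real implementation would call the metastore client here.
    System.out.println("CREATE DATABASE " + desc.getDatabaseName());
    return 0;
  }
}
{code}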



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20376) Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"

2019-02-20 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi reassigned HIVE-20376:
---

Assignee: Bruno Pusztahazi

> Timestamp Timezone parser doesn't handle ISO formats "2013-08-31T01:02:33Z"
> 
>
> Key: HIVE-20376
> URL: https://issues.apache.org/jira/browse/HIVE-20376
> Project: Hive
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Bruno Pusztahazi
>Priority: Minor
>
> It would be nice to add ISO formats to the timezone utils parser so it can 
> handle the following: "2013-08-31T01:02:33Z"
> org.apache.hadoop.hive.common.type.TimestampTZUtil#parse(java.lang.String)
> CC [~jcamachorodriguez]/ [~ashutoshc]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15355) Concurrency issues during parallel moveFile due to HDFSUtils.setFullFileStatus

2019-02-20 Thread Dhirendra Khanka (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772944#comment-16772944
 ] 

Dhirendra Khanka commented on HIVE-15355:
-

Hi, we are having this issue in Hive 1.2.1.2.5 as well. Don't know if the patch 
is going to be applicable to our case; we'd appreciate any possible workarounds.

Caused by: java.util.ConcurrentModificationException
    at java.util.ArrayList$SubList.checkForComodification(ArrayList.java:1231)
    at java.util.ArrayList$SubList.removeRange(ArrayList.java:1062)
    at java.util.AbstractList.clear(AbstractList.java:234)
    at 
com.google.common.collect.Iterables.removeIfFromRandomAccessList(Iterables.java:209)
    at com.google.common.collect.Iterables.removeIf(Iterables.java:180)
    at 
org.apache.hadoop.hive.shims.Hadoop23Shims.removeBaseAclEntries(Hadoop23Shims.java:855)
    at 
org.apache.hadoop.hive.shims.Hadoop23Shims.setFullFileStatus(Hadoop23Shims.java:751)
    at org.apache.hadoop.hive.ql.metadata.Hive$2.call(Hive.java:2625)
    at org.apache.hadoop.hive.ql.metadata.Hive$2.call(Hive.java:2611)
    ... 4 more
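A minimal standalone reproduction of that failure mode, together with the usual remedy of mutating a private copy rather than the shared list (illustrative only; the actual fix is in the attached patches):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedListRace {

  public static void main(String[] args) throws Exception {
    // Stands in for the cached ACL entry list that setFullFileStatus mutates
    // from the parallel moveFile threads.
    List<String> sharedAcls = new ArrayList<>();
    for (int i = 0; i < 10_000; i++) {
      sharedAcls.add("entry-" + i);
    }

    // Unsafe: several threads removing from the same ArrayList in parallel can
    // throw ConcurrentModificationException or ArrayIndexOutOfBoundsException.
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (int t = 0; t < 4; t++) {
      pool.execute(() -> sharedAcls.removeIf(s -> s.endsWith("0")));
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);

    // The usual remedy is to stop sharing the mutable list between threads,
    // e.g. give each thread its own copy of the status/ACL data so removeIf
    // operates on a list no one else touches.
    List<String> privateCopy = new ArrayList<>(sharedAcls);
    privateCopy.removeIf(s -> s.endsWith("0"));
  }
}
{code}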

 

 

> Concurrency issues during parallel moveFile due to HDFSUtils.setFullFileStatus
> --
>
> Key: HIVE-15355
> URL: https://issues.apache.org/jira/browse/HIVE-15355
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Fix For: 2.2.0
>
> Attachments: HIVE-15355.01.patch, HIVE-15355.02.patch
>
>
> It is possible to run into concurrency issues during multi-threaded moveFile 
> issued when processing queries like {{INSERT OVERWRITE TABLE ... SELECT ..}} 
> when there are multiple files in the staging directory which is a 
> subdirectory of the target directory. The issue is hard to reproduce but 
> following stacktrace is one such example:
> {noformat}
> INFO  : Loading data to table 
> functional_text_gzip.alltypesaggmultifilesnopart from 
> hdfs://localhost:20500/test-warehouse/alltypesaggmultifilesnopart_text_gzip/.hive-staging_hive_2016-12-01_19-58-21_712_8968735301422943318-1/-ext-1
> ERROR : Failed with exception java.lang.ArrayIndexOutOfBoundsException
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ArrayIndexOutOfBoundsException
> at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2858)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3124)
> at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1701)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:313)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1976)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1689)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1421)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1205)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1200)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:237)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:88)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:293)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
> Getting log thread is interrupted, since query is done!
> at 
> org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:306)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at java.util.ArrayList.removeRange(ArrayList.java:616)
> at java.util.ArrayList$SubList.removeRange(ArrayList.java:1021)
> at java.util.AbstractList.clear(AbstractList.java:234)
> at 
> com.google.common.collect.Iterables.removeIfFromRandomAccessList(Iterables.java:213)
> at com.google.common.collect.Iterables.removeIf(Iterables.java:184)
> at 
> org.apac

[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Attachment: HIVE-21292.03.patch

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, with a field for each DDL operation it supports. The 
> goal is to refactor these so that everything is split into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the number of classes under a package stays manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now let's ignore the issue that some operations handled by 
> DDLTask are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the 
> code base, the new ones in the new package are called DDLTask2 and DDLWork2, 
> avoiding the use of fully qualified class names where both the old and 
> the new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask 
> and move them under the new package. Also create the new internal framework.
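A minimal sketch of the structure described above, assuming one immutable desc (request) class and one small operation class per DDL statement under a database sub-package; the names below are illustrative guesses, not the actual classes introduced by the patches:

{noformat}
// Illustrative sketch only - package, class, and method names are assumptions,
// not the contents of the HIVE-21292 patches.
package org.apache.hadoop.hive.ql.exec.ddl.database;

// One immutable "desc" (request) per operation.
public final class CreateDatabaseDesc {
  private final String databaseName;
  private final String comment;

  public CreateDatabaseDesc(String databaseName, String comment) {
    this.databaseName = databaseName;
    this.comment = comment;
  }

  public String getDatabaseName() { return databaseName; }
  public String getComment() { return comment; }
}

// One small operation class per DDL statement. A generic DDLTask2 would only
// dispatch to the operation matching the desc it was handed, staying agnostic
// of what the operation actually does.
class CreateDatabaseOperation {
  private final CreateDatabaseDesc desc;

  CreateDatabaseOperation(CreateDatabaseDesc desc) {
    this.desc = desc;
  }

  int execute() {
    // ... call the metastore client to create the database described by desc ...
    return 0;
  }
}
{noformat}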



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21292) Break up DDLTask 1 - extract Database related operations

2019-02-20 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21292:
--
Status: Open  (was: Patch Available)

> Break up DDLTask 1 - extract Database related operations
> 
>
> Key: HIVE-21292
> URL: https://issues.apache.org/jira/browse/HIVE-21292
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21292.01.patch, HIVE-21292.02.patch, 
> HIVE-21292.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, with a field for each DDL operation it supports. The 
> goal is to refactor these so that everything is split into more 
> manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the number of classes under a package stays manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * for now let's ignore the issue that some operations handled by 
> DDLTask are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the 
> code base, the new ones in the new package are called DDLTask2 and DDLWork2, 
> avoiding the use of fully qualified class names where both the old and 
> the new classes are in use.
> Step #1: extract all the database related operations from the old DDLTask 
> and move them under the new package. Also create the new internal framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21000) Upgrade thrift to at least 0.10.0

2019-02-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772925#comment-16772925
 ] 

Hive QA commented on HIVE-21000:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959400/HIVE-21000.07.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 15799 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=267)
org.apache.hadoop.hive.metastore.client.TestGetPartitions.testGetPartitionWithAuthInfoNullDbName[Remote]
 (batchId=222)
org.apache.hadoop.hive.metastore.client.TestGetPartitions.testGetPartitionWithAuthInfoNullTblName[Remote]
 (batchId=222)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionNamesByValuesNullDbName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionNamesByValuesNullTblName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionNamesNullDbName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionNamesNullTblName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionSpecsNullTblName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionValuesEmptySchema[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionsAllNullDbName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionsAllNullTblName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionsByFilterNullDbName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionsByFilterNullTblName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionsWithAuthByValuesNullDbName[Remote]
 (batchId=220)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testListPartitionsWithAuthByValuesNullTblName[Remote]
 (batchId=220)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16158/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16158/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16158/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12959400 - PreCommit-HIVE-Build

> Upgrade thrift to at least 0.10.0
> -
>
> Key: HIVE-21000
> URL: https://issues.apache.org/jira/browse/HIVE-21000
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-21000.01.patch, HIVE-21000.02.patch, 
> HIVE-21000.03.patch, HIVE-21000.04.patch, HIVE-21000.05.patch, 
> HIVE-21000.06.patch, HIVE-21000.07.patch, sampler_before.png
>
>
> I was looking into some compile profiles for tables with lots of columns, and 
> it turned out that [thrift 0.9.3 is allocating a 
> List|https://github.com/apache/hive/blob/8e30b5e029570407d8a1db67d322a95db705750e/standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/FieldSchema.java#L348]
>  during every hashCode calculation. Luckily THRIFT-2877 improves on 
> that, so I propose to upgrade to at least 0.10.0.
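For context, a simplified stand-in for the pattern being linked to above -- the per-call List allocation in 0.9.3-era generated hashCode(), versus an allocation-free combination in the spirit of THRIFT-2877 -- is sketched below; this is illustrative only and not the actual generated FieldSchema code:

{noformat}
import java.util.ArrayList;
import java.util.List;

// Illustrative only - a simplified stand-in, not the real generated FieldSchema.
class FieldSchemaLike {
  String name;
  String type;
  String comment;

  // 0.9.3-era pattern: build a fresh list of field values on every call and
  // hash the list. Harmless once, costly when millions of schema objects are
  // hashed during query compilation.
  @Override
  public int hashCode() {
    List<Object> list = new ArrayList<Object>();
    list.add(name);
    list.add(type);
    list.add(comment);
    return list.hashCode();
  }

  // Allocation-free alternative in the spirit of THRIFT-2877: combine the
  // fields' hash codes directly.
  int hashCodeNoAlloc() {
    int h = 8191;
    h = h * 8191 + (name == null ? 0 : name.hashCode());
    h = h * 8191 + (type == null ? 0 : type.hashCode());
    h = h * 8191 + (comment == null ? 0 : comment.hashCode());
    return h;
  }
}
{noformat}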



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21283) Create Synonym mid for substr, position for locate

2019-02-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21283?focusedWorklogId=201232&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-201232
 ]

ASF GitHub Bot logged work on HIVE-21283:
-

Author: ASF GitHub Bot
Created on: 20/Feb/19 11:19
Start Date: 20/Feb/19 11:19
Worklog Time Spent: 10m 
  Work Description: rmsmani commented on issue #540: HIVE-21283 Synonyms 
for the existing functions
URL: https://github.com/apache/hive/pull/540#issuecomment-465533619
 
 
   Hi @pvary 
   Kindly review and merge to master
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 201232)
Time Spent: 20m  (was: 10m)

> Create Synonym mid for  substr, position for  locate
> 
>
> Key: HIVE-21283
> URL: https://issues.apache.org/jira/browse/HIVE-21283
> Project: Hive
>  Issue Type: New Feature
>Reporter: Mani M
>Assignee: Mani M
>Priority: Minor
>  Labels: UDF, pull-request-available, todoc4.0
> Fix For: 4.0.0
>
> Attachments: HIVE.21283.2.PATCH, HIVE.21283.PATCH
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Create new synonyms for the existing functions:
>  
> mid for substr
> position for locate 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21217) Optimize range calculation for PTF

2019-02-20 Thread Adam Szita (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772863#comment-16772863
 ] 

Adam Szita commented on HIVE-21217:
---

Committed to master. Thanks for the thorough review, Peter!

> Optimize range calculation for PTF
> --
>
> Key: HIVE-21217
> URL: https://issues.apache.org/jira/browse/HIVE-21217
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21217.0.patch, HIVE-21217.1.patch, 
> HIVE-21217.2.patch, HIVE-21217.3.patch, HIVE-21217.4.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> During window function execution Hive has to iterate over the rows neighbouring 
> the current row to find the beginning and end of the proper range (on which 
> the aggregation will be executed).
> When we're using range-based windows and have many rows with a certain key 
> value, this can take a lot of time (e.g. a partition of 80M rows in which we 
> have 2 ranges of 40M rows according to the order-by column: within these 40M-row 
> sets we're doing 40M x 40M/2 steps, which is n^2 time complexity).
> I propose to introduce a cache that keeps track of already calculated range 
> ends so they can be reused in future scans.
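A minimal sketch of the kind of cache described above, assuming a map from the order-by key to the already computed range end; the real patch operates on Hive's PTF row containers and boundary definitions, so the names and types here are illustrative only:

{noformat}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only - not the actual HIVE-21217 implementation.
class RangeEndCache {
  // order-by key value -> index of the first row past the range for that value
  private final Map<Object, Integer> endByKey = new HashMap<>();

  /** Returns the range end for the given key, scanning forward only on a miss. */
  int rangeEnd(Object orderByKey, int startIdx, List<Object> orderByColumn) {
    Integer cached = endByKey.get(orderByKey);
    if (cached != null) {
      return cached;              // O(1) for every later row sharing this key
    }
    int end = startIdx;
    while (end < orderByColumn.size() && orderByKey.equals(orderByColumn.get(end))) {
      end++;                      // one linear scan per distinct key value
    }
    endByKey.put(orderByKey, end);
    return end;
  }
}
{noformat}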



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21284) StatsWork should use footer scan for Parquet

2019-02-20 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21284:
--
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

Thanks for the patch [~asinkovits] and [~kgyrtkirk] for the review!

> StatsWork should use footer scan for Parquet
> 
>
> Key: HIVE-21284
> URL: https://issues.apache.org/jira/browse/HIVE-21284
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Antal Sinkovits
>Assignee: Antal Sinkovits
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21284.01.patch, HIVE-21284.02.patch, 
> HIVE-21284.03.patch.txt, HIVE-21284.04.patch, HIVE-21284.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

