[jira] [Commented] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908782#comment-16908782
 ] 

Hive QA commented on HIVE-22107:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 1 new + 21 unchanged - 0 fixed 
= 22 total (was 21) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18356/dev-support/hive-personality.sh
 |
| git revision | master / bd42f23 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18356/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18356/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch, 
> HIVE-22107.3.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22068) Return the last event id dumped as repl status to avoid notification event missing error.

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22068?focusedWorklogId=296104&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296104
 ]

ASF GitHub Bot logged work on HIVE-22068:
-

Author: ASF GitHub Bot
Created on: 16/Aug/19 06:32
Start Date: 16/Aug/19 06:32
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #742: HIVE-22068 : 
Add more logging to notification cleaner and replication to track events
URL: https://github.com/apache/hive/pull/742#discussion_r314595417
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -522,6 +525,25 @@ private int executeIncrementalLoad(DriverContext 
driverContext) {
   // bootstrap of tables if exist.
   if (builder.hasMoreWork() || work.getPathsToCopyIterator().hasNext() || 
work.hasBootstrapLoadTasks()) {
 DAGTraversal.traverse(childTasks, new 
AddDependencyToLeaves(TaskFactory.get(work, conf)));
+  } else if (work.dbNameToLoadIn != null) {
 
 Review comment:
  I think work.dbNameToLoadIn will be null if you don't specify the name in the 
REPL LOAD command. In that case, we should get the name from DumpMetadata to 
set the last repl ID.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296104)
Time Spent: 20m  (was: 10m)

> Return the last event id dumped as repl status to avoid notification event 
> missing error.
> -
>
> Key: HIVE-22068
> URL: https://issues.apache.org/jira/browse/HIVE-22068
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22068.01.patch, HIVE-22068.02.patch, 
> HIVE-22068.03.patch, HIVE-22068.04.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In repl load, update the status of the target database to the last event dumped, 
> so that repl status returns that and the next incremental dump can specify it as 
> the event from which to start. Without that, repl status might return an old event, 
> which might cause older events to be dumped again and/or a notification event 
> missing error if the older events have been cleaned by the cleaner.
> While at it
>  * Add more logging to DB notification listener cleaner thread
>  ** The time when it considered cleaning, the interval and time before which 
> events were cleared, the min and max id at that time
>  ** how many events were cleared
>  ** min and max id after the cleaning.
>  * In REPL::START document the starting event, end event if specified and the 
> maximum number of events, if specified.
>  *



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22068) Return the last event id dumped as repl status to avoid notification event missing error.

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22068?focusedWorklogId=296105&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296105
 ]

ASF GitHub Bot logged work on HIVE-22068:
-

Author: ASF GitHub Bot
Created on: 16/Aug/19 06:32
Start Date: 16/Aug/19 06:32
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #742: HIVE-22068 : 
Add more logging to notification cleaner and replication to track events
URL: https://github.com/apache/hive/pull/742#discussion_r314596061
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -750,6 +766,38 @@ public Table apply(@Nullable Table table) {
 .verifyResults(Arrays.asList("1", "2"));
   }
 
+  @Test
+  public void testIncrementalDumpEmptyDumpDirectory() throws Throwable {
 
 Review comment:
  Add another test case where we dynamically bootstrap a table (table-level 
replication) with an incremental dump but no events are dumped. It takes a special 
route in the executeIncrementalLoad() method (line 503), so I guess that, as per the 
current change, it won't update the database's last repl ID.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296105)
Time Spent: 20m  (was: 10m)

> Return the last event id dumped as repl status to avoid notification event 
> missing error.
> -
>
> Key: HIVE-22068
> URL: https://issues.apache.org/jira/browse/HIVE-22068
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22068.01.patch, HIVE-22068.02.patch, 
> HIVE-22068.03.patch, HIVE-22068.04.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In repl load, update the status of the target database to the last event dumped, 
> so that repl status returns that and the next incremental dump can specify it as 
> the event from which to start. Without that, repl status might return an old event, 
> which might cause older events to be dumped again and/or a notification event 
> missing error if the older events have been cleaned by the cleaner.
> While at it
>  * Add more logging to DB notification listener cleaner thread
>  ** The time when it considered cleaning, the interval and time before which 
> events were cleared, the min and max id at that time
>  ** how many events were cleared
>  ** min and max id after the cleaning.
>  * In REPL::START document the starting event, end event if specified and the 
> maximum number of events, if specified.
>  *



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22068) Return the last event id dumped as repl status to avoid notification event missing error.

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22068?focusedWorklogId=296106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296106
 ]

ASF GitHub Bot logged work on HIVE-22068:
-

Author: ASF GitHub Bot
Created on: 16/Aug/19 06:32
Start Date: 16/Aug/19 06:32
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #742: HIVE-22068 : 
Add more logging to notification cleaner and replication to track events
URL: https://github.com/apache/hive/pull/742#discussion_r314592846
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -750,6 +766,38 @@ public Table apply(@Nullable Table table) {
 .verifyResults(Arrays.asList("1", "2"));
   }
 
+  @Test
+  public void testIncrementalDumpEmptyDumpDirectory() throws Throwable {
+WarehouseInstance.Tuple tuple = primary.run("use " + primaryDbName)
+.run("create external table t1 (id int)")
+.run("insert into table t1 values (1)")
+.run("insert into table t1 values (2)")
+.dump(primaryDbName, null);
+
+replica.load(replicatedDbName, tuple.dumpLocation)
+.status(replicatedDbName)
+.verifyResult(tuple.lastReplicationId);
+
+WarehouseInstance.Tuple incTuple = primary.dump(primaryDbName, 
tuple.lastReplicationId);
+
+replica.load(replicatedDbName, incTuple.dumpLocation)
+.status(replicatedDbName)
+.verifyResult(incTuple.lastReplicationId);
+
+// create events for some other database and then dump the primaryDbName 
to dump an empty directory.
+primary.run("create database " + extraPrimaryDb + " WITH DBPROPERTIES ( '" 
+
+SOURCE_OF_REPLICATION + "' = '1,2,3')");
+WarehouseInstance.Tuple inc2Tuple = primary.run("use " + extraPrimaryDb)
+.run("create table tbl (fld int)")
+.run("use " + primaryDbName)
+.dump(primaryDbName, incTuple.lastReplicationId);
+
 
 Review comment:
  Shall we add a validation that the last_repl_id returned by REPL DUMP is the same 
as the latest event ID in the notification event table, even though there are no 
events on the dumped db?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296106)
Time Spent: 0.5h  (was: 20m)

> Return the last event id dumped as repl status to avoid notification event 
> missing error.
> -
>
> Key: HIVE-22068
> URL: https://issues.apache.org/jira/browse/HIVE-22068
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22068.01.patch, HIVE-22068.02.patch, 
> HIVE-22068.03.patch, HIVE-22068.04.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In repl load, update the status of the target database to the last event dumped, 
> so that repl status returns that and the next incremental dump can specify it as 
> the event from which to start. Without that, repl status might return an old event, 
> which might cause older events to be dumped again and/or a notification event 
> missing error if the older events have been cleaned by the cleaner.
> While at it
>  * Add more logging to DB notification listener cleaner thread
>  ** The time when it considered cleaning, the interval and time before which 
> events were cleared, the min and max id at that time
>  ** how many events were cleared
>  ** min and max id after the cleaning.
>  * In REPL::START document the starting event, end event if specified and the 
> maximum number of events, if specified.
>  *



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22068) Return the last event id dumped as repl status to avoid notification event missing error.

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22068?focusedWorklogId=296103&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296103
 ]

ASF GitHub Bot logged work on HIVE-22068:
-

Author: ASF GitHub Bot
Created on: 16/Aug/19 06:32
Start Date: 16/Aug/19 06:32
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #742: HIVE-22068 : 
Add more logging to notification cleaner and replication to track events
URL: https://github.com/apache/hive/pull/742#discussion_r314596395
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -522,6 +525,25 @@ private int executeIncrementalLoad(DriverContext 
driverContext) {
   // bootstrap of tables if exist.
   if (builder.hasMoreWork() || work.getPathsToCopyIterator().hasNext() || 
work.hasBootstrapLoadTasks()) {
 DAGTraversal.traverse(childTasks, new 
AddDependencyToLeaves(TaskFactory.get(work, conf)));
+  } else if (work.dbNameToLoadIn != null) {
+// Nothing to be done for repl load now. Add a task to update the 
last.repl.id of the
+// target database to the event id of the last event considered by the 
dump. Next
+// incremental cycle if starts from this id, the events considered for 
this dump, won't
+// be considered again. If we are replicating to multiple databases at 
a time, it's not
+// possible to know which all databases we are replicating into and 
hence we can not
+// update repl id in all those databases.
+String lastEventid = builder.eventTo().toString();
 
 Review comment:
   Can we try to re-use ReplLoadTask.updateDatabaseLastReplID method instead of 
duplicating the code here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296103)
Time Spent: 20m  (was: 10m)

> Return the last event id dumped as repl status to avoid notification event 
> missing error.
> -
>
> Key: HIVE-22068
> URL: https://issues.apache.org/jira/browse/HIVE-22068
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22068.01.patch, HIVE-22068.02.patch, 
> HIVE-22068.03.patch, HIVE-22068.04.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In repl load, update the status of the target database to the last event dumped, 
> so that repl status returns that and the next incremental dump can specify it as 
> the event from which to start. Without that, repl status might return an old event, 
> which might cause older events to be dumped again and/or a notification event 
> missing error if the older events have been cleaned by the cleaner.
> While at it
>  * Add more logging to DB notification listener cleaner thread
>  ** The time when it considered cleaning, the interval and time before which 
> events were cleared, the min and max id at that time
>  ** how many events were cleared
>  ** min and max id after the cleaning.
>  * In REPL::START document the starting event, end event if specified and the 
> maximum number of events, if specified.
>  *



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908767#comment-16908767
 ] 

Hive QA commented on HIVE-22115:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977738/HIVE-22115.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16740 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18355/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18355/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18355/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977738 - PreCommit-HIVE-Build

> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch, HIVE-22115.patch, HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the following 
> HiveServer2 property is set to false by the user
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}
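
A minimal sketch of the guard this implies, assuming a hypothetical setup method; only the HiveConf.getBoolVar call and the ConfVars constant quoted above come from Hive's public API, the rest is illustrative rather than the actual patch:

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public final class QueryRoutingAppenderGuard {

  private QueryRoutingAppenderGuard() {
  }

  // Hypothetical setup hook: skip the query-router logger entirely when
  // operation logging is disabled in the server configuration.
  public static void maybeRegisterRoutingAppender(HiveConf conf) {
    if (!conf.getBoolVar(HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED)) {
      return; // nothing to create or register
    }
    // ... existing creation/registration of the query routing appender ...
  }
}
{code}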



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22121) Turning on hive.tez.bucket.pruning produce wrong results

2019-08-15 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908732#comment-16908732
 ] 

Gopal V commented on HIVE-22121:



{code}
'Map Operator Tree:'
'TableScan'
'  alias: test_table'
'  filterExpr: (col_1 <> 2) (type: boolean)'
'  buckets included: [] of 4'
{code}

In the explain extended output, the generated SARG is

{code}
leaf-0 = (EQUALS col_1 2), expr = (not leaf-0)
{code}

So the SARG leaf only has

{code}
[(EQUALS col_1 2)]
{code}

and the expression tree does not bail out for the NOT.
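
A conceptual sketch of why that matters (plain JDK, not Hive's actual bucket-pruning code; the hash function is a stand-in): if pruning is driven only by the equality leaf and the NOT wrapper is ignored, the included bucket set is the one for the literal instead of its complement, so whole buckets of matching rows are skipped.

{code:java}
import java.util.BitSet;

public class BucketPruningSketch {

  // Simplified stand-in for Hive's bucketing hash; the real function differs.
  static int bucketFor(int value, int numBuckets) {
    return Math.floorMod(Integer.hashCode(value), numBuckets);
  }

  public static void main(String[] args) {
    int numBuckets = 4;

    // Unsafe: treating the SARG leaf (EQUALS col_1 2) as the whole predicate
    // keeps only the bucket that the literal 2 maps to.
    BitSet fromLeafOnly = new BitSet(numBuckets);
    fromLeafOnly.set(bucketFor(2, numBuckets));

    // Handling "col_1 <> 2" correctly means taking the complement of that
    // bucket set, or bailing out of pruning and reading all buckets.
    BitSet complement = new BitSet(numBuckets);
    complement.set(0, numBuckets);
    complement.andNot(fromLeafOnly);

    System.out.println("leaf-only pruning keeps: " + fromLeafOnly);
    System.out.println("correct handling keeps:  " + complement);
  }
}
{code}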

> Turning on hive.tez.bucket.pruning produce wrong results
> 
>
> Key: HIVE-22121
> URL: https://issues.apache.org/jira/browse/HIVE-22121
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> *Reproducer*
> {code:sql}
> set hive.query.results.cache.enabled=false;
> set hive.optimize.ppd.storage=true;
> set hive.optimize.index.filter=true;
> set hive.tez.bucket.pruning=true; 
> CREATE TABLE `test_table`( 
>`col_1` int, 
>`col_2` string,  
>`col_3` string)  
>  CLUSTERED BY ( 
>col_1)   
>  INTO 4 BUCKETS; 
> insert into test_table values(1, 'one', 'ONE'), (2, 'two', 'TWO'), 
> (3,'three','THREE'),(4,'four','FOUR');
> select * from test_table;
> explain select col_1, col_2, col_3 from test_table where col_1 <> 2 order by 
> col_2;
> select col_1, col_2, col_3 from test_table where col_1 <> 2 order by col_2;
> {code}
> The above SQL query produces zero rows.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908719#comment-16908719
 ] 

Hive QA commented on HIVE-22115:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
7s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18355/dev-support/hive-personality.sh
 |
| git revision | master / bd42f23 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18355/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch, HIVE-22115.patch, HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the following 
> HiveServer2 property is set to false by the user
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22121) Turning on hive.tez.bucket.pruning produce wrong results

2019-08-15 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22121:
---
Affects Version/s: 4.0.0
   3.1.0

> Turning on hive.tez.bucket.pruning produce wrong results
> 
>
> Key: HIVE-22121
> URL: https://issues.apache.org/jira/browse/HIVE-22121
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> *Reproducer*
> {code:sql}
> set hive.query.results.cache.enabled=false;
> set hive.optimize.ppd.storage=true;
> set hive.optimize.index.filter=true;
> set hive.tez.bucket.pruning=true; 
> CREATE TABLE `test_table`( 
>`col_1` int, 
>`col_2` string,  
>`col_3` string)  
>  CLUSTERED BY ( 
>col_1)   
>  INTO 4 BUCKETS; 
> insert into test_table values(1, 'one', 'ONE'), (2, 'two', 'TWO'), 
> (3,'three','THREE'),(4,'four','FOUR');
> select * from test_table;
> explain select col_1, col_2, col_3 from test_table where col_1 <> 2 order by 
> col_2;
> select col_1, col_2, col_3 from test_table where col_1 <> 2 order by col_2;
> {code}
> The above SQL query produces zero rows.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22121) Turning on hive.tez.bucket.pruning produce wrong results

2019-08-15 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-22121:
--


> Turning on hive.tez.bucket.pruning produce wrong results
> 
>
> Key: HIVE-22121
> URL: https://issues.apache.org/jira/browse/HIVE-22121
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> *Reproducer*
> {code:sql}
> set hive.query.results.cache.enabled=false;
> set hive.optimize.ppd.storage=true;
> set hive.optimize.index.filter=true;
> set hive.tez.bucket.pruning=true; 
> CREATE TABLE `test_table`( 
>`col_1` int, 
>`col_2` string,  
>`col_3` string)  
>  CLUSTERED BY ( 
>col_1)   
>  INTO 4 BUCKETS; 
> insert into test_table values(1, 'one', 'ONE'), (2, 'two', 'TWO'), 
> (3,'three','THREE'),(4,'four','FOUR');
> select * from test_table;
> explain select col_1, col_2, col_3 from test_table where col_1 <> 2 order by 
> col_2;
> select col_1, col_2, col_3 from test_table where col_1 <> 2 order by col_2;
> {code}
> The above SQL query produces zero rows.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22120) Fix wrong results/ArrayOutOfBound exception in left outer map joins on specific boundary conditions

2019-08-15 Thread Ramesh Kumar Thangarajan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22120:

Attachment: HIVE-22120.1.patch
Status: Patch Available  (was: In Progress)

> Fix wrong results/ArrayOutOfBound exception in left outer map joins on 
> specific boundary conditions
> ---
>
> Key: HIVE-22120
> URL: https://issues.apache.org/jira/browse/HIVE-22120
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, llap, Vectorization
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22120.1.patch
>
>
> Vectorized version of left outer map join produces wrong results or 
> encounters ArrayOutOfBound exception.
> The boundary conditions are:
>  * The complete batch of the big table should have the join key repeated for 
> all the join columns.
>  * The complete batch of the big table should not have a matched key value in 
> the small table.
>  * The repeated value should not be a null value
>  * Some rows should be filtered out as part of the on clause filter.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22118) Log the table name while skipping the compaction because it's sorted table/partitions

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908663#comment-16908663
 ] 

Hive QA commented on HIVE-22118:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977733/HIVE-22118.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16740 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18354/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18354/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18354/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977733 - PreCommit-HIVE-Build

> Log the table name while skipping the compaction because it's sorted 
> table/partitions
> -
>
> Key: HIVE-22118
> URL: https://issues.apache.org/jira/browse/HIVE-22118
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
> Attachments: HIVE-22118.patch
>
>
> From a debugging perspective, it's good to log the full table name when 
> skipping the table for compaction; otherwise it's tedious to figure out why 
> compaction is not happening for the target table.
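
A minimal sketch of the kind of log line the improvement asks for; the class, method, and message wording are illustrative, not the actual patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CompactionSkipLogSketch {

  private static final Logger LOG = LoggerFactory.getLogger(CompactionSkipLogSketch.class);

  // Hypothetical check: when a sorted table/partition is skipped, include the
  // fully qualified name so the skip is traceable in the metastore logs.
  static boolean shouldCompact(String dbName, String tableName, boolean isSorted) {
    if (isSorted) {
      LOG.info("Skipping compaction of {}.{}: sorted tables/partitions are not compacted",
          dbName, tableName);
      return false;
    }
    return true;
  }

  public static void main(String[] args) {
    shouldCompact("default", "sorted_tbl", true);
  }
}
{code}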



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work started] (HIVE-22120) Fix wrong results/ArrayOutOfBound exception in left outer map joins on specific boundary conditions

2019-08-15 Thread Ramesh Kumar Thangarajan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22120 started by Ramesh Kumar Thangarajan.
---
> Fix wrong results/ArrayOutOfBound exception in left outer map joins on 
> specific boundary conditions
> ---
>
> Key: HIVE-22120
> URL: https://issues.apache.org/jira/browse/HIVE-22120
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, llap, Vectorization
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>
> Vectorized version of left outer map join produces wrong results or 
> encounters ArrayOutOfBound exception.
> The boundary conditions are:
>  * The complete batch of the big table should have the join key repeated for 
> all the join columns.
>  * The complete batch of the big table should not have a matched key value in 
> the small table.
>  * The repeated value should not be a null value
>  * Some rows should be filtered out as part of the on clause filter.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-15 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22107:
---
Attachment: HIVE-22107.3.patch

> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch, 
> HIVE-22107.3.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-15 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22107:
---
Status: Open  (was: Patch Available)

> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch, 
> HIVE-22107.3.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-15 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22107:
---
Status: Patch Available  (was: Open)

> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch, 
> HIVE-22107.3.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22120) Fix wrong results/ArrayOutOfBound exception in left outer map joins on specific boundary conditions

2019-08-15 Thread Ramesh Kumar Thangarajan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan reassigned HIVE-22120:
---


> Fix wrong results/ArrayOutOfBound exception in left outer map joins on 
> specific boundary conditions
> ---
>
> Key: HIVE-22120
> URL: https://issues.apache.org/jira/browse/HIVE-22120
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, llap, Vectorization
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>
> Vectorized version of left outer map join produces wrong results or 
> encounters ArrayOutOfBound exception.
> The boundary conditions are:
>  * The complete batch of the big table should have the join key repeated for 
> all the join columns.
>  * The complete batch of the big table should not have a matched key value in 
> the small table.
>  * The repeated value should not be a null value
>  * Some rows should be filtered out as part of the on clause filter.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22118) Log the table name while skipping the compaction because it's sorted table/partitions

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908625#comment-16908625
 ] 

Hive QA commented on HIVE-22118:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18354/dev-support/hive-personality.sh
 |
| git revision | master / bd42f23 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18354/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Log the table name while skipping the compaction because it's sorted 
> table/partitions
> -
>
> Key: HIVE-22118
> URL: https://issues.apache.org/jira/browse/HIVE-22118
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
> Attachments: HIVE-22118.patch
>
>
> From a debugging perspective, it's good to log the full table name when 
> skipping the table for compaction; otherwise it's tedious to figure out why 
> compaction is not happening for the target table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-15 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-22115:
--
Attachment: HIVE-22115.patch

> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch, HIVE-22115.patch, HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the following 
> HiveServer2 property is set to false by the user
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-15 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-22113:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks Oli 
[https://git-wip-us.apache.org/repos/asf?p=hive.git;a=commit;h=bd42f23d49d9948f690a14675d6e77830adddfef]

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.2.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {noformat}
> 2019-08-08T23:34:39,748 ERROR [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now...
> java.lang.RuntimeException: attempt_1563528877295_18872_3728_01_03_0 was not registered and couldn't be removed
>   at org.apache.hadoop.hive.llap.daemon.impl.AMReporter$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.AMReporter.unregisterTask(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.killTask(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
>   at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {noformat}
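
A minimal, generic sketch of the defensive pattern the fix implies: catch the RuntimeException from the unregister call so it never reaches the thread's uncaught-exception handler (which in LLAP triggers the shutdown). Class and method names are illustrative, not Hive's actual code:

{code:java}
public class GuardedWorker implements Runnable {

  // Stand-in for AMReporter.unregisterTask, which may throw if the attempt
  // was never registered.
  private void unregisterTask(String attemptId) {
    throw new RuntimeException(attemptId + " was not registered and couldn't be removed");
  }

  @Override
  public void run() {
    try {
      unregisterTask("attempt_0001");
    } catch (RuntimeException e) {
      // Log and continue; do not let the exception escape the worker thread,
      // where an uncaught exception would take the whole daemon down.
      System.err.println("Ignoring failure to unregister task attempt: " + e.getMessage());
    }
  }

  public static void main(String[] args) {
    new GuardedWorker().run();
  }
}
{code}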



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908606#comment-16908606
 ] 

Hive QA commented on HIVE-22081:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977728/HIVE-22081.04.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16740 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18353/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18353/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18353/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977728 - PreCommit-HIVE-Build

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-21917.03.patch, HIVE-22081.04.patch, HIVE-22081.patch
>
>
> If automatic compaction is turned on, the Initiator thread checks for potential 
> tables/partitions that are eligible for compaction and runs several checks in a 
> for loop before requesting compaction for the eligible ones. Although the 
> Initiator thread is configured to run at a 5-minute interval by default, with 
> many objects it keeps running, since these checks are IO intensive and hog the CPU.
> In the proposed changes, I am planning to:
> 1. pass fewer objects to the for loop by filtering them out based on the 
> conditions that we currently check within the loop;
> 2. use an async call (via a Future) to determine the compaction type (this is 
> where we do the FileSystem calls).
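
A generic JDK sketch of the proposed pattern (not the actual Initiator patch; class, method, and candidate names are illustrative): filter candidates first, then run the IO-heavy compaction-type check for each remaining candidate on a thread pool via Futures instead of serially inside the loop.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InitiatorAsyncSketch {

  enum CompactionType { MAJOR, MINOR, NONE }

  // Stand-in for the IO-heavy FileSystem inspection that decides the type.
  static CompactionType determineCompactionType(String partition) {
    return partition.hashCode() % 2 == 0 ? CompactionType.MINOR : CompactionType.NONE;
  }

  public static void main(String[] args) throws Exception {
    // Step 1 of the proposal: cheap filtering has already reduced this list.
    List<String> candidates = Arrays.asList("db.t1/p=1", "db.t1/p=2", "db.t2/p=1");

    // Step 2: run the expensive per-candidate check concurrently via Futures.
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<CompactionType>> results = new ArrayList<>();
    for (String candidate : candidates) {
      results.add(pool.submit(() -> determineCompactionType(candidate)));
    }
    for (int i = 0; i < candidates.size(); i++) {
      CompactionType type = results.get(i).get();
      if (type != CompactionType.NONE) {
        System.out.println("Requesting " + type + " compaction for " + candidates.get(i));
      }
    }
    pool.shutdown();
  }
}
{code}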



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908588#comment-16908588
 ] 

Hive QA commented on HIVE-22081:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
11s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} ql: The patch generated 0 new + 24 unchanged - 1 
fixed = 24 total (was 25) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18353/dev-support/hive-personality.sh
 |
| git revision | master / 28f2340 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18353/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-21917.03.patch, HIVE-22081.04.patch, HIVE-22081.patch
>
>
> If automatic compaction is turned on, the Initiator thread checks for potential 
> tables/partitions that are eligible for compaction and runs several checks in a 
> for loop before requesting compaction for the eligible ones. Although the 
> Initiator thread is configured to run at a 5-minute interval by default, with 
> many objects it keeps running, since these checks are IO intensive and hog the CPU.
> In the proposed changes, I am planning to:
> 1. pass fewer objects to the for loop by filtering them out based on the 
> conditions that we currently check within the loop;
> 2. use an async call (via a Future) to determine the compaction type (this is 
> where we do the FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22119) Ranger Hive authorizer to be enhanced to support Hive policies based on resource owners

2019-08-15 Thread Ramesh Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Mani reassigned HIVE-22119:
--

Assignee: Ramesh Mani

> Ranger Hive authorizer to be enhanced to  support Hive policies based on 
> resource owners
> 
>
> Key: HIVE-22119
> URL: https://issues.apache.org/jira/browse/HIVE-22119
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Mani
>Assignee: Ramesh Mani
>Priority: Major
>
> With changes in HIVE-21833, owner information is now made available to 
> authorizer implementations. Ranger Hive authorizer should be updated to 
> enable Hive policies based on resource owners - like
> - allow owner of a database to create tables in the database
> - allow owner of a table to perform all operations on the table
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-15 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908572#comment-16908572
 ] 

Gopal V commented on HIVE-22115:


LGTM - +1

> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch, HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the following 
> HiveServer2 property is set to false by the user
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (HIVE-22119) Ranger Hive authorizer to be enhanced to support Hive policies based on resource owners

2019-08-15 Thread Ramesh Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Mani resolved HIVE-22119.

Resolution: Invalid

Created in wrong project

> Ranger Hive authorizer to be enhanced to  support Hive policies based on 
> resource owners
> 
>
> Key: HIVE-22119
> URL: https://issues.apache.org/jira/browse/HIVE-22119
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Mani
>Priority: Major
>
> With changes in HIVE-21833, owner information is now made available to 
> authorizer implementations. Ranger Hive authorizer should be updated to 
> enable Hive policies based on resource owners - like
> - allow owner of a database to create tables in the database
> - allow owner of a table to perform all operations on the table
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-20442) Hive stale lock when the hiveserver2 background thread died with NPE

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908568#comment-16908568
 ] 

Hive QA commented on HIVE-20442:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977727/HIVE-20442.5-branch-1.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 155 failed/errored test(s), 7897 tests 
executed
*Failed tests:*
{noformat}
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=339)
TestAuthorizationPreEventListener - did not produce a TEST-*.xml file (likely 
timed out) (batchId=370)
TestAuthzApiEmbedAuthorizerInEmbed - did not produce a TEST-*.xml file (likely 
timed out) (batchId=349)
TestAuthzApiEmbedAuthorizerInRemote - did not produce a TEST-*.xml file (likely 
timed out) (batchId=355)
TestBeeLineWithArgs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=377)
TestCLIAuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=393)
TestClientSideAuthorizationProvider - did not produce a TEST-*.xml file (likely 
timed out) (batchId=369)
TestCompactor - did not produce a TEST-*.xml file (likely timed out) 
(batchId=359)
TestCreateUdfEntities - did not produce a TEST-*.xml file (likely timed out) 
(batchId=358)
TestCustomAuthentication - did not produce a TEST-*.xml file (likely timed out) 
(batchId=378)
TestDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=324)
TestDDLWithRemoteMetastoreSecondNamenode - did not produce a TEST-*.xml file 
(likely timed out) (batchId=357)
TestDynamicSerDe - did not produce a TEST-*.xml file (likely timed out) 
(batchId=327)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=336)
TestEmbeddedThriftBinaryCLIService - did not produce a TEST-*.xml file (likely 
timed out) (batchId=381)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=331)
TestFolderPermissions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=364)
TestHS2AuthzContext - did not produce a TEST-*.xml file (likely timed out) 
(batchId=396)
TestHS2AuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=397)
TestHS2ImpersonationWithRemoteMS - did not produce a TEST-*.xml file (likely 
timed out) (batchId=385)
TestHiveAuthorizerCheckInvocation - did not produce a TEST-*.xml file (likely 
timed out) (batchId=373)
TestHiveAuthorizerShowFilters - did not produce a TEST-*.xml file (likely timed 
out) (batchId=372)
TestHiveHistory - did not produce a TEST-*.xml file (likely timed out) 
(batchId=375)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=351)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=341)
TestHiveMetaTool - did not produce a TEST-*.xml file (likely timed out) 
(batchId=354)
TestHiveServer2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=399)
TestHiveServer2SessionTimeout - did not produce a TEST-*.xml file (likely timed 
out) (batchId=400)
TestHiveSessionImpl - did not produce a TEST-*.xml file (likely timed out) 
(batchId=382)
TestHs2Hooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=356)
TestHs2HooksWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=428)
TestJdbcDriver2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=387)
TestJdbcMetadataApiAuth - did not produce a TEST-*.xml file (likely timed out) 
(batchId=398)
TestJdbcWithLocalClusterSpark - did not produce a TEST-*.xml file (likely timed 
out) (batchId=392)
TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=389)
TestJdbcWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=425)
TestJdbcWithMiniKdcCookie - did not produce a TEST-*.xml file (likely timed 
out) (batchId=424)
TestJdbcWithMiniKdcSQLAuthBinary - did not produce a TEST-*.xml file (likely 
timed out) (batchId=422)
TestJdbcWithMiniKdcSQLAuthHttp - did not produce a TEST-*.xml file (likely 
timed out) (batchId=427)
TestJdbcWithMiniMr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=388)
TestJdbcWithSQLAuthUDFBlacklist - did not produce a TEST-*.xml file (likely 
timed out) (batchId=394)
TestJdbcWithSQLAuthorization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=395)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=362)
TestMTQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=360)
TestMarkPartition - did not produce a TEST-*.xml file (likely timed out) 
(batchId=348)
TestMarkPartitionRemote - did not produce a TEST-*.xml file (likely timed out) 
(batchId=352)
TestMetaStoreAuthorization - did not produce a TEST-*.xml file (likely t

[jira] [Commented] (HIVE-22118) Log the table name while skipping the compaction because it's sorted table/partitions

2019-08-15 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908570#comment-16908570
 ] 

Gopal V commented on HIVE-22118:


+1 tests pending

> Log the table name while skipping the compaction because it's sorted 
> table/partitions
> -
>
> Key: HIVE-22118
> URL: https://issues.apache.org/jira/browse/HIVE-22118
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
> Attachments: HIVE-22118.patch
>
>
> From a debugging perspective it's good if we log the full table name while
> skipping the table for compaction; otherwise it's tedious to know why
> compaction is not happening for the target table.
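
A minimal sketch of the change under review, assuming the skip decision happens in a worker method that already has the database and table names at hand; the method and message below are illustrative, not the exact Worker.java code.

{code:java}
// Illustration of the requested change: include the fully qualified table name
// in the log line that explains why compaction was skipped.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CompactionSkipLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(CompactionSkipLoggingSketch.class);

  void maybeCompact(String dbName, String tableName, boolean isSorted) {
    if (isSorted) {
      // With the table name in the message, it is obvious which table was skipped and why.
      LOG.info("Skipping compaction of {}.{}: compaction of sorted tables/partitions is not supported",
          dbName, tableName);
      return;
    }
    // ... proceed with compaction checks ...
  }
}
{code}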



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22118) Log the table name while skipping the compaction because it's sorted table/partitions

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22118:
--
Summary: Log the table name while skipping the compaction because it's 
sorted table/partitions  (was: compaction worker thread won't log the table 
name while skipping the compaction because it's sorted table/partitions)

> Log the table name while skipping the compaction because it's sorted 
> table/partitions
> -
>
> Key: HIVE-22118
> URL: https://issues.apache.org/jira/browse/HIVE-22118
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
> Attachments: HIVE-22118.patch
>
>
> From a debugging perspective it's good if we log the full table name while
> skipping the table for compaction; otherwise it's tedious to know why
> compaction is not happening for the target table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22118) compaction worker thread won't log the table name while skipping the compaction because it's sorted table/partitions

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh reassigned HIVE-22118:
-


> compaction worker thread won't log the table name while skipping the 
> compaction because it's sorted table/partitions
> 
>
> Key: HIVE-22118
> URL: https://issues.apache.org/jira/browse/HIVE-22118
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
> Attachments: HIVE-22118.patch
>
>
> From a debugging perspective it's good if we log the full table name while
> skipping the table for compaction; otherwise it's tedious to know why
> compaction is not happening for the target table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22118) compaction worker thread won't log the table name while skipping the compaction because it's sorted table/partitions

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22118:
--
Attachment: HIVE-22118.patch
Status: Patch Available  (was: Open)

> compaction worker thread won't log the table name while skipping the 
> compaction because it's sorted table/partitions
> 
>
> Key: HIVE-22118
> URL: https://issues.apache.org/jira/browse/HIVE-22118
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
> Attachments: HIVE-22118.patch
>
>
> From a debugging perspective it's good if we log the full table name while
> skipping the table for compaction; otherwise it's tedious to know why
> compaction is not happening for the target table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908484#comment-16908484
 ] 

Hive QA commented on HIVE-22081:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977725/HIVE-21917.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16740 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18351/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18351/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18351/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977725 - PreCommit-HIVE-Build

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-21917.03.patch, HIVE-22081.04.patch, HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential
> tables/partitions which are eligible for compaction and runs some checks in a
> for loop before requesting compaction for the eligible ones. Though the Initiator
> thread is configured to run at a 5-minute interval by default, with many objects
> it keeps running, because these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects based on the
> condition which we are checking within the loop.
> 2. Make an async call using a Future to determine the compaction type (this is
> where we do the FileSystem calls).
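
A rough sketch of items 1 and 2 above, assuming hypothetical CompactionCandidate / checkForCompactionType / requestCompaction names; the attached patch is the authoritative version, this is just the shape of the idea.

{code:java}
// Rough sketch, not the attached patch; the real logic lives in the metastore Initiator.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class InitiatorSketch {
  private final ExecutorService pool = Executors.newFixedThreadPool(8);

  void runOneCycle(List<CompactionCandidate> prefiltered) throws Exception {
    // Item 1: cheap metadata filtering already happened, so this loop only sees
    // objects that can actually be compacted.
    List<Future<CompactionCandidate>> futures = new ArrayList<>();
    for (CompactionCandidate c : prefiltered) {
      // Item 2: the IO-heavy part (FileSystem listing that decides the compaction type)
      // runs on a pool instead of serially on the Initiator thread.
      futures.add(pool.submit(() -> {
        c.type = checkForCompactionType(c);
        return c;
      }));
    }
    for (Future<CompactionCandidate> f : futures) {
      CompactionCandidate c = f.get();
      if (c.type != null) {
        requestCompaction(c);
      }
    }
  }

  // Placeholders standing in for the real metastore types and calls.
  static class CompactionCandidate { String db; String table; String partition; String type; }
  String checkForCompactionType(CompactionCandidate c) { return null; }
  void requestCompaction(CompactionCandidate c) { }
}
{code}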



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22081:
--
Attachment: HIVE-22081.04.patch
Status: Patch Available  (was: Open)

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-21917.03.patch, HIVE-22081.04.patch, HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential
> tables/partitions which are eligible for compaction and runs some checks in a
> for loop before requesting compaction for the eligible ones. Though the Initiator
> thread is configured to run at a 5-minute interval by default, with many objects
> it keeps running, because these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects based on the
> condition which we are checking within the loop.
> 2. Make an async call using a Future to determine the compaction type (this is
> where we do the FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22081:
--
Status: Open  (was: Patch Available)

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-21917.03.patch, HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential
> tables/partitions which are eligible for compaction and runs some checks in a
> for loop before requesting compaction for the eligible ones. Though the Initiator
> thread is configured to run at a 5-minute interval by default, with many objects
> it keeps running, because these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects based on the
> condition which we are checking within the loop.
> 2. Make an async call using a Future to determine the compaction type (this is
> where we do the FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-20442) Hive stale lock when the hiveserver2 background thread died with NPE

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-20442:
--
Attachment: HIVE-20442.5-branch-1.2.patch
Status: Patch Available  (was: Open)

> Hive stale lock when the hiveserver2 background thread died with NPE
> 
>
> Key: HIVE-20442
> URL: https://issues.apache.org/jira/browse/HIVE-20442
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 2.1.1, 1.2.0
> Environment: Hive-2.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-20442.01.branch-2.patch, 
> HIVE-20442.1-branch-1.2.patch, HIVE-20442.2-branch-1.2.patch, 
> HIVE-20442.3-branch-1.2.patch, HIVE-20442.4-branch-1.2.patch, 
> HIVE-20442.5-branch-1.2.patch
>
>
> This looks like a race condition where the background thread is not able to
> release the lock it acquired.
> 1. the HiveServer2 background thread requested a lock
> {code}
> 2018-08-20T14:13:38,813 INFO  [HiveServer2-Background-Pool: Thread-X]: 
> lockmgr.DbLockManager (DbLockManager.java:lock(100)) - Requesting: 
> queryId=hive_xxx LockRequest(component:[LockComponent(type:SHARED_READ, 
> level:TABLE, dbname:testdb, tablename:test_table, operationType:SELECT)], 
> txnid:0, user:hive, hostname:HOSTNAME, agentInfo:hive_xxx)
> {code}
> 2. acquired the lock and started heartbeating
> {code}
> 2018-08-20T14:36:30,233 INFO  [HiveServer2-Background-Pool: Thread-X]: 
> lockmgr.DbTxnManager (DbTxnManager.java:startHeartbeat(517)) - Started 
> heartbeat with delay/interval = 15/15 MILLISECONDS for 
> query: agentInfo:hive_xxx
> {code}
> 3. during the time between events #1 and #2, the client disconnected and
> deleteContext cleaned up the session dir
> {code}
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-XXX]: 
> thrift.ThriftCLIService (ThriftBinaryCLIService.java:deleteContext(136)) - 
> Session disconnected without closing properly.
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-]: 
> thrift.ThriftCLIService (ThriftBinaryCLIService.java:deleteContext(140)) - 
> Closing the session: SessionHandle [3be07faf-5544-4178-8b50-8173002b171a]
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-]: 
> service.CompositeService (SessionManager.java:closeSession(363)) - Session 
> closed, SessionHandle [xxx], current sessions:2
> {code}
> 4. the background thread died with an NPE while trying to get the queryId
> {code}
> java.lang.NullPointerException: null
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1568) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at java.security.AccessController.doPrivileged(Native Method) 
> [?:1.8.0_77]
> at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_77]
> {code}
> The background thread did not get a chance to release the lock, and the heartbeater
> thread continues to heartbeat indefinitely.
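
Whatever the attached branch-1.2 patch does exactly, the general shape of a fix for a stale lock like this is to tear down the transaction state in a finally block, so the heartbeater cannot outlive a failed query. A hedged sketch with placeholder method names (driverRun and stopHeartbeatAndReleaseLocks stand in for the real Driver/DbTxnManager calls):

{code:java}
// General shape of the fix, not the attached patch.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BackgroundQuerySketch {
  private static final Logger LOG = LoggerFactory.getLogger(BackgroundQuerySketch.class);

  void runQueryInBackground() {
    try {
      driverRun();   // placeholder for the driver call; this is where the NPE surfaced
    } finally {
      // Always tear down the transaction state, so the heartbeater does not keep
      // a stale SHARED_READ lock alive forever.
      try {
        stopHeartbeatAndReleaseLocks();   // placeholder for the txn manager cleanup
      } catch (Exception cleanupError) {
        LOG.warn("Failed to release locks after background query failure", cleanupError);
      }
    }
  }

  void driverRun() { /* ... */ }
  void stopHeartbeatAndReleaseLocks() { /* ... */ }
}
{code}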



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-20442) Hive stale lock when the hiveserver2 background thread died with NPE

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-20442:
--
Status: Open  (was: Patch Available)

> Hive stale lock when the hiveserver2 background thread died with NPE
> 
>
> Key: HIVE-20442
> URL: https://issues.apache.org/jira/browse/HIVE-20442
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 2.1.1, 1.2.0
> Environment: Hive-2.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-20442.01.branch-2.patch, 
> HIVE-20442.1-branch-1.2.patch, HIVE-20442.2-branch-1.2.patch, 
> HIVE-20442.3-branch-1.2.patch, HIVE-20442.4-branch-1.2.patch, 
> HIVE-20442.5-branch-1.2.patch
>
>
> This looks like a race condition where the background thread is not able to
> release the lock it acquired.
> 1. the HiveServer2 background thread requested a lock
> {code}
> 2018-08-20T14:13:38,813 INFO  [HiveServer2-Background-Pool: Thread-X]: 
> lockmgr.DbLockManager (DbLockManager.java:lock(100)) - Requesting: 
> queryId=hive_xxx LockRequest(component:[LockComponent(type:SHARED_READ, 
> level:TABLE, dbname:testdb, tablename:test_table, operationType:SELECT)], 
> txnid:0, user:hive, hostname:HOSTNAME, agentInfo:hive_xxx)
> {code}
> 2. acquired the lock and started heartbeating
> {code}
> 2018-08-20T14:36:30,233 INFO  [HiveServer2-Background-Pool: Thread-X]: 
> lockmgr.DbTxnManager (DbTxnManager.java:startHeartbeat(517)) - Started 
> heartbeat with delay/interval = 15/15 MILLISECONDS for 
> query: agentInfo:hive_xxx
> {code}
> 3. during the time between events #1 and #2, the client disconnected and
> deleteContext cleaned up the session dir
> {code}
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-XXX]: 
> thrift.ThriftCLIService (ThriftBinaryCLIService.java:deleteContext(136)) - 
> Session disconnected without closing properly.
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-]: 
> thrift.ThriftCLIService (ThriftBinaryCLIService.java:deleteContext(140)) - 
> Closing the session: SessionHandle [3be07faf-5544-4178-8b50-8173002b171a]
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-]: 
> service.CompositeService (SessionManager.java:closeSession(363)) - Session 
> closed, SessionHandle [xxx], current sessions:2
> {code}
> 4. the background thread died with an NPE while trying to get the queryId
> {code}
> java.lang.NullPointerException: null
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1568) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at java.security.AccessController.doPrivileged(Native Method) 
> [?:1.8.0_77]
> at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_77]
> {code}
> The background thread did not get a chance to release the lock, and the heartbeater
> thread continues to heartbeat indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908456#comment-16908456
 ] 

Hive QA commented on HIVE-22081:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 1 new + 24 unchanged - 1 fixed 
= 25 total (was 25) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18351/dev-support/hive-personality.sh
 |
| git revision | master / 28f2340 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18351/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18351/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-21917.03.patch, HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential
> tables/partitions which are eligible for compaction and runs some checks in a
> for loop before requesting compaction for the eligible ones. Though the Initiator
> thread is configured to run at a 5-minute interval by default, with many objects
> it keeps running, because these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects based on the
> condition which we are checking within the loop.
> 2. Make an async call using a Future to determine the compaction type (this is
> where we do the FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908441#comment-16908441
 ] 

Hive QA commented on HIVE-22115:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977724/HIVE-22115.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16740 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=22)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18350/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18350/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18350/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977724 - PreCommit-HIVE-Build

> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch, HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the user sets
> the following HiveServer2 property to false:
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22105) Update ORC to 1.5.6.

2019-08-15 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908426#comment-16908426
 ] 

Alan Gates commented on HIVE-22105:
---

Regenerating the results doesn't help.  The test still fails after 
regeneration, which I take to mean it produces inconsistent results. 

> Update ORC to 1.5.6.
> 
>
> Key: HIVE-22105
> URL: https://issues.apache.org/jira/browse/HIVE-22105
> Project: Hive
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ORC has had some important fixes in the 1.5 branch and they should be picked 
> up by Hive.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908419#comment-16908419
 ] 

Hive QA commented on HIVE-22115:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
11s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18350/dev-support/hive-personality.sh
 |
| git revision | master / 28f2340 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18350/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch, HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the user sets
> the following HiveServer2 property to false:
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22081:
--
Attachment: HIVE-21917.03.patch
Status: Patch Available  (was: Open)

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-21917.03.patch, HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential
> tables/partitions which are eligible for compaction and runs some checks in a
> for loop before requesting compaction for the eligible ones. Though the Initiator
> thread is configured to run at a 5-minute interval by default, with many objects
> it keeps running, because these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects based on the
> condition which we are checking within the loop.
> 2. Make an async call using a Future to determine the compaction type (this is
> where we do the FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908407#comment-16908407
 ] 

Rajkumar Singh commented on HIVE-22081:
---

Thanks [~pvary], I have uploaded the fresh patch with the suggested changes for 
a clean run.

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-21917.03.patch, HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential
> tables/partitions which are eligible for compaction and runs some checks in a
> for loop before requesting compaction for the eligible ones. Though the Initiator
> thread is configured to run at a 5-minute interval by default, with many objects
> it keeps running, because these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects based on the
> condition which we are checking within the loop.
> 2. Make an async call using a Future to determine the compaction type (this is
> where we do the FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22081:
--
Status: Open  (was: Patch Available)

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential
> tables/partitions which are eligible for compaction and runs some checks in a
> for loop before requesting compaction for the eligible ones. Though the Initiator
> thread is configured to run at a 5-minute interval by default, with many objects
> it keeps running, because these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects based on the
> condition which we are checking within the loop.
> 2. Make an async call using a Future to determine the compaction type (this is
> where we do the FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-15 Thread Oliver Draese (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908405#comment-16908405
 ] 

Oliver Draese commented on HIVE-22113:
--

Created follow-up cleanup action as HIVE-22117

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.2.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {code}
> 2019-08-08T23:34:39,748 ERROR [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now...
> java.lang.RuntimeException: attempt_1563528877295_18872_3728_01_03_0 was not registered and couldn't be removed
>     at org.apache.hadoop.hive.llap.daemon.impl.AMReporter$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.AMReporter.unregisterTask(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.killTask(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
>     at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>     at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>     at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
>     at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22117) Clean up RuntimeException code in AMReporter

2019-08-15 Thread Oliver Draese (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Draese reassigned HIVE-22117:



> Clean up RuntimeException code in AMReporter
> 
>
> Key: HIVE-22117
> URL: https://issues.apache.org/jira/browse/HIVE-22117
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>
> The AMReporter of LLAP throws RuntimeExceptions from within addTaskAttempt and
> removeTaskAttempt. These can cause LLAP to come down.
> As an interim fix (see HIVE-22113), the RuntimeException from removeTaskAttempt
> is caught from within TaskRunnerCallable, preventing LLAP termination if a
> killed task is not found in AMReporter.
> Ideally, we would just log this on removeTask (a gone task is a gone task)
> and have a checked exception in addTaskAttempt. If the checked exception is
> caught, we should fail the task attempt (as there is already an attempt with
> this ID running).
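
A sketch of that direction, with a hypothetical checked exception and placeholder bookkeeping; the real AMReporter keeps per-AM node state and is more involved than a flat map.

{code:java}
// Sketch of the proposed cleanup, not the actual AMReporter code.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class AMReporterSketch {
  private static final Logger LOG = LoggerFactory.getLogger(AMReporterSketch.class);
  private final Map<String, Object> knownAttempts = new ConcurrentHashMap<>();

  // "A gone task is a gone task": log and return instead of throwing a RuntimeException.
  void removeTaskAttempt(String attemptId) {
    if (knownAttempts.remove(attemptId) == null) {
      LOG.warn("Task attempt {} was not registered; nothing to remove", attemptId);
    }
  }

  // Duplicate registration becomes a checked exception, so the caller must handle it
  // by failing the new attempt instead of letting the daemon shut down.
  void addTaskAttempt(String attemptId, Object info) throws TaskAttemptAlreadyRegisteredException {
    if (knownAttempts.putIfAbsent(attemptId, info) != null) {
      throw new TaskAttemptAlreadyRegisteredException(attemptId);
    }
  }

  static class TaskAttemptAlreadyRegisteredException extends Exception {
    TaskAttemptAlreadyRegisteredException(String attemptId) {
      super("Task attempt " + attemptId + " is already registered");
    }
  }
}
{code}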



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908398#comment-16908398
 ] 

Hive QA commented on HIVE-22099:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977720/HIVE-22099.3.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16742 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_timestamp_funcs]
 (batchId=168)
org.apache.hadoop.hive.ql.exec.vector.expressions.TestVectorTypeCasts.testCastDateToString
 (batchId=335)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18349/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18349/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18349/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977720 - PreCommit-HIVE-Build

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch, HIVE-22099.3.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are
> handled improperly by date/timestamp UDFs.
> E.g. the DateFormat UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing
> them according to the Gregorian calendar, causing multiple days of difference in
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--');
> +----------------+
> | _c0            |
> +----------------+
> | 30---12--1000  |
> +----------------+
> {code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  
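
The mismatch is easy to reproduce outside Hive: the legacy java.util/SimpleDateFormat stack sits on a hybrid Julian/Gregorian calendar, while the java.time classes Hive moved to around HIVE-20007 use the proleptic Gregorian calendar, so the same instant maps to different calendar days before Oct 15, 1582. A stand-alone demo (not Hive code, and not the attached patch):

{code:java}
// Stand-alone illustration of the calendar mismatch behind the bug.
import java.text.SimpleDateFormat;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.util.Date;
import java.util.TimeZone;

public class CalendarMismatchDemo {
  public static void main(String[] args) throws Exception {
    SimpleDateFormat hybrid = new SimpleDateFormat("yyyy-MM-dd");
    hybrid.setTimeZone(TimeZone.getTimeZone("UTC"));

    Date legacy = hybrid.parse("1001-01-05");            // interpreted on the Julian side
    LocalDate proleptic = Instant.ofEpochMilli(legacy.getTime())
        .atZone(ZoneOffset.UTC).toLocalDate();           // reinterpreted as proleptic Gregorian

    System.out.println(hybrid.format(legacy));   // 1001-01-05
    System.out.println(proleptic);               // a day several days away (~6 days in year 1001)
  }
}
{code}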



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-15 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-22115:
--
Attachment: HIVE-22115.patch

> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch, HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the user sets
> the following HiveServer2 property to false:
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908353#comment-16908353
 ] 

Hive QA commented on HIVE-22099:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 1 new + 189 unchanged - 3 
fixed = 190 total (was 192) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18349/dev-support/hive-personality.sh
 |
| git revision | master / 28f2340 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18349/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18349/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch, HIVE-22099.3.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are
> handled improperly by date/timestamp UDFs.
> E.g. the DateFormat UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing
> them according to the Gregorian calendar, causing multiple days of difference in
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--');
> +----------------+
> | _c0            |
> +----------------+
> | 30---12--1000  |
> +----------------+
> {code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (HIVE-9442) Make sure all data types work for PARQUET

2019-08-15 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V resolved HIVE-9442.
---
Resolution: Done

The Parquet ACID compactor is in HIVE-20934, which replaced the existing concat 
with a transactional update.

So as of Hive-3.x, this is Done, but not Fixed as reported here.

> Make sure all data types work for PARQUET
> -
>
> Key: HIVE-9442
> URL: https://issues.apache.org/jira/browse/HIVE-9442
> Project: Hive
>  Issue Type: Improvement
>Reporter: Dong Chen
>Assignee: Dong Chen
>Priority: Major
>
> In HIVE-9235 (Turn off Parquet Vectorization until all data types work: 
> DECIMAL, DATE, TIMESTAMP, CHAR, and VARCHAR), some data types were found not
> to work for PARQUET.
> Work in this Jira will find the root cause, fix it, and add tests for them.
> This is an umbrella JIRA. Use sub-tasks for adding tests or fixing bugs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908317#comment-16908317
 ] 

Hive QA commented on HIVE-22113:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977715/HIVE-22113.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16740 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18348/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18348/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18348/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977715 - PreCommit-HIVE-Build

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.2.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {code}
> 2019-08-08T23:34:39,748 ERROR [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now...
> java.lang.RuntimeException: attempt_1563528877295_18872_3728_01_03_0 was not registered and couldn't be removed
>     at org.apache.hadoop.hive.llap.daemon.impl.AMReporter$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.AMReporter.unregisterTask(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.killTask(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
>     at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>     at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>     at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
>     at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22105) Update ORC to 1.5.6.

2019-08-15 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908304#comment-16908304
 ] 

Alan Gates commented on HIVE-22105:
---

After the latest version of the patch I still see failures in TestCliDriver and 
TestMiniLlapLocalCliDriver with orc_merge.9.  The only diff is:

{noformat}
181c181
< Found 1 items
---
> Found 2 items
{noformat}

If that looks reasonable I'll just regen the expected results when I apply the 
patch.

> Update ORC to 1.5.6.
> 
>
> Key: HIVE-22105
> URL: https://issues.apache.org/jira/browse/HIVE-22105
> Project: Hive
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ORC has had some important fixes in the 1.5 branch and they should be picked 
> up by Hive.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Status: In Progress  (was: Patch Available)

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch, HIVE-22099.3.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  
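
For context, the multi-day shift described above is the classic mismatch between the 
proleptic Gregorian calendar used by java.time and the hybrid Julian/Gregorian calendar 
used by the legacy java.util/java.text classes, presumably exposed by the java.time-based 
date handling introduced around HIVE-20007. A minimal standalone Java sketch (not Hive 
code) reproduces the 6-day shift for a date in the year 1001:

{code:java}
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.util.Date;
import java.util.TimeZone;

public class CalendarShiftDemo {
  public static void main(String[] args) {
    // Parse with the proleptic Gregorian calendar (java.time)...
    LocalDate d = LocalDate.parse("1001-01-05");
    long millis = d.toEpochDay() * 86_400_000L;

    // ...then format the same instant with the legacy hybrid calendar, which
    // switches to the Julian calendar for dates before Oct 15, 1582.
    SimpleDateFormat fmt = new SimpleDateFormat("dd---MM--yyyy");
    fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
    System.out.println(fmt.format(new Date(millis))); // prints 30---12--1000: a 6-day shift
  }
}
{code}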



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Status: Patch Available  (was: In Progress)

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch, HIVE-22099.3.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Attachment: HIVE-22099.3.patch

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch, HIVE-22099.3.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-15 Thread Thejas M Nair (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908282#comment-16908282
 ] 

Thejas M Nair commented on HIVE-22087:
--

+1


> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch, HIVE-22087.6.patch, HIVE-22087.7.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.
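
For illustration only, a small self-contained sketch of the translation idea described 
above (the capability strings, paths, and method name are assumptions for the example, 
not the actual HMS translator code): clients without an ACID-write capability are pointed 
at the external location, while everyone else keeps the managed one.

{code:java}
import java.util.Set;

public class DatabaseLocationTranslationSketch {
  static final String MANAGED_LOCATION  = "/warehouse/tablespace/managed/hive/db1.db";
  static final String EXTERNAL_LOCATION = "/warehouse/tablespace/external/hive/db1.db";

  /** Returns the DB location a client should see, given its declared processor capabilities. */
  static String translateDbLocation(Set<String> processorCapabilities) {
    // the "ACID write" capability names here are illustrative placeholders
    boolean hasAcidWrite = processorCapabilities.stream()
        .anyMatch(c -> c.contains("ACIDWRITE") || c.contains("INSERTWRITE"));
    return hasAcidWrite ? MANAGED_LOCATION : EXTERNAL_LOCATION;
  }

  public static void main(String[] args) {
    System.out.println(translateDbLocation(Set.of("EXTREAD", "EXTWRITE")));          // external
    System.out.println(translateDbLocation(Set.of("EXTREAD", "HIVEFULLACIDWRITE"))); // managed
  }
}
{code}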



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908242#comment-16908242
 ] 

Hive QA commented on HIVE-22113:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} llap-server in master has 83 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18348/dev-support/hive-personality.sh
 |
| git revision | master / 28f2340 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18348/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.2.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> 2019-08-08T23:34:39,748 ERROR [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now...
> java.lang.RuntimeException: attempt_1563528877295_18872_3728_01_03_0 was not registered and couldn't be removed
>   at org.apache.hadoop.hive.llap.daemon.impl.AMReporter$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.AMReporter.unregisterTask(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.killTask(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handl

[jira] [Updated] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-15 Thread Oliver Draese (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Draese updated HIVE-22113:
-
Attachment: HIVE-22113.2.patch

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.2.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {noformat}
> 2019-08-08T23:34:39,748 ERROR [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now...
> java.lang.RuntimeException: attempt_1563528877295_18872_3728_01_03_0 was not registered and couldn't be removed
>   at org.apache.hadoop.hive.llap.daemon.impl.AMReporter$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.AMReporter.unregisterTask(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.killTask(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
>   at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-15 Thread Oliver Draese (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908202#comment-16908202
 ] 

Oliver Draese commented on HIVE-22113:
--

Re-added the attachment for repeated test execution. (The MiniDriver test failure is 
unrelated to the fix.)

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.2.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {noformat}
> 2019-08-08T23:34:39,748 ERROR [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now...
> java.lang.RuntimeException: attempt_1563528877295_18872_3728_01_03_0 was not registered and couldn't be removed
>   at org.apache.hadoop.hive.llap.daemon.impl.AMReporter$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.AMReporter.unregisterTask(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.killTask(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
>   at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22116) MaterializedView refresh check might return incorrect result when Compaction is run

2019-08-15 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908184#comment-16908184
 ] 

Peter Vary commented on HIVE-22116:
---

CC: [~jcamachorodriguez]

> MaterializedView refresh check might return incorrect result when Compaction 
> is run
> ---
>
> Key: HIVE-22116
> URL: https://issues.apache.org/jira/browse/HIVE-22116
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views
>Reporter: Peter Vary
>Priority: Minor
>
> Reading the code of TxnHandler.getMaterializationInvalidationInfo, I see that 
> we decide on the freshness of the view based on the COMPLETED_TXN_COMPONENTS 
> table. 
> See: 
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L2021]
> On the other hand, if we run a major compaction, we clean up the 
> COMPLETED_TXN_COMPONENTS table, so we lose all previous information. We do this 
> in CompactionTxnHandler.markCleaned.
> See: 
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java#L382]
>  
> When the following sequence of events happens, we do not refresh the 
> materialized view:
> - Create Table
> - Create MV
> - Refresh MV
> - Update Table
> - Start major compaction
> - Wait until compacted, and cleaned
> - Select Table
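
For reference, a rough JDBC sketch of the sequence above (table/view names, the predicate, 
and the fixed sleep are made up for the example; a real test would poll SHOW COMPACTIONS 
instead of sleeping, and it assumes a HiveServer2 at localhost with the Hive JDBC driver 
on the classpath):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MvCompactionRepro {
  public static void main(String[] args) throws Exception {
    try (Connection con = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement st = con.createStatement()) {
      st.execute("CREATE TABLE t (i INT) STORED AS ORC "
          + "TBLPROPERTIES ('transactional'='true')");                          // Create Table
      st.execute("CREATE MATERIALIZED VIEW mv AS SELECT i FROM t WHERE i > 0"); // Create MV
      st.execute("ALTER MATERIALIZED VIEW mv REBUILD");                         // Refresh MV
      st.execute("INSERT INTO t VALUES (1)");                                   // Update Table
      st.execute("ALTER TABLE t COMPACT 'major'");                              // Start major compaction
      Thread.sleep(120_000);                                // Wait until compacted, and cleaned
      st.executeQuery("SELECT i FROM t WHERE i > 0");       // Select: may be served by the stale mv
    }
  }
}
{code}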



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908153#comment-16908153
 ] 

Hive QA commented on HIVE-22099:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977705/HIVE-22099.2.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 16743 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_timestamp_funcs]
 (batchId=33)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_timestamp_funcs]
 (batchId=168)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_timestamp_funcs]
 (batchId=126)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.org.apache.hadoop.hive.ql.TestWarehouseExternalDir
 (batchId=273)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testExternalDefaultPaths 
(batchId=273)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18347/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18347/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18347/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977705 - PreCommit-HIVE-Build

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908118#comment-16908118
 ] 

Hive QA commented on HIVE-22099:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 2 new + 182 unchanged - 1 
fixed = 184 total (was 183) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18347/dev-support/hive-personality.sh
 |
| git revision | master / a501e6e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18347/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18347/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-15 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-22110:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to master. 
Thanks [~ashutosh.bapat] for the contribution!

> Initialize ReplChangeManager before starting actual dump
> 
>
> Key: HIVE-22110
> URL: https://issues.apache.org/jira/browse/HIVE-22110
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22110.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> REPL DUMP calls ReplChangeManager.encodeFileUri() to add the cmroot and checksum 
> to the URL. This requires ReplChangeManager to be initialized. So, initialize 
> ReplChangeManager when taking a dump.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Status: Patch Available  (was: In Progress)

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Attachment: HIVE-22099.2.patch

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Status: In Progress  (was: Patch Available)

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch, 
> HIVE-22099.2.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907992#comment-16907992
 ] 

Hive QA commented on HIVE-22099:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977609/HIVE-22099.1.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 16742 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.exec.vector.expressions.TestVectorDateExpressions.testVectorUDFDayOfMonth
 (batchId=335)
org.apache.hadoop.hive.ql.exec.vector.expressions.TestVectorDateExpressions.testVectorUDFMonth
 (batchId=335)
org.apache.hadoop.hive.ql.exec.vector.expressions.TestVectorDateExpressions.testVectorUDFWeekOfYear
 (batchId=335)
org.apache.hadoop.hive.ql.exec.vector.expressions.TestVectorDateExpressions.testVectorUDFYear
 (batchId=335)
org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFAddMonths.testAddMonthsInt 
(batchId=313)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18346/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18346/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18346/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977609 - PreCommit-HIVE-Build

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907980#comment-16907980
 ] 

Hive QA commented on HIVE-22099:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
5s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 2 new + 176 unchanged - 0 
fixed = 178 total (was 176) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18346/dev-support/hive-personality.sh
 |
| git revision | master / a501e6e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18346/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18346/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-15 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Status: Patch Available  (was: In Progress)

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the date_format UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> them according to the Gregorian calendar, causing multiple days of difference in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--yyyy');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HIVE-22088) Dynamic partition insert problem on external table with "=" in location path spec

2019-08-15 Thread Hui An (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907881#comment-16907881
 ] 

Hui An edited comment on HIVE-22088 at 8/15/19 7:17 AM:


Hi [~aihuaxu] What do you think of this problem?


was (Author: bone an):
[~aihuaxu] What do you think of this problem?

> Dynamic partition insert problem on external table with "=" in location path 
> spec
> -
>
> Key: HIVE-22088
> URL: https://issues.apache.org/jira/browse/HIVE-22088
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.4
> Environment: Hive 2.6.0.10-2 Executing on Tez.
> OS: Ubuntu 16.04.4 LTS
> Config settings used:
> SET hive.exec.dynamic.partition=true;
>  SET hive.exec.dynamic.partition.mode=nonstrict;
>Reporter: Puneet Khatod
>Assignee: Hui An
>Priority: Major
>
> If the external table location spec has a '=' sign in it (which coincidentally 
> looks like a partition specifier), then dynamic partition loading fails.
> *Use cases:*
> Quite often the same data is used in different contexts by creating different 
> external tables on top of the data. Many times the tables have different 
> partition depths depending on how the data is organized.
> As in the example below, there are individual customer-specific tables and 
> queries/jobs to insert data partitioned by type, and there is another table 
> that gives a consolidated view of all the customers' data and thus has a 
> two-level partition by customer and type.
> The job to insert customer-specific data into a customer-specific table fails 
> if we use dynamic partitioning. A static partition insert on the same table 
> works fine though.
> *Replication:*
> To replicate, the following simple setup can be used. The execution below is on 
> Tez.
> *Source table:*
> CREATE EXTERNAL TABLE temp_dummy_table
>  (id STRING, type STRING)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/source/';
>  
> *Destination table:*
> CREATE EXTERNAL TABLE temp_dummy_dest_table
>  (id STRING)
>  PARTITIONED BY (type string)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/destination/customer=abc/';
>  
> *Insert into destination:*
> insert overwrite table temp_dummy_dest_table partition (type)
>  select i.id as id, i.type as type
>  from temp_dummy_table i
>  where i.type in ('type1','type2');
>  
> *Log and error messages on the CLI:*
> Loading data to table temp_dummy_dest_table partition (type=null)
> Failed with exception Partition spec \{type=type1, customer=abc} contains 
> non-partition columns
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
>  
> *Possible resolution:*
> The dynamic partitioning should consider only those partition specs which are 
> under the defined table root/base path. If the path itself has a partition-style 
> format (customer=abc in the above example), then it should not be considered a 
> partition, as it is outside the scope of the table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22088) Dynamic partition insert problem on external table with "=" in location path spec

2019-08-15 Thread Hui An (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907881#comment-16907881
 ] 

Hui An commented on HIVE-22088:
---

[~aihuaxu] What do you think of this problem?

> Dynamic partition insert problem on external table with "=" in location path 
> spec
> -
>
> Key: HIVE-22088
> URL: https://issues.apache.org/jira/browse/HIVE-22088
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.4
> Environment: Hive 2.6.0.10-2 Executing on Tez.
> OS: Ubuntu 16.04.4 LTS
> Config settings used:
> SET hive.exec.dynamic.partition=true;
>  SET hive.exec.dynamic.partition.mode=nonstrict;
>Reporter: Puneet Khatod
>Assignee: Hui An
>Priority: Major
>
> If the external table location spec has a '=' sign in it (which coincidentally 
> looks like a partition specifier), then dynamic partition loading fails.
> *Use cases:*
> Quite often the same data is used in different contexts by creating different 
> external tables on top of the data. Many times the tables have different 
> partition depths depending on how the data is organized.
> As in the example below, there are individual customer-specific tables and 
> queries/jobs to insert data partitioned by type, and there is another table 
> that gives a consolidated view of all the customers' data and thus has a 
> two-level partition by customer and type.
> The job to insert customer-specific data into a customer-specific table fails 
> if we use dynamic partitioning. A static partition insert on the same table 
> works fine though.
> *Replication:*
> To replicate, the following simple setup can be used. The execution below is on 
> Tez.
> *Source table:*
> CREATE EXTERNAL TABLE temp_dummy_table
>  (id STRING, type STRING)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/source/';
>  
> *Destination table:*
> CREATE EXTERNAL TABLE temp_dummy_dest_table
>  (id STRING)
>  PARTITIONED BY (type string)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/destination/customer=abc/';
>  
> *Insert into destination:*
> insert overwrite table temp_dummy_dest_table partition (type)
>  select i.id as id, i.type as type
>  from temp_dummy_table i
>  where i.type in ('type1','type2');
>  
> *Log and error messages on the CLI:*
> Loading data to table temp_dummy_dest_table partition (type=null)
> Failed with exception Partition spec \{type=type1, customer=abc} contains 
> non-partition columns
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
>  
> *Possible resolution:*
> The dynamic partitioning should consider only those partition specs which are 
> under the defined table root/base path. If the path itself has a partition-style 
> format (customer=abc in the above example), then it should not be considered a 
> partition, as it is outside the scope of the table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22088) Dynamic partition insert problem on external table with "=" in location path spec

2019-08-15 Thread Hui An (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907878#comment-16907878
 ] 

Hui An commented on HIVE-22088:
---

I think we should pass the MapReduce work's base dir to the makeSpecFromName method.

> Dynamic partition insert problem on external table with "=" in location path 
> spec
> -
>
> Key: HIVE-22088
> URL: https://issues.apache.org/jira/browse/HIVE-22088
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.4
> Environment: Hive 2.6.0.10-2 Executing on Tez.
> OS: Ubuntu 16.04.4 LTS
> Config settings used:
> SET hive.exec.dynamic.partition=true;
>  SET hive.exec.dynamic.partition.mode=nonstrict;
>Reporter: Puneet Khatod
>Assignee: Hui An
>Priority: Major
>
> If the external table location spec has a '=' sign in it (which coincidentally 
> looks like a partition specifier), then dynamic partition loading fails.
> *Use cases:*
> Quite often the same data is used in different contexts by creating different 
> external tables on top of the data. Many times the tables have different 
> partition depths depending on how the data is organized.
> As in the example below, there are individual customer-specific tables and 
> queries/jobs to insert data partitioned by type, and there is another table 
> that gives a consolidated view of all the customers' data and thus has a 
> two-level partition by customer and type.
> The job to insert customer-specific data into a customer-specific table fails 
> if we use dynamic partitioning. A static partition insert on the same table 
> works fine though.
> *Replication:*
> To replicate, the following simple setup can be used. The execution below is on 
> Tez.
> *Source table:*
> CREATE EXTERNAL TABLE temp_dummy_table
>  (id STRING, type STRING)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/source/';
>  
> *Destination table:*
> CREATE EXTERNAL TABLE temp_dummy_dest_table
>  (id STRING)
>  PARTITIONED BY (type string)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/destination/customer=abc/';
>  
> *Insert into destination:*
> insert overwrite table temp_dummy_dest_table partition (type)
>  select i.id as id, i.type as type
>  from temp_dummy_table i
>  where i.type in ('type1','type2');
>  
> *Log and error messages on the CLI:*
> Loading data to table temp_dummy_dest_table partition (type=null)
> Failed with exception Partition spec \{type=type1, customer=abc} contains 
> non-partition columns
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
>  
> *Possible resolution:*
> The dynamic partitioning should consider only those partition specs which are 
> under the defined table root/base path. If the path itself has a partition-style 
> format (customer=abc in the above example), then it should not be considered a 
> partition, as it is outside the scope of the table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22088) Dynamic partition insert problem on external table with "=" in location path spec

2019-08-15 Thread Hui An (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907875#comment-16907875
 ] 

Hui An commented on HIVE-22088:
---

In our newest version, Hive passes oldPart as a parameter to 
loadPartitionInternal (same as loadPartition above), so this error is not 
thrown. But it is still a bug we should fix: it can actually generate a wrong 
path in HDFS (in our case, type1's path is 
/home/destination/customer=abc/type=type1/customer=abc).

> Dynamic partition insert problem on external table with "=" in location path 
> spec
> -
>
> Key: HIVE-22088
> URL: https://issues.apache.org/jira/browse/HIVE-22088
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.4
> Environment: Hive 2.6.0.10-2 Executing on Tez.
> OS: Ubuntu 16.04.4 LTS
> Config settings used:
> SET hive.exec.dynamic.partition=true;
>  SET hive.exec.dynamic.partition.mode=nonstrict;
>Reporter: Puneet Khatod
>Assignee: Hui An
>Priority: Major
>
> If the external table location spec has a '=' sign in it (which coincidentally 
> looks like a partition specifier), then dynamic partition loading fails.
> *Use cases:*
> Quite often the same data is used in different contexts by creating different 
> external tables on top of the data. Many times the tables have different 
> partition depths depending on how the data is organized.
> As in the example below, there are individual customer-specific tables and 
> queries/jobs to insert data partitioned by type, and there is another table 
> that gives a consolidated view of all the customers' data and thus has a 
> two-level partition by customer and type.
> The job to insert customer-specific data into a customer-specific table fails 
> if we use dynamic partitioning. A static partition insert on the same table 
> works fine though.
> *Replication:*
> To replicate, the following simple setup can be used. The execution below is on 
> Tez.
> *Source table:*
> CREATE EXTERNAL TABLE temp_dummy_table
>  (id STRING, type STRING)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/source/';
>  
> *Destination table:*
> CREATE EXTERNAL TABLE temp_dummy_dest_table
>  (id STRING)
>  PARTITIONED BY (type string)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/destination/customer=abc/';
>  
> *Insert into destination:*
> insert overwrite table temp_dummy_dest_table partition (type)
>  select i.id as id, i.type as type
>  from temp_dummy_table i
>  where i.type in ('type1','type2');
>  
> *Log and error messages on the CLI:*
> Loading data to table temp_dummy_dest_table partition (type=null)
> Failed with exception Partition spec \{type=type1, customer=abc} contains 
> non-partition columns
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
>  
> *Possible resolution:*
> The dynamic partitioning should consider only those partition specs which are 
> under the defined table root/base path. If the path itself has a partition-style 
> format (customer=abc in the above example), then it should not be considered a 
> partition, as it is outside the scope of the table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-15 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907870#comment-16907870
 ] 

Peter Vary commented on HIVE-22081:
---

[~Rajkumar Singh]: Mostly only nits, but having the same style for code is the 
first step to better code:
 * Please fix checkstyle errors
 * Every if should look like this (space before and after the parenthesis)

{code:java}
if (isCompactDisabled) {{code}
 * Let me backpedal on my previous ask, and set this back to INFO (as this was 
info before):

{code:java}
LOG.debug("Compaction is disabled for table " + tbl.getTableName());{code}
 * This should be private, since nobody uses it, and static, since it does not 
use any member variables:

{code:java}
public boolean checkDynPartitioning(Table t, CompactionInfo ci){{code}
 * Please add spaces around + when concatenating strings:

{code:java}
LOG.error("Caught Exception while checking compactiton eligibility 
"+StringUtils.stringifyException(e));{code}
 

Otherwise +1 LGTM
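
Putting the nits above together, a tiny self-contained illustration (class, variable, and 
parameter names are simplified placeholders, not the reviewed patch):

{code:java}
import java.util.logging.Logger;

public class InitiatorStyleSketch {
  private static final Logger LOG = Logger.getLogger("Initiator");

  // private + static: only used here and touches no instance state
  private static boolean checkDynPartitioning(String tableName) {
    return false; // placeholder for the real check
  }

  static void checkTable(String tableName, boolean isCompactDisabled) {
    if (isCompactDisabled) {                                       // space after 'if' and before '{'
      LOG.info("Compaction is disabled for table " + tableName);   // INFO level, spaces around '+'
      return;
    }
    try {
      checkDynPartitioning(tableName);
    } catch (RuntimeException e) {
      LOG.severe("Caught exception while checking compaction eligibility " + e);
    }
  }

  public static void main(String[] args) {
    checkTable("acid_tbl", true);
  }
}
{code}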

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-22081.patch
>
>
> If automatic compaction is turned on, the Initiator thread checks for potential 
> tables/partitions which are eligible for compaction and runs some checks in a 
> for loop before requesting compaction for the eligible ones. Though the 
> Initiator thread is configured to run at a 5-minute interval by default, in the 
> case of many objects it keeps on running, as these checks are IO intensive and 
> hog CPU.
> In the proposed changes, I am planning to do the following:
> 1. Pass fewer objects to the for loop by filtering out the objects based on the 
> condition which we are checking within the loop.
> 2. Do an async call using a Future to determine the compaction type (this is 
> where we do FileSystem calls); a rough sketch of this follows below.
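
A rough, self-contained sketch of item 2 above (names and the thread-pool size are 
assumptions, not the attached patch): the IO-heavy compaction-type check is handed to a 
thread pool and the results are collected via Futures, so the Initiator loop itself stays 
cheap.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InitiatorAsyncSketch {
  enum CompactionType { MAJOR, MINOR }

  // stand-in for the FileSystem-heavy per-table/partition check
  static CompactionType determineCompactionType(String entity) throws Exception {
    Thread.sleep(50); // pretend to list base/delta directories
    return (entity.hashCode() % 2 == 0) ? CompactionType.MAJOR : CompactionType.MINOR;
  }

  public static void main(String[] args) throws Exception {
    List<String> candidates = List.of("db.t1", "db.t2/p=1", "db.t2/p=2"); // already pre-filtered (item 1)
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<CompactionType>> results = new ArrayList<>();
    for (String c : candidates) {
      Callable<CompactionType> task = () -> determineCompactionType(c);
      results.add(pool.submit(task));
    }
    for (int i = 0; i < candidates.size(); i++) {
      System.out.println(candidates.get(i) + " -> " + results.get(i).get());
    }
    pool.shutdown();
  }
}
{code}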



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907872#comment-16907872
 ] 

Hive QA commented on HIVE-22115:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977659/HIVE-22115.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 16708 tests 
executed
*Failed tests:*
{noformat}
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=232)
TestObjectStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
 (batchId=276)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18345/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18345/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18345/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977659 - PreCommit-HIVE-Build

> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the 
> following HiveServer2 property is set to false by the user:
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}
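
A minimal sketch of the gating described above (not the actual patch): 
hiveConf is assumed to be a HiveConf instance, and 
registerQueryRoutingAppender() is a hypothetical placeholder for whatever 
currently creates and registers the query-router logger.
{code:java}
// Only create and register the query-router logger when operation logging is
// enabled; registerQueryRoutingAppender() is a placeholder, not a real Hive API.
if (hiveConf.getBoolVar(HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED)) {
  registerQueryRoutingAppender();
}
{code}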



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22088) Dynamic partition insert problem on external table with "=" in location path spec

2019-08-15 Thread Hui An (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907865#comment-16907865
 ] 

Hui An commented on HIVE-22088:
---

The issue is caused by the method Warehouse.makeSpecFromName in Warehouse.java 
(branch-2.3):

{code:java}
// Warehouse.java
  public static void makeSpecFromName(Map<String, String> partSpec, Path currPath) {
    List<String[]> kvs = new ArrayList<String[]>();
    do {
      String component = currPath.getName();
      Matcher m = pat.matcher(component);
      if (m.matches()) {
        String k = unescapePathName(m.group(1));
        String v = unescapePathName(m.group(2));
        String[] kv = new String[2];
        kv[0] = k;
        kv[1] = v;
        kvs.add(kv);
      }
      currPath = currPath.getParent();
    } while (currPath != null && !currPath.getName().isEmpty());

    // reverse the list since we checked the part from leaf dir to table's base dir
    for (int i = kvs.size(); i > 0; i--) {
      partSpec.put(kvs.get(i - 1)[0], kvs.get(i - 1)[1]);
    }
  }
{code}
This method builds the partition spec by matching currPath and each of its 
parents against the pattern _pat_ ("([^/]+)=([^/]+)") until the path is 
exhausted, so it also adds "customer=abc" to partSpec before returning. After 
that, the method loadPartition in Hive.java uses this spec to look up oldPart, 
and an error is thrown because validatePartColumnNames finds that "customer" 
is not a partition column of the table.
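
To make the behaviour concrete, here is a small standalone sketch of the same 
parsing, simplified to plain string splitting instead of Hadoop Path traversal; 
MakeSpecDemo and the hard-coded path are illustrative only:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MakeSpecDemo {
  // Same key=value pattern as Warehouse.pat.
  private static final Pattern PAT = Pattern.compile("([^/]+)=([^/]+)");

  public static void main(String[] args) {
    Map<String, String> partSpec = new LinkedHashMap<>();
    String path = "/home/destination/customer=abc/type=type1";
    for (String component : path.split("/")) {
      Matcher m = PAT.matcher(component);
      if (m.matches()) {
        partSpec.put(m.group(1), m.group(2));
      }
    }
    // Prints {customer=abc, type=type1}: the table-location component is picked
    // up too, and validatePartColumnNames later rejects "customer".
    System.out.println(partSpec);
  }
}
{code}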

> Dynamic partition insert problem on external table with "=" in location path 
> spec
> -
>
> Key: HIVE-22088
> URL: https://issues.apache.org/jira/browse/HIVE-22088
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.4
> Environment: Hive 2.6.0.10-2 Executing on Tez.
> OS: Ubuntu 16.04.4 LTS
> Config settings used:
> SET hive.exec.dynamic.partition=true;
>  SET hive.exec.dynamic.partition.mode=nonstrict;
>Reporter: Puneet Khatod
>Assignee: Hui An
>Priority: Major
>
> If the external table location spec has an '=' sign in it (coincidentally 
> the partition specifier), then dynamic partition loading fails.
> *Use cases:*
> Quite often the same data is used in different contexts by creating different 
> external tables on top of it. The tables often have different partition 
> depths depending on how the data is organized.
> As in the example below, there are individual customer-specific tables with 
> queries/jobs that insert data partitioned by type, and another table that 
> gives a consolidated view of all the customers' data and therefore has a 
> two-level partitioning on customer and type.
> The job that inserts customer-specific data into a customer-specific table 
> fails if we use dynamic partitioning. A static partition insert on the same 
> table works fine, though.
> *Replication:*
> To replicate, the following simple setup can be used. The execution below is 
> on Tez.
> *Source table-*
> CREATE EXTERNAL TABLE temp_dummy_table
>  (id STRING, type STRING)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
>  LOCATION '/home/source/';
>  
> *Destination Table-*
> CREATE EXTERNAL TABLE temp_dummy_dest_table
>  (id STRING)
>  PARTITIONED BY (type string)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE
> LOCATION '/home/destination/customer=abc/';
>  
> *Insert into destination-*
> insert overwrite table temp_dummy_dest_table partition (type)
>  select i.id as id, i.type as type
>  from temp_dummy_table i
>  where i.type in ('type1','type2');
>  
> *Log and Error Msgs on CLI*-
> Loading data to table temp_dummy_dest_table partition (type=null)
> Failed with exception Partition spec \{type=type1, customer=abc} contains 
> non-partition columns
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
>  
> *Possible resolution:*
> The dynamic partitioning logic should consider only those partition specs 
> that are under the defined table root/base path. If the path itself contains 
> a partition-style component (customer=abc in the example above), that 
> component should not be treated as a partition, since it is outside the scope 
> of the table.
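
A minimal sketch of the resolution idea above (not the actual fix that was 
committed): stop the leaf-to-root walk at the table's base location so that 
key=value components belonging to the location itself never enter the 
partition spec. The method name makeSpecFromNameWithinTable and the tableRoot 
parameter are illustrative; pat and unescapePathName are the existing 
Warehouse members shown earlier.
{code:java}
// Illustrative only; assumes tableRoot and currPath are qualified the same way
// so Path.equals() can detect when the walk reaches the table's base location.
public static void makeSpecFromNameWithinTable(Map<String, String> partSpec,
    Path tableRoot, Path currPath) {
  Deque<String[]> kvs = new ArrayDeque<String[]>();
  while (currPath != null && !currPath.equals(tableRoot)
      && !currPath.getName().isEmpty()) {
    Matcher m = pat.matcher(currPath.getName());
    if (m.matches()) {
      // push() reverses the leaf-to-root order, so no explicit reverse loop is needed
      kvs.push(new String[] {unescapePathName(m.group(1)), unescapePathName(m.group(2))});
    }
    currPath = currPath.getParent();
  }
  for (String[] kv : kvs) {
    partSpec.put(kv[0], kv[1]);
  }
}
{code}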



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)