[jira] [Commented] (HIVE-21625) Fix TxnIdUtils.checkEquivalentWriteIds, also provides a comparison method

2019-05-05 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833531#comment-16833531
 ] 

Jason Dere commented on HIVE-21625:
---

+1, pending green run for ptests

> Fix TxnIdUtils.checkEquivalentWriteIds, also provides a comparison method
> -
>
> Key: HIVE-21625
> URL: https://issues.apache.org/jira/browse/HIVE-21625
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21625.1.patch, HIVE-21625.2.patch
>
>
> TxnIdUtils.checkEquivalentWriteIds has a bug: it considers (\{1,2,3,4\}, 6) 
> and (\{1,2,3,4,5,6\}, 8) equivalent (the notation is (invalid list, hwm)). 
> Here is a patch to fix it; it also provides a comparison method to check 
> which snapshot is newer.
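
A minimal, self-contained sketch (not the actual TxnIdUtils code; the class and
method names below are illustrative) of why the two snapshots above must not be
treated as equivalent: with the (invalid list, hwm) notation, (\{1,2,3,4\}, 6)
sees write ids \{5, 6\} while (\{1,2,3,4,5,6\}, 8) sees \{7, 8\}.

{code:java}
import java.util.Set;
import java.util.TreeSet;

// Hypothetical, simplified model of a write-id snapshot as (invalid list, high watermark).
public class WriteIdSnapshotDemo {
    static Set<Long> validIds(long[] invalid, long hwm) {
        Set<Long> invalidSet = new TreeSet<>();
        for (long id : invalid) invalidSet.add(id);
        Set<Long> valid = new TreeSet<>();
        for (long id = 1; id <= hwm; id++) {
            if (!invalidSet.contains(id)) valid.add(id);  // valid = visible to this snapshot
        }
        return valid;
    }

    static boolean equivalent(long[] inv1, long hwm1, long[] inv2, long hwm2) {
        // Two snapshots see the same data only if their sets of valid write ids match.
        return validIds(inv1, hwm1).equals(validIds(inv2, hwm2));
    }

    public static void main(String[] args) {
        long[] a = {1, 2, 3, 4};          // hwm 6 -> valid {5, 6}
        long[] b = {1, 2, 3, 4, 5, 6};    // hwm 8 -> valid {7, 8}
        System.out.println(validIds(a, 6));          // [5, 6]
        System.out.println(validIds(b, 8));          // [7, 8]
        System.out.println(equivalent(a, 6, b, 8));  // false
    }
}
{code}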



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20967) Handle alter events when replicate to cluster with hive.strict.managed.tables enabled.

2019-05-05 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-20967:
--
Attachment: HIVE-20967.03.patch
Status: Patch Available  (was: In Progress)

> Handle alter events when replicate to cluster with hive.strict.managed.tables 
> enabled.
> --
>
> Key: HIVE-20967
> URL: https://issues.apache.org/jira/browse/HIVE-20967
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: Ashutosh Bapat
>Priority: Minor
>  Labels: DR, pull-request-available
> Attachments: HIVE-20967.01.patch, HIVE-20967.03.patch, 
> HIVE-21678.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some of the events from Hive2 may cause conflicts in Hive3 
> (hive.strict.managed.tables=true) when applied, so they need to be handled 
> properly.
>  1. Alter table to convert a non-ACID table to ACID.
>  - Do not allow this conversion at the source of replication if strict.managed 
> is false.
> 2. Alter table or partition that changes the location.
>  - For managed tables at the source, the table location shouldn't be changed 
> for a non-partitioned table and the partition location shouldn't be changed 
> for a partitioned table, as the alter event doesn't capture the new file list 
> and may therefore cause data inconsistency. So, if the database is enabled for 
> replication at the source, altering the location of managed tables should be 
> blocked.
>  - For external partitioned tables, if the location is changed at the source, 
> the location should be changed for the table and for any partitions which 
> reside within the table location, but not for the partitions which are outside 
> the table location. (Maybe we just need the test.)
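
A minimal sketch of the containment check implied by the last bullet: a
partition follows a table-level location change only if its location lies under
the table location. This is illustrative only, with assumed inputs, not the
Hive replication code.

{code:java}
import java.net.URI;
import java.util.Objects;

public class PartitionLocationCheck {
    // True only when the partition path is under the table path on the same filesystem.
    static boolean isUnder(String partitionLocation, String tableLocation) {
        URI part = URI.create(partitionLocation).normalize();
        URI table = URI.create(tableLocation).normalize();
        String tablePath = table.getPath().endsWith("/") ? table.getPath() : table.getPath() + "/";
        return Objects.equals(part.getScheme(), table.getScheme())
            && Objects.equals(part.getAuthority(), table.getAuthority())
            && part.getPath().startsWith(tablePath);
    }

    public static void main(String[] args) {
        String table = "hdfs://nn:8020/warehouse/ext_tab";
        System.out.println(isUnder("hdfs://nn:8020/warehouse/ext_tab/dt=2019-01-01", table)); // true
        System.out.println(isUnder("hdfs://nn:8020/other/dt=2019-01-01", table));             // false
    }
}
{code}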



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20967) Handle alter events when replicate to cluster with hive.strict.managed.tables enabled.

2019-05-05 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-20967:
--
Status: In Progress  (was: Patch Available)

> Handle alter events when replicate to cluster with hive.strict.managed.tables 
> enabled.
> --
>
> Key: HIVE-20967
> URL: https://issues.apache.org/jira/browse/HIVE-20967
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: Ashutosh Bapat
>Priority: Minor
>  Labels: DR, pull-request-available
> Attachments: HIVE-20967.01.patch, HIVE-21678.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some of the events from Hive2 may cause conflicts in Hive3 
> (hive.strict.managed.tables=true) when applied, so they need to be handled 
> properly.
>  1. Alter table to convert a non-ACID table to ACID.
>  - Do not allow this conversion at the source of replication if strict.managed 
> is false.
> 2. Alter table or partition that changes the location.
>  - For managed tables at the source, the table location shouldn't be changed 
> for a non-partitioned table and the partition location shouldn't be changed 
> for a partitioned table, as the alter event doesn't capture the new file list 
> and may therefore cause data inconsistency. So, if the database is enabled for 
> replication at the source, altering the location of managed tables should be 
> blocked.
>  - For external partitioned tables, if the location is changed at the source, 
> the location should be changed for the table and for any partitions which 
> reside within the table location, but not for the partitions which are outside 
> the table location. (Maybe we just need the test.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21660) Wrong result when union all and lateral view with explode is used

2019-05-05 Thread Ganesha Shreedhara (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833521#comment-16833521
 ] 

Ganesha Shreedhara commented on HIVE-21660:
---

Can someone please review the patch?

Cc: [~ashutoshc] [~vihangk1]

 

> Wrong result when union all and lateral view with explode is used
> ---
>
> Key: HIVE-21660
> URL: https://issues.apache.org/jira/browse/HIVE-21660
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 3.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-21660.1.patch, HIVE-21660.patch
>
>
> There is data loss when data is inserted into a partitioned table using 
> union all and lateral view with explode. 
>  
> *Steps to reproduce:*
>  
> {code:java}
> create table t1 (id int, dt string);
> insert into t1 values (2, '2019-04-01');
> create table t2 (id int, dates array<string>);
> insert into t2 select 1 as id, array('2019-01-01','2019-01-02','2019-01-03') 
> as dates;
> create table dst (id int) partitioned by (dt string);
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.dynamic.partition=true;
> insert overwrite table dst partition (dt)
> select t.id, t.dt from (
> select id, dt from t1
> union all
> select id, dts as dt from t2 tt2 lateral view explode(tt2.dates) dd as dts ) 
> t;
> select * from dst;
> {code}
>  
>  
> *Actual Result:*
> {code:java}
> +--+--+
> | 2| 2019-04-01   |
> +--+--+{code}
>  
> *Expected Result* (Run only the select part from the above insert query)*:* 
> {code:java}
> +---++
> | 2     | 2019-04-01 |
> | 1     | 2019-01-01 |
> | 1     | 2019-01-02 |
> | 1     | 2019-01-03 |
> +---++{code}
>  
> The data retrieved from the second table via union all and lateral view with 
> explode is missing. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21637) Synchronized metastore cache

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833519#comment-16833519
 ] 

Hive QA commented on HIVE-21637:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967887/HIVE-21637.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17127/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17127/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17127/

Messages:
{noformat}
 This message was trimmed, see log for full details 
  found: org.apache.hadoop.hive.metastore.api.Table
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[665,13]
 method add_partition in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: org.apache.hadoop.hive.metastore.api.Partition,java.lang.String
  found: org.apache.hadoop.hive.metastore.api.Partition
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[697,15]
 method add_partition in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: org.apache.hadoop.hive.metastore.api.Partition,java.lang.String
  found: org.apache.hadoop.hive.metastore.api.Partition
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[729,13]
 method createTable in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: org.apache.hadoop.hive.metastore.api.Table,java.lang.String
  found: org.apache.hadoop.hive.metastore.api.Table
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[734,13]
 method add_partition in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: org.apache.hadoop.hive.metastore.api.Partition,java.lang.String
  found: org.apache.hadoop.hive.metastore.api.Partition
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[798,13]
 method createTable in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: org.apache.hadoop.hive.metastore.api.Table,java.lang.String
  found: org.apache.hadoop.hive.metastore.api.Table
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[803,13]
 method add_partition in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: org.apache.hadoop.hive.metastore.api.Partition,java.lang.String
  found: org.apache.hadoop.hive.metastore.api.Partition
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[838,13]
 method add_partition in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: org.apache.hadoop.hive.metastore.api.Partition,java.lang.String
  found: org.apache.hadoop.hive.metastore.api.Partition
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[864,13]
 method createTable in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: org.apache.hadoop.hive.metastore.api.Table,java.lang.String
  found: org.apache.hadoop.hive.metastore.api.Table
  reason: actual and formal argument lists differ in length
[ERROR] 
/data/hiveptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java:[873,13]
 method createTable in interface 
org.apache.hadoop.hive.metastore.IMetaStoreClient cannot be applied to given 
types;
  required: 

[jira] [Updated] (HIVE-21637) Synchronized metastore cache

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-21637:
--
Attachment: HIVE-21637.3.patch

> Synchronized metastore cache
> 
>
> Key: HIVE-21637
> URL: https://issues.apache.org/jira/browse/HIVE-21637
> Project: Hive
>  Issue Type: New Feature
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21637-1.patch, HIVE-21637.2.patch, 
> HIVE-21637.3.patch
>
>
> Currently, HMS has a cache implemented by CachedStore. The cache is updated 
> asynchronously, and in an HMS HA setting we can only get eventual consistency. 
> In this Jira, we try to make it synchronized.
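
As a rough illustration of the direction (not the CachedStore API; all names
below are hypothetical), a synchronized read path can tag each cached entry with
the last metastore event id applied to it and read through to the backing store
whenever the store has advanced past that id:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class VersionedMetaCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long appliedEventId;  // last change event folded into this value
        Entry(V value, long appliedEventId) { this.value = value; this.appliedEventId = appliedEventId; }
    }

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();

    /** latestEventId: current event id in the backing store; loader: read-through to it. */
    public V get(K key, long latestEventId, Function<K, V> loader) {
        Entry<V> e = cache.get(key);
        if (e != null && e.appliedEventId >= latestEventId) {
            return e.value;                     // cache is at least as new as the store
        }
        V fresh = loader.apply(key);            // fall back to the source of truth
        cache.put(key, new Entry<>(fresh, latestEventId));
        return fresh;
    }
}
{code}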



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21637) Synchronized metastore cache

2019-05-05 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833515#comment-16833515
 ] 

Daniel Dai commented on HIVE-21637:
---

Resync with master.

> Synchronized metastore cache
> 
>
> Key: HIVE-21637
> URL: https://issues.apache.org/jira/browse/HIVE-21637
> Project: Hive
>  Issue Type: New Feature
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-21637-1.patch, HIVE-21637.2.patch, 
> HIVE-21637.3.patch
>
>
> Currently, HMS has a cache implemented by CachedStore. The cache is updated 
> asynchronously, and in an HMS HA setting we can only get eventual consistency. 
> In this Jira, we try to make it synchronized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21694) Hive driver wait time is fixed for task getting executed in parallel.

2019-05-05 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833512#comment-16833512
 ] 

Gopal V commented on HIVE-21694:


Is this related to HIVE-21646?

Sleep times are a bad idea in general; instead, use condition waits (see the 
fix in HIVE-20989, for example).
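
A generic sketch of the condition-wait pattern being suggested (not the
HIVE-20989 patch itself): the waiting thread blocks on a Condition and the task
signals it on completion, so there is no polling interval to tune.

{code:java}
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class TaskCompletionLatch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition done = lock.newCondition();
    private boolean finished = false;

    public void markFinished() {
        lock.lock();
        try {
            finished = true;
            done.signalAll();                 // wake any waiting driver thread immediately
        } finally {
            lock.unlock();
        }
    }

    public void awaitFinished() throws InterruptedException {
        lock.lock();
        try {
            while (!finished) {               // guard against spurious wakeups
                done.await();
            }
        } finally {
            lock.unlock();
        }
    }
}
{code}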

> Hive driver wait time is fixed for task getting executed in parallel.
> -
>
> Key: HIVE-21694
> URL: https://issues.apache.org/jira/browse/HIVE-21694
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
>
> During command execution, the Hive driver executes a task in a separate thread 
> if the task is set to run in parallel. After starting the task, the driver 
> checks whether the task has finished; if not, it waits for 2 seconds before 
> waking up again to check the task status. For tasks whose execution time is in 
> the milliseconds, this wait can induce substantial overhead. So instead of a 
> fixed wait time, an exponentially backed-off sleep can be used to reduce the 
> overhead: the sleep time can start at 100 ms and double on each iteration, up 
> to 2 seconds.
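
A minimal sketch of the proposed backoff, assuming a hypothetical
isTaskFinished() check rather than the real Driver/TaskRunner code: the wait
starts at 100 ms and doubles each iteration, capped at the previous fixed 2
seconds.

{code:java}
import java.util.function.BooleanSupplier;

public class BackoffPoller {
    public static void waitForTask(BooleanSupplier isTaskFinished) throws InterruptedException {
        long sleepMs = 100;                       // initial wait
        final long maxSleepMs = 2000;             // previous fixed wait time
        while (!isTaskFinished.getAsBoolean()) {
            Thread.sleep(sleepMs);
            sleepMs = Math.min(sleepMs * 2, maxSleepMs);  // exponential backoff, capped
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long deadline = System.currentTimeMillis() + 450;   // pretend the task runs ~450 ms
        waitForTask(() -> System.currentTimeMillis() >= deadline);
        System.out.println("task finished");
    }
}
{code}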



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21694) Hive driver wait time is fixed for task getting executed in parallel.

2019-05-05 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21694:
---
Summary: Hive driver wait time is fixed for task getting executed in 
parallel.  (was: Hive driver waiting time is fixed for task getting executed in 
parallel.)

> Hive driver wait time is fixed for task getting executed in parallel.
> -
>
> Key: HIVE-21694
> URL: https://issues.apache.org/jira/browse/HIVE-21694
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
>
> During command execution, the Hive driver executes a task in a separate thread 
> if the task is set to run in parallel. After starting the task, the driver 
> checks whether the task has finished; if not, it waits for 2 seconds before 
> waking up again to check the task status. For tasks whose execution time is in 
> the milliseconds, this wait can induce substantial overhead. So instead of a 
> fixed wait time, an exponentially backed-off sleep can be used to reduce the 
> overhead: the sleep time can start at 100 ms and double on each iteration, up 
> to 2 seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21694) Hive driver waiting time is fixed for task getting executed in parallel.

2019-05-05 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera reassigned HIVE-21694:
--


> Hive driver waiting time is fixed for task getting executed in parallel.
> 
>
> Key: HIVE-21694
> URL: https://issues.apache.org/jira/browse/HIVE-21694
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
> Fix For: 4.0.0
>
>
> During command execution, the Hive driver executes a task in a separate thread 
> if the task is set to run in parallel. After starting the task, the driver 
> checks whether the task has finished; if not, it waits for 2 seconds before 
> waking up again to check the task status. For tasks whose execution time is in 
> the milliseconds, this wait can induce substantial overhead. So instead of a 
> fixed wait time, an exponentially backed-off sleep can be used to reduce the 
> overhead: the sleep time can start at 100 ms and double on each iteration, up 
> to 2 seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21678) CTAS creating a partitioned table fails because of no writeId

2019-05-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21678?focusedWorklogId=237587=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-237587
 ]

ASF GitHub Bot logged work on HIVE-21678:
-

Author: ASF GitHub Bot
Created on: 06/May/19 03:01
Start Date: 06/May/19 03:01
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #614: HIVE-21678
URL: https://github.com/apache/hive/pull/614#discussion_r281053245
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
 ##
 @@ -7644,7 +7645,8 @@ protected Operator genFileSinkPlan(String dest, QB qb, 
Operator input)
 checkAcidConstraints(qb, tableDescriptor, null);
   }
   // isReplace = false in case concurrent operation is executed
-  ltd = new LoadTableDesc(queryTmpdir, tableDescriptor, dpCtx, acidOp, 
false, writeId);
+  ltd = new LoadTableDesc(queryTmpdir, tableDescriptor, dpCtx, acidOp, 
false, writeId,
+  true);
 
 Review comment:
   In this case the write id is already passed, and write id allocation does 
not check whether the table is already present; it just uses the table name. So 
it may not be required to pass the flag. The write id should be populated 
correctly in the load table desc.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 237587)
Time Spent: 0.5h  (was: 20m)

> CTAS creating a partitioned table fails because of no writeId
> -
>
> Key: HIVE-21678
> URL: https://issues.apache.org/jira/browse/HIVE-21678
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21678.01.patch, HIVE-21678.02.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> create table t1(a int, b int);
> insert into t1 values (1, 2), (3, 4);
> create table t6_part partitioned by (a) stored as orc tblproperties 
> ("transactional"="true") as select * from t1;
> ERROR : FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask. MoveTask : Write id is not set in 
> the config by open txn task for migration
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask. MoveTask : Write id is not 
> set in the config by open txn task for migration (state=08S01,code=1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21678) CTAS creating a partitioned table fails because of no writeId

2019-05-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21678?focusedWorklogId=237586=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-237586
 ]

ASF GitHub Bot logged work on HIVE-21678:
-

Author: ASF GitHub Bot
Created on: 06/May/19 03:01
Start Date: 06/May/19 03:01
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #614: HIVE-21678
URL: https://github.com/apache/hive/pull/614#discussion_r281053603
 
 

 ##
 File path: ql/src/test/queries/clientpositive/ctas.q
 ##
 @@ -61,11 +61,13 @@ create table nzhang_ctas6 (key string, `to` string);
 insert overwrite table nzhang_ctas6 select key, value from src tablesample (10 
rows);
 create table nzhang_ctas7 as select key, `to` from nzhang_ctas6;
 
-
-
-
-
-
-
-
-
+-- ACID CTAS
+set hive.support.concurrency=true;
+set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
+set hive.exec.dynamic.partition.mode=nonstrict;
+set hive.stats.autogather=false;
+
+create table acid_ctas_part partitioned by (k)
+  stored as orc TBLPROPERTIES ('transactional'='true')
+  as select key k, value from src order by k limit 5;
+select k, value from acid_ctas_part;
 
 Review comment:
   instead of .q file ..i think adding a replication test with CTAS for 
partitioned ACID table will be more meaningful.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 237586)
Time Spent: 20m  (was: 10m)

> CTAS creating a partitioned table fails because of no writeId
> -
>
> Key: HIVE-21678
> URL: https://issues.apache.org/jira/browse/HIVE-21678
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21678.01.patch, HIVE-21678.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> create table t1(a int, b int);
> insert into t1 values (1, 2), (3, 4);
> create table t6_part partitioned by (a) stored as orc tblproperties 
> ("transactional"="true") as select * from t1;
> ERROR : FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask. MoveTask : Write id is not set in 
> the config by open txn task for migration
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask. MoveTask : Write id is not 
> set in the config by open txn task for migration (state=08S01,code=1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20613:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Patch pushed to master.

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20613.1.patch, HIVE-20613.2.patch, 
> HIVE-20613.2.patch, HIVE-20613.3.patch, HIVE-20613.4.patch
>
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (especially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario
> 4. Test whitelist/blacklist
> 5. Test prewarm memory limit estimation code
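
For item 3 above, a generic sketch of the kind of multithreaded check intended,
using a stand-in ConcurrentHashMap instead of the real CachedStore/SharedCache
API: one thread plays the background refresh while readers verify they never
observe a missing entry mid-refresh.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ConcurrentRefreshSketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> cache = new ConcurrentHashMap<>();
        cache.put("db1.tbl1", "v0");

        AtomicBoolean sawMissingEntry = new AtomicBoolean(false);
        CountDownLatch stop = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // "Background refresh": repeatedly replace the entry, never removing it first.
        pool.submit(() -> {
            for (int i = 1; i <= 1000; i++) {
                cache.put("db1.tbl1", "v" + i);
            }
            stop.countDown();
        });

        // Readers: the entry must stay visible for the whole refresh run.
        for (int r = 0; r < 3; r++) {
            pool.submit(() -> {
                while (stop.getCount() > 0) {
                    if (cache.get("db1.tbl1") == null) {
                        sawMissingEntry.set(true);
                    }
                }
            });
        }

        stop.await();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("missing entry observed: " + sawMissingEntry.get());  // expect false
    }
}
{code}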



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reassigned HIVE-20613:
-

Assignee: Vaibhav Gumashta  (was: Daniel Dai)

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20613.1.patch, HIVE-20613.2.patch, 
> HIVE-20613.2.patch, HIVE-20613.3.patch, HIVE-20613.4.patch
>
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (especially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario
> 4. Test whitelist/blacklist
> 5. Test prewarm memory limit estimation code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833498#comment-16833498
 ] 

Hive QA commented on HIVE-20613:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967882/HIVE-20613.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15980 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17126/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17126/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17126/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12967882 - PreCommit-HIVE-Build

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20613.1.patch, HIVE-20613.2.patch, 
> HIVE-20613.2.patch, HIVE-20613.3.patch, HIVE-20613.4.patch
>
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (especially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario
> 4. Test whitelist/blacklist
> 5. Test prewarm memory limit estimation code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833482#comment-16833482
 ] 

Hive QA commented on HIVE-20613:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
11s{color} | {color:blue} standalone-metastore/metastore-server in master has 
182 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 23 new + 179 unchanged - 9 fixed = 202 total (was 188) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} standalone-metastore/metastore-server generated 0 
new + 181 unchanged - 1 fixed = 181 total (was 182) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17126/dev-support/hive-personality.sh
 |
| git revision | master / aebfaad |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17126/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17126/yetus/whitespace-eol.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17126/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20613.1.patch, HIVE-20613.2.patch, 
> HIVE-20613.2.patch, HIVE-20613.3.patch, HIVE-20613.4.patch
>
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (especially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario
> 4. Test whitelist/blacklist
> 5. Test prewarm memory limit estimation code



--
This message was 

[jira] [Updated] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20613:
--
Attachment: HIVE-20613.4.patch

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20613.1.patch, HIVE-20613.2.patch, 
> HIVE-20613.2.patch, HIVE-20613.3.patch, HIVE-20613.4.patch
>
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (especially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario
> 4. Test whitelist/blacklist
> 5. Test prewarm memory limit estimation code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833469#comment-16833469
 ] 

Hive QA commented on HIVE-20613:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967881/HIVE-20613.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17125/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17125/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17125/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-05-05 22:25:44.638
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-17125/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-05-05 22:25:44.642
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at aebfaad HIVE-20615: CachedStore: Background refresh thread bug 
fixes (Vaibhav Gumashta, reviewed by Daniel Dai)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at aebfaad HIVE-20615: CachedStore: Background refresh thread bug 
fixes (Vaibhav Gumashta, reviewed by Daniel Dai)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-05-05 22:25:45.607
+ rm -rf ../yetus_PreCommit-HIVE-Build-17125
+ mkdir ../yetus_PreCommit-HIVE-Build-17125
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-17125
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17125/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java:
 does not exist in index
error: 
a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/SharedCache.java:
 does not exist in index
error: 
a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/cache/TestCachedStore.java:
 does not exist in index
Going to apply patch with: git apply -p1
/data/hiveptest/working/scratch/build.patch:215: trailing whitespace.
} 
/data/hiveptest/working/scratch/build.patch:837: trailing whitespace.
  // Note: the 44Kb approximation has been determined based on trial/error. 
/data/hiveptest/working/scratch/build.patch:902: trailing whitespace.
// Create a new unpartitioned table under basedb1 
/data/hiveptest/working/scratch/build.patch:1198: trailing whitespace.
// Create a new unpartitioned table under db1 
/data/hiveptest/working/scratch/build.patch:1288: trailing whitespace.
  // Note: the 44Kb approximation has been determined based on trial/error. 
warning: squelched 3 whitespace errors
warning: 8 lines add whitespace errors.
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc1775397990093280147.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc1775397990093280147.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR 

[jira] [Updated] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20613:
--
Fix Version/s: 4.0.0

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20613.1.patch, HIVE-20613.2.patch, 
> HIVE-20613.2.patch, HIVE-20613.3.patch
>
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (especially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario
> 4. Test whitelist/blacklist
> 5. Test prewarm memory limit estimation code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20613:
--
Attachment: HIVE-20613.3.patch

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20613.1.patch, HIVE-20613.2.patch, 
> HIVE-20613.2.patch, HIVE-20613.3.patch
>
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (especially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario
> 4. Test whitelist/blacklist
> 5. Test prewarm memory limit estimation code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reassigned HIVE-20613:
-

Assignee: Daniel Dai  (was: Vaibhav Gumashta)

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20613.1.patch, HIVE-20613.2.patch, 
> HIVE-20613.2.patch, HIVE-20613.3.patch
>
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (especially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario
> 4. Test whitelist/blacklist
> 5. Test prewarm memory limit estimation code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-13582) E061-07 and E061-12: Quantified Comparison Predicates

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833458#comment-16833458
 ] 

Hive QA commented on HIVE-13582:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967880/HIVE-13582.6.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 15946 tests 
executed
*Failed tests:*
{noformat}
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=232)
TestObjectStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_ALL]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_ANY]
 (batchId=176)
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[0]
 (batchId=211)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17124/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17124/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17124/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12967880 - PreCommit-HIVE-Build

> E061-07 and E061-12: Quantified Comparison Predicates
> -
>
> Key: HIVE-13582
> URL: https://issues.apache.org/jira/browse/HIVE-13582
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-13582.1.patch, HIVE-13582.2.patch, 
> HIVE-13582.3.patch, HIVE-13582.4.patch, HIVE-13582.5.patch, HIVE-13582.6.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This is a part of the SQL:2011 Analytics Complete Umbrella JIRA HIVE-13554. 
> Quantified comparison predicates (ANY/SOME/ALL) are mandatory in the SQL 
> standard. Hive should support the predicates (E061-07) and you should be able 
> to use these with subqueries (E061-12)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-13582) E061-07 and E061-12: Quantified Comparison Predicates

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833446#comment-16833446
 ] 

Hive QA commented on HIVE-13582:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 15 new + 539 unchanged - 38 
fixed = 554 total (was 577) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 10 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 8 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
10s{color} | {color:red} ql generated 14 new + 2244 unchanged - 9 fixed = 2258 
total (was 2253) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Switch statement found in 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveSubQueryRemoveRule.rewriteInExists(RexSubQuery,
 Set, RelOptUtil$Logic, HiveSubQRemoveRelBuilder, int, boolean) where one case 
falls through to the next case  At HiveSubQueryRemoveRule.java:Set, 
RelOptUtil$Logic, HiveSubQRemoveRelBuilder, int, boolean) where one case falls 
through to the next case  At HiveSubQueryRemoveRule.java:[lines 460-463] |
|  |  Switch statement found in 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveSubQueryRemoveRule.rewriteInExists(RexSubQuery,
 Set, RelOptUtil$Logic, HiveSubQRemoveRelBuilder, int, boolean) where default 
case is missing  At HiveSubQueryRemoveRule.java:Set, RelOptUtil$Logic, 
HiveSubQRemoveRelBuilder, int, boolean) where default case is missing  At 
HiveSubQueryRemoveRule.java:[lines 356-378] |
|  |  Dead store to stream_retval in 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceSimilarExpressionQuantifierPredicate(CommonTree)
  At 
HiveParser_IdentifiersParser.java:org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceSimilarExpressionQuantifierPredicate(CommonTree)
  At HiveParser_IdentifiersParser.java:[line 9822] |
|  |  Dead store to stream_retval in 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.quantifierType()  
At 
HiveParser_IdentifiersParser.java:org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.quantifierType()
  At HiveParser_IdentifiersParser.java:[line 9940] |
|  |  Redundant nullcheck of nonReserved311, which is known to be non-null in 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier()  
Redundant null check at HiveParser_IdentifiersParser.java:is known to be 

[jira] [Assigned] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reassigned HIVE-20615:
-

Assignee: Vaibhav Gumashta  (was: Daniel Dai)

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.3.patch, HIVE-20615.4.patch, 
> HIVE-20615.5.patch, HIVE-21625.2.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-13582) E061-07 and E061-12: Quantified Comparison Predicates

2019-05-05 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-13582:
---
Status: Patch Available  (was: Open)

> E061-07 and E061-12: Quantified Comparison Predicates
> -
>
> Key: HIVE-13582
> URL: https://issues.apache.org/jira/browse/HIVE-13582
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-13582.1.patch, HIVE-13582.2.patch, 
> HIVE-13582.3.patch, HIVE-13582.4.patch, HIVE-13582.5.patch, HIVE-13582.6.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This is a part of the SQL:2011 Analytics Complete Umbrella JIRA HIVE-13554. 
> Quantified comparison predicates (ANY/SOME/ALL) are mandatory in the SQL 
> standard. Hive should support the predicates (E061-07) and you should be able 
> to use these with subqueries (E061-12)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-13582) E061-07 and E061-12: Quantified Comparison Predicates

2019-05-05 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-13582:
---
Attachment: HIVE-13582.6.patch

> E061-07 and E061-12: Quantified Comparison Predicates
> -
>
> Key: HIVE-13582
> URL: https://issues.apache.org/jira/browse/HIVE-13582
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-13582.1.patch, HIVE-13582.2.patch, 
> HIVE-13582.3.patch, HIVE-13582.4.patch, HIVE-13582.5.patch, HIVE-13582.6.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This is a part of the SQL:2011 Analytics Complete Umbrella JIRA HIVE-13554. 
> Quantified comparison predicates (ANY/SOME/ALL) are mandatory in the SQL 
> standard. Hive should support the predicates (E061-07) and you should be able 
> to use these with subqueries (E061-12)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-13582) E061-07 and E061-12: Quantified Comparison Predicates

2019-05-05 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-13582:
---
Status: Open  (was: Patch Available)

> E061-07 and E061-12: Quantified Comparison Predicates
> -
>
> Key: HIVE-13582
> URL: https://issues.apache.org/jira/browse/HIVE-13582
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-13582.1.patch, HIVE-13582.2.patch, 
> HIVE-13582.3.patch, HIVE-13582.4.patch, HIVE-13582.5.patch, HIVE-13582.6.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This is a part of the SQL:2011 Analytics Complete Umbrella JIRA HIVE-13554. 
> Quantified comparison predicates (ANY/SOME/ALL) are mandatory in the SQL 
> standard. Hive should support the predicates (E061-07) and you should be able 
> to use these with subqueries (E061-12)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833425#comment-16833425
 ] 

Hive QA commented on HIVE-21693:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967874/HIVE-21693.03.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15972 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17123/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17123/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17123/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12967874 - PreCommit-HIVE-Build

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch, 
> HIVE-21693.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.
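
A rough sketch of the target structure described above; the class names below 
(DDLOperation, KillQueryDesc, GenericDDLTask) are invented for illustration and do 
not claim to match the actual classes in the patch. The pattern is one immutable 
desc per request, one operation class per DDL command, and a task that only 
dispatches.

{code:java}
// Illustrative-only sketch of the refactoring pattern described in the issue.

/** Immutable request object, analogous to a DDLDesc subclass. */
final class KillQueryDesc {
  private final String queryId;

  KillQueryDesc(String queryId) {
    this.queryId = queryId;
  }

  String getQueryId() {
    return queryId;
  }
}

/** One class per DDL operation, grouped into per-topic packages. */
interface DDLOperation<T> {
  int execute(T desc) throws Exception;
}

class KillQueryOperation implements DDLOperation<KillQueryDesc> {
  @Override
  public int execute(KillQueryDesc desc) throws Exception {
    // real code would look up and terminate the query identified by desc.getQueryId()
    return 0;
  }
}

/** The task itself only dispatches and stays agnostic of concrete operations (cf. DDLTask2). */
class GenericDDLTask {
  <T> int run(DDLOperation<T> operation, T desc) throws Exception {
    return operation.execute(desc);
  }
}
{code}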



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833416#comment-16833416
 ] 

Hive QA commented on HIVE-21693:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
55s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} hcatalog/core in master has 28 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
47s{color} | {color:blue} itests/util in master has 46 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
50s{color} | {color:red} ql: The patch generated 2 new + 1346 unchanged - 22 
fixed = 1348 total (was 1368) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17123/dev-support/hive-personality.sh
 |
| git revision | master / 341fc33 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17123/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql hcatalog/core itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17123/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch, 
> HIVE-21693.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * 

[jira] [Updated] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20615:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master again after green run.

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.3.patch, HIVE-20615.4.patch, 
> HIVE-20615.5.patch, HIVE-21625.2.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.
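
For context, the mechanism being fixed is a background daemon thread that 
periodically rebuilds the cached metastore table data. The sketch below only 
illustrates that general pattern with a ScheduledExecutorService; it is not the 
actual CachedStore implementation.

{code:java}
// Generic sketch of a periodically refreshing cache; not the actual CachedStore code.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class TableCacheRefresher {
  private final Map<String, Object> cache = new ConcurrentHashMap<>();
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor(r -> {
        Thread t = new Thread(r, "cache-refresh");
        t.setDaemon(true); // background thread must not block JVM shutdown
        return t;
      });

  /** Start the refresh loop once; the executor handles rescheduling. */
  void start(long intervalSeconds) {
    scheduler.scheduleAtFixedRate(this::refreshAll, intervalSeconds, intervalSeconds,
        TimeUnit.SECONDS);
  }

  private void refreshAll() {
    try {
      // placeholder: real code would re-read table metadata from the backing store
      // and replace the corresponding cache entries
      cache.clear();
    } catch (RuntimeException e) {
      // swallow and retry on the next tick; an escaping exception cancels the task
    }
  }
}
{code}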



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Status: Open  (was: Patch Available)

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch, 
> HIVE-21693.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Attachment: HIVE-21693.03.patch

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch, 
> HIVE-21693.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Status: Patch Available  (was: Open)

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch, 
> HIVE-21693.03.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833368#comment-16833368
 ] 

Hive QA commented on HIVE-21693:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967866/HIVE-21693.02.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 15972 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_abort] 
(batchId=46)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg4] 
(batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg5] 
(batchId=101)
org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testMergeOnTezEdges
 (batchId=320)
org.apache.hadoop.hive.ql.parse.TestQBCompact.showCompactions (batchId=314)
org.apache.hadoop.hive.ql.parse.TestQBCompact.showTxns (batchId=314)
org.apache.hive.minikdc.TestSSLWithMiniKdc.testConnection (batchId=286)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17122/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17122/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17122/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12967866 - PreCommit-HIVE-Build

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833363#comment-16833363
 ] 

Hive QA commented on HIVE-21693:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} hcatalog/core in master has 28 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} itests/util in master has 46 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
51s{color} | {color:red} ql: The patch generated 6 new + 1343 unchanged - 22 
fixed = 1349 total (was 1365) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17122/dev-support/hive-personality.sh
 |
| git revision | master / 341fc33 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17122/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql hcatalog/core itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17122/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for 

[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Status: Patch Available  (was: Open)

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Status: Open  (was: Patch Available)

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Attachment: HIVE-21693.02.patch

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch, HIVE-21693.02.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833291#comment-16833291
 ] 

Hive QA commented on HIVE-21693:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967862/HIVE-21693.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17121/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17121/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17121/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-05-05 09:31:53.401
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-17121/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-05-05 09:31:53.405
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 341fc33 HIVE-14669: Have the actual error reported when a q test 
fails instead of having to go through the logs (Laszlo Bodor via Zoltan 
Haindrich)
+ git clean -f -d
Removing ${project.basedir}/
Removing itests/${project.basedir}/
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 341fc33 HIVE-14669: Have the actual error reported when a q test 
fails instead of having to go through the logs (Laszlo Bodor via Zoltan 
Haindrich)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-05-05 09:31:54.579
+ rm -rf ../yetus_PreCommit-HIVE-Build-17121
+ mkdir ../yetus_PreCommit-HIVE-Build-17121
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-17121
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17121/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc2749861970115698451.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc2749861970115698451.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc2569106713924516175.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 41 classes.
ANTLR Parser Generator  Version 3.5.2
Output file 

[jira] [Commented] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833290#comment-16833290
 ] 

Hive QA commented on HIVE-20615:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967857/HIVE-20615.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15972 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17120/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17120/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17120/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12967857 - PreCommit-HIVE-Build

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.3.patch, HIVE-20615.4.patch, 
> HIVE-20615.5.patch, HIVE-21625.2.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Status: Patch Available  (was: Open)

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Attachment: HIVE-21693.01.patch

> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21693.01.patch
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21693:
--
Description: 
DDLTask is a huge class, more than 5000 lines long. The related DDLWork is also 
a huge class, which has a field for each DDL operation it supports. The goal is 
to refactor these in order to have everything cut into more handleable classes 
under the package  org.apache.hadoop.hive.ql.exec.ddl:
 * have a separate class for each operation
 * have a package for each operation group (database ddl, table ddl, etc), so 
the amount of classes under a package is more manageable
 * make all the requests (DDLDesc subclasses) immutable
 * DDLTask should be agnostic to the actual operations
 * right now let's ignore the issue of having some operations handled by 
DDLTask which are not actual DDL operations (lock, unlock, desc...)

In the interim time when there are two DDLTask and DDLWork classes in the code 
base the new ones in the new package are called DDLTask2 and DDLWork2 thus 
avoiding the usage of fully qualified class names where both the old and the 
new classes are in use.

Step #7: extract all the process related operations from the old DDLTask, and 
move them under the new package.

  was:
DDLTask is a huge class, more than 5000 lines long. The related DDLWork is also 
a huge class, which has a field for each DDL operation it supports. The goal is 
to refactor these in order to have everything cut into more handleable classes 
under the package  org.apache.hadoop.hive.ql.exec.ddl:
 * have a separate class for each operation
 * have a package for each operation group (database ddl, table ddl, etc), so 
the amount of classes under a package is more manageable
 * make all the requests (DDLDesc subclasses) immutable
 * DDLTask should be agnostic to the actual operations
 * right now let's ignore the issue of having some operations handled by 
DDLTask which are not actual DDL operations (lock, unlock, desc...)

In the interim time when there are two DDLTask and DDLWork classes in the code 
base the new ones in the new package are called DDLTask2 and DDLWork2 thus 
avoiding the usage of fully qualified class names where both the old and the 
new classes are in use.

Step #6: extract all the workload management related operations from the old 
DDLTask, and move them under the new package.


> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #7: extract all the process related operations from the old DDLTask, and 
> move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21693) Break up DDLTask - extract Process related operations

2019-05-05 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely reassigned HIVE-21693:
-


> Break up DDLTask - extract Process related operations
> -
>
> Key: HIVE-21693
> URL: https://issues.apache.org/jira/browse/HIVE-21693
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is 
> also a huge class, which has a field for each DDL operation it supports. The 
> goal is to refactor these in order to have everything cut into more 
> handleable classes under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each operation
>  * have a package for each operation group (database ddl, table ddl, etc), so 
> the amount of classes under a package is more manageable
>  * make all the requests (DDLDesc subclasses) immutable
>  * DDLTask should be agnostic to the actual operations
>  * right now let's ignore the issue of having some operations handled by 
> DDLTask which are not actual DDL operations (lock, unlock, desc...)
> In the interim time when there are two DDLTask and DDLWork classes in the 
> code base the new ones in the new package are called DDLTask2 and DDLWork2 
> thus avoiding the usage of fully qualified class names where both the old and 
> the new classes are in use.
> Step #6: extract all the workload management related operations from the old 
> DDLTask, and move them under the new package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2019-05-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833269#comment-16833269
 ] 

Hive QA commented on HIVE-20615:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
13s{color} | {color:blue} standalone-metastore/metastore-server in master has 
180 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} standalone-metastore/metastore-server generated 2 new 
+ 180 unchanged - 0 fixed = 182 total (was 180) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  Redundant nullcheck of tableColStats, which is known to be non-null in 
org.apache.hadoop.hive.metastore.cache.SharedCache.populateTableInCache(Table, 
ColumnStatistics, List, List, AggrStats, AggrStats)  Redundant null check at 
SharedCache.java:is known to be non-null in 
org.apache.hadoop.hive.metastore.cache.SharedCache.populateTableInCache(Table, 
ColumnStatistics, List, List, AggrStats, AggrStats)  Redundant null check at 
SharedCache.java:[line 1203] |
|  |  Redundant nullcheck of tblWrapper, which is known to be non-null in 
org.apache.hadoop.hive.metastore.cache.SharedCache.removeTableFromCache(String, 
String, String)  Redundant null check at SharedCache.java:is known to be 
non-null in 
org.apache.hadoop.hive.metastore.cache.SharedCache.removeTableFromCache(String, 
String, String)  Redundant null check at SharedCache.java:[line 1324] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17120/dev-support/hive-personality.sh
 |
| git revision | master / 341fc33 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17120/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17120/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.
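
The two new FindBugs findings above are instances of its "redundant nullcheck of a 
value known to be non-null" detector: the value is dereferenced (or otherwise proven 
non-null) before the null check, so the check can never fail. A contrived 
illustration of the pattern, unrelated to the real SharedCache code:

{code:java}
// Contrived illustration of FindBugs' "redundant nullcheck of value known to be
// non-null" warning; unrelated to the real SharedCache code.
import java.util.List;

class RedundantNullCheckExample {
  int describe(List<String> columns) {
    if (columns.isEmpty()) {   // dereference: columns is known to be non-null from here on
      return 0;
    }
    if (columns != null) {     // FindBugs flags this: the check can never be false
      return columns.size();
    }
    return -1;
  }
}
{code}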



> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix 

[jira] [Updated] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2019-05-05 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20615:
--
Attachment: HIVE-20615.5.patch

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.1.patch, HIVE-20615.1.patch, 
> HIVE-20615.1.patch, HIVE-20615.3.patch, HIVE-20615.4.patch, 
> HIVE-20615.5.patch, HIVE-21625.2.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21692) Hiveconf initiated in "fat jar" fails to retrieve configuration from hive-common jar

2019-05-05 Thread Danny Polonsky (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Polonsky updated HIVE-21692:
--
Summary: Hiveconf initiated in "fat jar" fails to retrieve configuration 
from hive-common jar  (was: findConfigFile fails to retrieve configuration from 
hive-common jar from "fat jar")

> Hiveconf initiated in "fat jar" fails to retrieve configuration from 
> hive-common jar
> 
>
> Key: HIVE-21692
> URL: https://issues.apache.org/jira/browse/HIVE-21692
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 2.3.2
>Reporter: Danny Polonsky
>Priority: Minor
>
> HiveConf initiated in a "fat jar" assembled by the Spring Boot Maven plugin cannot 
> access its configuration file; it fails with "URI is not hierarchical" because the 
> constructed URI starts with jar:.
> e.g.
> _jar:[file:/opt/someproject/someproject.jar!/BOOT-INF/lib/hive-common-2.3.2.jar!/_|file:///opt/someproject/someproject.jar!/BOOT-INF/lib/hive-common-2.3.2.jar!/_]
> Relevant stack trace:
> _Caused by: java.lang.IllegalArgumentException: URI is not hierarchical_
>  _at java.io.File.<init>(File.java:418) ~[?:1.8.0_144]_
>  _at org.apache.hadoop.hive.conf.HiveConf.findConfigFile(HiveConf.java:176) 
> ~[hive-common-2.3.2.jar!/:2.3.2]_
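
A minimal sketch of the failure mode and the usual workaround: File(URI) only 
accepts a hierarchical file: URI, so for an opaque jar: URI (as produced inside a 
nested fat jar) the resource has to be read as a stream instead. The resource name 
hive-site.xml below is only an example.

{code:java}
// Sketch of the failure mode: File(URI) requires a hierarchical file: URI, so a
// jar:/nested-jar URI throws IllegalArgumentException; reading the resource as a
// stream works either way. The resource name "hive-site.xml" is only an example.
import java.io.File;
import java.io.InputStream;
import java.net.URI;
import java.net.URL;

public class NestedJarResourceExample {
  public static void main(String[] args) throws Exception {
    URL url = NestedJarResourceExample.class.getClassLoader().getResource("hive-site.xml");
    if (url == null) {
      return; // resource not on the classpath
    }
    URI uri = url.toURI();
    if ("file".equals(uri.getScheme())) {
      File f = new File(uri); // fine for a plain, exploded-classpath resource
      System.out.println(f.getAbsolutePath());
    } else {
      // opaque jar: URI (e.g. inside a Spring Boot fat jar): avoid File entirely
      try (InputStream in = url.openStream()) {
        System.out.println("read " + in.available() + " bytes of config");
      }
    }
  }
}
{code}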



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21692) findConfigFile fails to retrieve configuration from hive-common jar from "fat jar"

2019-05-05 Thread Danny Polonsky (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Polonsky updated HIVE-21692:
--
Description: 
HiveConf initiated in a "fat jar" assembled by the Spring Boot Maven plugin cannot 
access its configuration file; it fails with "URI is not hierarchical" because the 
constructed URI starts with jar:.

e.g.

_jar:[file:/opt/someproject/someproject.jar!/BOOT-INF/lib/hive-common-2.3.2.jar!/_|file:///opt/someproject/someproject.jar!/BOOT-INF/lib/hive-common-2.3.2.jar!/_]

Relevant stack trace:

_Caused by: java.lang.IllegalArgumentException: URI is not hierarchical_
 _at java.io.File.<init>(File.java:418) ~[?:1.8.0_144]_
 _at org.apache.hadoop.hive.conf.HiveConf.findConfigFile(HiveConf.java:176) 
~[hive-common-2.3.2.jar!/:2.3.2]_

  was:
HiveConf initiated in a "fat jar" assembled by the Spring Boot Maven plugin cannot 
access its configuration file; it fails with "URI is not hierarchical" because the 
constructed URI starts with jar:.

e.g.

_jar:file:/opt/someproject/someproject.jar!/BOOT-INF/lib/hive-common-2.3.2.jar!/_

Relevant stack trace:

_Caused by: java.lang.IllegalArgumentException: URI is not hierarchical_
 _at java.io.File.<init>(File.java:418) ~[?:1.8.0_144]_
 _at org.apache.hadoop.hive.conf.HiveConf.findConfigFile(HiveConf.java:176) 
~[hive-common-2.3.2.jar!/:2.3.2]_


> findConfigFile fails to retrieve configuration from hive-common jar from "fat 
> jar"
> --
>
> Key: HIVE-21692
> URL: https://issues.apache.org/jira/browse/HIVE-21692
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 2.3.2
>Reporter: Danny Polonsky
>Priority: Minor
>
> HiveConf initiated in a "fat jar" assembled by the Spring Boot Maven plugin cannot 
> access its configuration file; it fails with "URI is not hierarchical" because the 
> constructed URI starts with jar:.
> e.g.
> _jar:[file:/opt/someproject/someproject.jar!/BOOT-INF/lib/hive-common-2.3.2.jar!/_|file:///opt/someproject/someproject.jar!/BOOT-INF/lib/hive-common-2.3.2.jar!/_]
> Relevant stack trace:
> _Caused by: java.lang.IllegalArgumentException: URI is not hierarchical_
>  _at java.io.File.<init>(File.java:418) ~[?:1.8.0_144]_
>  _at org.apache.hadoop.hive.conf.HiveConf.findConfigFile(HiveConf.java:176) 
> ~[hive-common-2.3.2.jar!/:2.3.2]_



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)