[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243804&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243804 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 17/May/19 03:23
Start Date: 17/May/19 03:23
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284966682
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -471,6 +472,7 @@ private int executeIncrementalLoad(DriverContext driverContext) {
     if (work.hasBootstrapLoadTasks()) {
       LOG.debug("Current incremental dump have tables to be bootstrapped. Switching to bootstrap "
           + "mode after applying all events.");
+      work.setBootstrapDuringIncLoad(true);
 
 Review comment:
   We should set it once for all iterations in the ReplLoadWork constructor.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243804)
Time Spent: 2.5h  (was: 2h 20m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch, 
> HIVE-21731.03.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The scenario is:
>  # Replication policy is set with a Hive 3.0 source cluster (strict managed 
> table set to false) and a Hive 4.0 target cluster with strict managed table 
> set to true.
>  # User upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables 
> and hive.repl.bootstrap.acid.tables to true, triggering bootstrap of the 
> newly converted ACID tables.
>  # As the old tables are non-transactional, the dump does not filter the 
> events even though bootstrap of ACID tables is enabled. This causes the repl 
> load to fail, as the write ID is not set in the table object.
>  # If we skip the event replay, the bootstrap fails with a dump directory 
> mismatch error.
> The fix should be:
>  # Skip dumping the alter table event if bootstrap of ACID tables is enabled 
> and the alter converts a non-ACID table to an ACID table.
>  # In case of bootstrap during incremental load, ignore the dump directory 
> property set in the table object.
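The dump-side check in fix step 1 above can be sketched as a plain decision helper. This is a minimal illustration only: the method and flag names are hypothetical and are not the actual Hive replication API.

```java
public class ReplDumpFilterSketch {

    static boolean shouldDumpAlterTableEvent(boolean bootstrapAcidTables,
                                             boolean wasAcidBefore,
                                             boolean isAcidAfter) {
        // Skip the event only when ACID bootstrap is requested AND this
        // alter converts a non-ACID table into an ACID one: that table is
        // bootstrapped anyway, and replaying the conversion event would
        // leave the target table object without a valid write ID.
        boolean convertsToAcid = !wasAcidBefore && isAcidAfter;
        return !(bootstrapAcidTables && convertsToAcid);
    }

    public static void main(String[] args) {
        // Conversion event is skipped when ACID bootstrap is on.
        if (shouldDumpAlterTableEvent(true, false, true)) throw new AssertionError();
        // Ordinary alters, or runs without ACID bootstrap, are still dumped.
        if (!shouldDumpAlterTableEvent(true, true, true)) throw new AssertionError();
        if (!shouldDumpAlterTableEvent(false, false, true)) throw new AssertionError();
        System.out.println("ok");
    }
}
```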



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243803&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243803 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 17/May/19 03:22
Start Date: 17/May/19 03:22
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284966591
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -264,6 +264,7 @@ a database ( directory )
         || work.getPathsToCopyIterator().hasNext();

     if (addAnotherLoadTask) {
+      // pass on the bootstrap during incremental flag for next iteration.
 
 Review comment:
   No need for this comment.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243803)
Time Spent: 2h 20m  (was: 2h 10m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch, 
> HIVE-21731.03.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The scenario is:
>  # Replication policy is set with a Hive 3.0 source cluster (strict managed 
> table set to false) and a Hive 4.0 target cluster with strict managed table 
> set to true.
>  # User upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables 
> and hive.repl.bootstrap.acid.tables to true, triggering bootstrap of the 
> newly converted ACID tables.
>  # As the old tables are non-transactional, the dump does not filter the 
> events even though bootstrap of ACID tables is enabled. This causes the repl 
> load to fail, as the write ID is not set in the table object.
>  # If we skip the event replay, the bootstrap fails with a dump directory 
> mismatch error.
> The fix should be:
>  # Skip dumping the alter table event if bootstrap of ACID tables is enabled 
> and the alter converts a non-ACID table to an ACID table.
>  # In case of bootstrap during incremental load, ignore the dump directory 
> property set in the table object.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243802 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 17/May/19 03:21
Start Date: 17/May/19 03:21
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284966493
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/WarehouseInstance.java
 ##
 @@ -563,7 +563,21 @@ public void testEventCounts(String dbName, long fromEventId, Long toEventId, Int
   }
 
   public boolean isAcidEnabled() {
-    return hiveConf.getBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY);
+    if (hiveConf.getBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY) &&
+        hiveConf.getVar(HiveConf.ConfVars.HIVE_TXN_MANAGER)
+            .equals("org.apache.hadoop.hive.ql.lockmgr.DbTxnManager")) {
+      return true;
+    }
+    return false;
+  }
+
+  public void disableAcid() {
+    hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
+    hiveConf.setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER,
+        "org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager");
+  }
+
+  public void enableAcid() {
+    hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, true);
+    hiveConf.setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER,
+        "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager");
 
 Review comment:
   This works for this test because the replica warehouse is ACID enabled and initialises the Derby DB, which is shared by both warehouse instances. But if both primary and replica are ACID disabled, dynamically changing this config won't work.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243802)
Time Spent: 2h 10m  (was: 2h)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch, 
> HIVE-21731.03.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The scenario is:
>  # Replication policy is set with a Hive 3.0 source cluster (strict managed 
> table set to false) and a Hive 4.0 target cluster with strict managed table 
> set to true.
>  # User upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables 
> and hive.repl.bootstrap.acid.tables to true, triggering bootstrap of the 
> newly converted ACID tables.
>  # As the old tables are non-transactional, the dump does not filter the 
> events even though bootstrap of ACID tables is enabled. This causes the repl 
> load to fail, as the write ID is not set in the table object.
>  # If we skip the event replay, the bootstrap fails with a dump directory 
> mismatch error.
> The fix should be:
>  # Skip dumping the alter table event if bootstrap of ACID tables is enabled 
> and the alter converts a non-ACID table to an ACID table.
>  # In case of bootstrap during incremental load, ignore the dump directory 
> property set in the table object.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread Vineet Garg (JIRA)


[ https://issues.apache.org/jira/browse/HIVE-21365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841837#comment-16841837 ]

Vineet Garg commented on HIVE-21365:


+1 pending tests.

> Refactor Hep planner steps in CBO
> -
>
> Key: HIVE-21365
> URL: https://issues.apache.org/jira/browse/HIVE-21365
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21365.01.patch, HIVE-21365.01.patch, 
> HIVE-21365.02.patch, HIVE-21365.03.patch, HIVE-21365.03.patch, 
> HIVE-21365.04.patch, HIVE-21365.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Using subprograms to decrease number of planner instantiations and benefit 
> fully from metadata providers caching, among other benefits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243758&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243758 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 17/May/19 00:19
Start Date: 17/May/19 00:19
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284941789
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveAggregateJoinTransposeRule.java
 ##
 @@ -63,20 +66,16 @@
  */
 public class HiveAggregateJoinTransposeRule extends AggregateJoinTransposeRule {

-  /** Extended instance of the rule that can push down aggregate functions. */
-  public static final HiveAggregateJoinTransposeRule INSTANCE =
-      new HiveAggregateJoinTransposeRule(HiveAggregate.class, HiveJoin.class,
-          HiveRelFactories.HIVE_BUILDER, true);
+  private static final Logger LOG = LoggerFactory.getLogger(HiveAggregateJoinTransposeRule.class);

   private final boolean allowFunctions;
+  private final AtomicInteger noColsMissingStats;
 
 Review comment:
   We have always captured how many columns have missing stats (this is the same as the previous behavior; note that this variable comes from CalcitePlanner and is used in other places, e.g., ```RelOptHiveTable``` holds a reference to it). The change here is that, since we trigger all rules using a single planner, we cannot capture the stats exception from outside the rule; we need to do it from within the rule logic instead. That is why it is passed as a parameter.
   About AtomicInteger vs other types: I do not think this is actually shared by multiple threads, but I suspect it was chosen because Boolean/Integer are immutable; instead of passing a reference to the planner and mutating the object itself, you pass around the AtomicInteger, which is mutable.
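The immutability point above can be shown with plain JDK types. This minimal sketch is independent of the Hive/Calcite classes under review; only `AtomicInteger` itself is a real API here.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Integer is immutable, so a callee can never update the caller's count
// through a shared reference; AtomicInteger is a mutable holder, so changes
// made inside a rule are visible to the planner that handed it in.
public class MutableCounterSketch {

    // Stands in for a rule reacting to a column with missing stats.
    static void ruleSawMissingStats(AtomicInteger noColsMissingStats) {
        noColsMissingStats.incrementAndGet(); // mutates the shared holder
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0); // owned by the "planner"
        ruleSawMissingStats(counter);
        ruleSawMissingStats(counter);
        if (counter.get() != 2) throw new AssertionError();
        System.out.println(counter.get()); // prints 2
    }
}
```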
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243758)
Time Spent: 1.5h  (was: 1h 20m)

> Refactor Hep planner steps in CBO
> -
>
> Key: HIVE-21365
> URL: https://issues.apache.org/jira/browse/HIVE-21365
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21365.01.patch, HIVE-21365.01.patch, 
> HIVE-21365.02.patch, HIVE-21365.03.patch, HIVE-21365.03.patch, 
> HIVE-21365.04.patch, HIVE-21365.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Using subprograms to decrease number of planner instantiations and benefit 
> fully from metadata providers caching, among other benefits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243757&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243757 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 17/May/19 00:18
Start Date: 17/May/19 00:18
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284942180
 
 

 ##
 File path: ql/src/test/results/clientpositive/perf/tez/cbo_query14.q.out
 ##
 @@ -251,7 +251,7 @@ HiveSortLimit(sort0=[$0], sort1=[$1], sort2=[$2], sort3=[$3], dir0=[ASC], dir1=[
 HiveProject($f0=[$0], $f1=[$1], $f2=[$2])
   HiveFilter(condition=[=($3, 3)])
     HiveAggregate(group=[{0, 1, 2}], agg#0=[count($3)])
-      HiveProject(i_brand_id=[$0], i_class_id=[$1], i_category_id=[$2], $f3=[$3])
+      HiveProject(brand_id=[$0], class_id=[$1], category_id=[$2], $f3=[$3])
 
 Review comment:
   I am not sure why these names changed (it happened in a few other q files 
too). Since I have to regenerate a few more q files, I will explore this and 
comment back.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243757)
Time Spent: 1h 20m  (was: 1h 10m)

> Refactor Hep planner steps in CBO
> -
>
> Key: HIVE-21365
> URL: https://issues.apache.org/jira/browse/HIVE-21365
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21365.01.patch, HIVE-21365.01.patch, 
> HIVE-21365.02.patch, HIVE-21365.03.patch, HIVE-21365.03.patch, 
> HIVE-21365.04.patch, HIVE-21365.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Using subprograms to decrease number of planner instantiations and benefit 
> fully from metadata providers caching, among other benefits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243754&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243754 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 17/May/19 00:16
Start Date: 17/May/19 00:16
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284941883
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
 ##
 @@ -2150,75 +1966,69 @@ private RelNode applyPreJoinOrderingTransforms(RelNode basePlan, RelMetadataProv
       rules.add(HiveSortLimitPullUpConstantsRule.INSTANCE);
       rules.add(HiveUnionPullUpConstantsRule.INSTANCE);
       rules.add(HiveAggregatePullUpConstantsRule.INSTANCE);
-      perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
-      basePlan = hepPlan(basePlan, true, mdProvider, executorProvider, HepMatchOrder.BOTTOM_UP,
-          rules.toArray(new RelOptRule[rules.size()]));
-      perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
-          "Calcite: Prejoin ordering transformation, PPD, not null predicates, transitive inference, constant folding");
+      generatePartialProgram(program, true, HepMatchOrder.BOTTOM_UP,
+          rules.toArray(new RelOptRule[rules.size()]));

       // 4. Push down limit through outer join
       // NOTE: We run this after PPD to support old style join syntax.
       // Ex: select * from R1 left outer join R2 where ((R1.x=R2.x) and R1.y<10) or
       // ((R1.x=R2.x) and R1.z=10)) and rand(1) < 0.1 order by R1.x limit 10
       if (conf.getBoolVar(HiveConf.ConfVars.HIVE_OPTIMIZE_LIMIT_TRANSPOSE)) {
-        perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
         // This should be a cost based decision, but till we enable the extended cost
         // model, we will use the given value for the variable
         final float reductionProportion = HiveConf.getFloatVar(conf,
             HiveConf.ConfVars.HIVE_OPTIMIZE_LIMIT_TRANSPOSE_REDUCTION_PERCENTAGE);
         final long reductionTuples = HiveConf.getLongVar(conf,
             HiveConf.ConfVars.HIVE_OPTIMIZE_LIMIT_TRANSPOSE_REDUCTION_TUPLES);
-        basePlan = hepPlan(basePlan, true, mdProvider, executorProvider, HiveSortMergeRule.INSTANCE,
-            HiveSortProjectTransposeRule.INSTANCE, HiveSortJoinReduceRule.INSTANCE,
-            HiveSortUnionReduceRule.INSTANCE);
-        basePlan = hepPlan(basePlan, true, mdProvider, executorProvider, HepMatchOrder.BOTTOM_UP,
+        generatePartialProgram(program, true, HepMatchOrder.TOP_DOWN,
+            HiveSortMergeRule.INSTANCE, HiveSortProjectTransposeRule.INSTANCE,
+            HiveSortJoinReduceRule.INSTANCE, HiveSortUnionReduceRule.INSTANCE);
+        generatePartialProgram(program, true, HepMatchOrder.BOTTOM_UP,
             new HiveSortRemoveRule(reductionProportion, reductionTuples),
             HiveProjectSortTransposeRule.INSTANCE);
-        perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
-            "Calcite: Prejoin ordering transformation, Push down limit through outer join");
       }

-      // 5. Push Down Semi Joins
+      // Push Down Semi Joins
       //TODO: Enable this later
       /*perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
       basePlan = hepPlan(basePlan, true, mdProvider, executorProvider, SemiJoinJoinTransposeRule.INSTANCE,
           SemiJoinFilterTransposeRule.INSTANCE, SemiJoinProjectTransposeRule.INSTANCE);
       perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
           "Calcite: Prejoin ordering transformation, Push Down Semi Joins"); */

-      perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
-      basePlan = hepPlan(basePlan, false, mdProvider, executorProvider,
-          HiveSortLimitRemoveRule.INSTANCE);
-      perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
-          "Calcite: Trying to remove Limit and Order by");
+      // 5. Try to remove limit and order by
+      generatePartialProgram(program, false, HepMatchOrder.DEPTH_FIRST,
+          HiveSortLimitRemoveRule.INSTANCE);

       // 6. Apply Partition Pruning
-      perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
-      basePlan = hepPlan(basePlan, false, mdProvider, executorProvider, new HivePartitionPruneRule(conf));
-      perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
-          "Calcite: Prejoin ordering transformation, Partition Pruning");
+      generatePartialProgram(program, false, HepMatchOrder.DEPTH_FIRST,
+          new HivePartitionPruneRule(conf));

       // 7. Projection Pruning (this introduces select 

[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243752&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243752 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 17/May/19 00:15
Start Date: 17/May/19 00:15
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284941789
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveAggregateJoinTransposeRule.java
 ##
 @@ -63,20 +66,16 @@
  */
 public class HiveAggregateJoinTransposeRule extends AggregateJoinTransposeRule {

-  /** Extended instance of the rule that can push down aggregate functions. */
-  public static final HiveAggregateJoinTransposeRule INSTANCE =
-      new HiveAggregateJoinTransposeRule(HiveAggregate.class, HiveJoin.class,
-          HiveRelFactories.HIVE_BUILDER, true);
+  private static final Logger LOG = LoggerFactory.getLogger(HiveAggregateJoinTransposeRule.class);

   private final boolean allowFunctions;
+  private final AtomicInteger noColsMissingStats;
 
 Review comment:
   We have always captured how many columns have missing stats (this is the same as the previous behavior; note that this variable comes from CalcitePlanner and is used in other places, e.g., ```RelOptHiveTable``` holds a reference to it). The change here is that, since we trigger all rules using a single planner, we cannot capture the stats exception from outside the rule; we need to do it from within the rule logic instead. That is why it is passed as a parameter.
   About AtomicInteger vs other types: I do not think this is actually shared by multiple threads, but I suspect it was chosen because Boolean/Integer are immutable; instead of passing a reference to the planner and mutating the object itself, you pass around the AtomicInteger, which is mutable.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243752)
Time Spent: 50m  (was: 40m)

> Refactor Hep planner steps in CBO
> -
>
> Key: HIVE-21365
> URL: https://issues.apache.org/jira/browse/HIVE-21365
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21365.01.patch, HIVE-21365.01.patch, 
> HIVE-21365.02.patch, HIVE-21365.03.patch, HIVE-21365.03.patch, 
> HIVE-21365.04.patch, HIVE-21365.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Using subprograms to decrease number of planner instantiations and benefit 
> fully from metadata providers caching, among other benefits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243753&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243753 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 17/May/19 00:15
Start Date: 17/May/19 00:15
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284941818
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveAggregateJoinTransposeRule.java
 ##
 @@ -63,20 +66,16 @@
  */
 public class HiveAggregateJoinTransposeRule extends AggregateJoinTransposeRule {

-  /** Extended instance of the rule that can push down aggregate functions. */
-  public static final HiveAggregateJoinTransposeRule INSTANCE =
-      new HiveAggregateJoinTransposeRule(HiveAggregate.class, HiveJoin.class,
-          HiveRelFactories.HIVE_BUILDER, true);
+  private static final Logger LOG = LoggerFactory.getLogger(HiveAggregateJoinTransposeRule.class);

   private final boolean allowFunctions;
+  private final AtomicInteger noColsMissingStats;

   /** Creates an AggregateJoinTransposeRule that may push down functions. */
-  private HiveAggregateJoinTransposeRule(Class<? extends Aggregate> aggregateClass,
-      Class<? extends Join> joinClass,
-      RelBuilderFactory relBuilderFactory,
-      boolean allowFunctions) {
-    super(aggregateClass, joinClass, relBuilderFactory, true);
-    this.allowFunctions = allowFunctions;
+  public HiveAggregateJoinTransposeRule(AtomicInteger noColsMissingStats) {
 
 Review comment:
   Since ```AtomicInteger``` needs to be passed as a parameter from 
```CalcitePlanner```, this cannot be a static final instance anymore.
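The point above generalizes: once a rule takes per-planner-run state through its constructor, a shared static INSTANCE no longer makes sense. A minimal sketch with hypothetical names (not the Hive/Calcite classes):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Once a rule takes per-run state in its constructor, each planner run must
// build its own instance; a shared static INSTANCE would silently mix
// counters across runs.
class StatefulRuleSketch {
    private final AtomicInteger noColsMissingStats;

    StatefulRuleSketch(AtomicInteger noColsMissingStats) {
        this.noColsMissingStats = noColsMissingStats;
    }

    void onMissingStats() {
        noColsMissingStats.incrementAndGet();
    }
}

public class PerRunRuleDemo {
    public static void main(String[] args) {
        AtomicInteger runA = new AtomicInteger();
        AtomicInteger runB = new AtomicInteger();
        new StatefulRuleSketch(runA).onMissingStats(); // only run A fires
        new StatefulRuleSketch(runB);                  // run B never fires
        if (runA.get() != 1 || runB.get() != 0) throw new AssertionError();
        System.out.println("ok");
    }
}
```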
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243753)
Time Spent: 1h  (was: 50m)

> Refactor Hep planner steps in CBO
> -
>
> Key: HIVE-21365
> URL: https://issues.apache.org/jira/browse/HIVE-21365
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21365.01.patch, HIVE-21365.01.patch, 
> HIVE-21365.02.patch, HIVE-21365.03.patch, HIVE-21365.03.patch, 
> HIVE-21365.04.patch, HIVE-21365.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Using subprograms to decrease number of planner instantiations and benefit 
> fully from metadata providers caching, among other benefits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243741&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243741 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 16/May/19 23:50
Start Date: 16/May/19 23:50
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284928489
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
 ##
 @@ -2150,75 +1966,69 @@ private RelNode applyPreJoinOrderingTransforms(RelNode basePlan, RelMetadataProv
       rules.add(HiveSortLimitPullUpConstantsRule.INSTANCE);
       rules.add(HiveUnionPullUpConstantsRule.INSTANCE);
       rules.add(HiveAggregatePullUpConstantsRule.INSTANCE);
-      perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
-      basePlan = hepPlan(basePlan, true, mdProvider, executorProvider, HepMatchOrder.BOTTOM_UP,
-          rules.toArray(new RelOptRule[rules.size()]));
-      perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
-          "Calcite: Prejoin ordering transformation, PPD, not null predicates, transitive inference, constant folding");
+      generatePartialProgram(program, true, HepMatchOrder.BOTTOM_UP,
+          rules.toArray(new RelOptRule[rules.size()]));

       // 4. Push down limit through outer join
       // NOTE: We run this after PPD to support old style join syntax.
       // Ex: select * from R1 left outer join R2 where ((R1.x=R2.x) and R1.y<10) or
       // ((R1.x=R2.x) and R1.z=10)) and rand(1) < 0.1 order by R1.x limit 10
       if (conf.getBoolVar(HiveConf.ConfVars.HIVE_OPTIMIZE_LIMIT_TRANSPOSE)) {
-        perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
         // This should be a cost based decision, but till we enable the extended cost
         // model, we will use the given value for the variable
         final float reductionProportion = HiveConf.getFloatVar(conf,
             HiveConf.ConfVars.HIVE_OPTIMIZE_LIMIT_TRANSPOSE_REDUCTION_PERCENTAGE);
         final long reductionTuples = HiveConf.getLongVar(conf,
             HiveConf.ConfVars.HIVE_OPTIMIZE_LIMIT_TRANSPOSE_REDUCTION_TUPLES);
-        basePlan = hepPlan(basePlan, true, mdProvider, executorProvider, HiveSortMergeRule.INSTANCE,
-            HiveSortProjectTransposeRule.INSTANCE, HiveSortJoinReduceRule.INSTANCE,
-            HiveSortUnionReduceRule.INSTANCE);
-        basePlan = hepPlan(basePlan, true, mdProvider, executorProvider, HepMatchOrder.BOTTOM_UP,
+        generatePartialProgram(program, true, HepMatchOrder.TOP_DOWN,
+            HiveSortMergeRule.INSTANCE, HiveSortProjectTransposeRule.INSTANCE,
+            HiveSortJoinReduceRule.INSTANCE, HiveSortUnionReduceRule.INSTANCE);
+        generatePartialProgram(program, true, HepMatchOrder.BOTTOM_UP,
             new HiveSortRemoveRule(reductionProportion, reductionTuples),
             HiveProjectSortTransposeRule.INSTANCE);
-        perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
-            "Calcite: Prejoin ordering transformation, Push down limit through outer join");
       }

-      // 5. Push Down Semi Joins
+      // Push Down Semi Joins
       //TODO: Enable this later
       /*perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
       basePlan = hepPlan(basePlan, true, mdProvider, executorProvider, SemiJoinJoinTransposeRule.INSTANCE,
           SemiJoinFilterTransposeRule.INSTANCE, SemiJoinProjectTransposeRule.INSTANCE);
       perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
           "Calcite: Prejoin ordering transformation, Push Down Semi Joins"); */

-      perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
-      basePlan = hepPlan(basePlan, false, mdProvider, executorProvider,
-          HiveSortLimitRemoveRule.INSTANCE);
-      perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
-          "Calcite: Trying to remove Limit and Order by");
+      // 5. Try to remove limit and order by
+      generatePartialProgram(program, false, HepMatchOrder.DEPTH_FIRST,
+          HiveSortLimitRemoveRule.INSTANCE);

       // 6. Apply Partition Pruning
-      perfLogger.PerfLogBegin(this.getClass().getName(), PerfLogger.OPTIMIZER);
-      basePlan = hepPlan(basePlan, false, mdProvider, executorProvider, new HivePartitionPruneRule(conf));
-      perfLogger.PerfLogEnd(this.getClass().getName(), PerfLogger.OPTIMIZER,
-          "Calcite: Prejoin ordering transformation, Partition Pruning");
+      generatePartialProgram(program, false, HepMatchOrder.DEPTH_FIRST,
+          new HivePartitionPruneRule(conf));

       // 7. Projection Pruning (this introduces 

[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243740&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243740 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 16/May/19 23:50
Start Date: 16/May/19 23:50
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284926715
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveAggregateJoinTransposeRule.java
 ##
 @@ -63,20 +66,16 @@
  */
public class HiveAggregateJoinTransposeRule extends AggregateJoinTransposeRule {
 
-  /** Extended instance of the rule that can push down aggregate functions. */
-  public static final HiveAggregateJoinTransposeRule INSTANCE =
-  new HiveAggregateJoinTransposeRule(HiveAggregate.class, HiveJoin.class,
-  HiveRelFactories.HIVE_BUILDER, true);
+  private static final Logger LOG = LoggerFactory.getLogger(HiveAggregateJoinTransposeRule.class);
 
   private final boolean allowFunctions;
+  private final AtomicInteger noColsMissingStats;
 
 Review comment:
   Why AtomicInteger instead of boolean? This will not be shared among different threads, right?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
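
One plausible reading of the noColsMissingStats field (an assumption on my part; the thread does not confirm it): the AtomicInteger serves not as a thread-safety device but as a mutable holder, letting the rule report a count back to the planner that created it, since a primitive passed as an argument cannot be updated for the caller. A minimal self-contained sketch of that pattern; all names here (MissingStatsCounterDemo, onRuleMatch, countMissing) are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model, not Hive code: the rule cannot return a value to the
// planner that registered it, so it reports through a shared mutable counter.
public class MissingStatsCounterDemo {

    // Stands in for the rule body firing once per column it examines.
    public static void onRuleMatch(boolean hasStats, AtomicInteger noColsMissingStats) {
        if (!hasStats) {
            // Mutates the caller's holder; a plain boolean/int argument could not do this.
            noColsMissingStats.incrementAndGet();
        }
    }

    // Stands in for the planner: it owns the counter and reads it afterwards.
    public static int countMissing(boolean[] colHasStats) {
        AtomicInteger noColsMissingStats = new AtomicInteger(0);
        for (boolean hasStats : colHasStats) {
            onRuleMatch(hasStats, noColsMissingStats);
        }
        return noColsMissingStats.get();
    }

    public static void main(String[] args) {
        System.out.println(countMissing(new boolean[]{true, false, true, false})); // prints 2
    }
}
```

On this reading, AtomicInteger is chosen for its mutability rather than its atomicity, which would also answer the threading question: sharing across threads is not required for the pattern to be useful.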


Issue Time Tracking
---

Worklog Id: (was: 243740)
Time Spent: 20m  (was: 10m)

> Refactor Hep planner steps in CBO
> -
>
> Key: HIVE-21365
> URL: https://issues.apache.org/jira/browse/HIVE-21365
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21365.01.patch, HIVE-21365.01.patch, 
> HIVE-21365.02.patch, HIVE-21365.03.patch, HIVE-21365.03.patch, 
> HIVE-21365.04.patch, HIVE-21365.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Using subprograms to decrease number of planner instantiations and benefit 
> fully from metadata providers caching, among other benefits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
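
The description above motivates the refactor with fewer planner instantiations and better reuse of metadata-provider caches. A toy, self-contained model of that motivation only; Planner, perPass, and singleProgram are illustrative names, not Calcite or Hive APIs:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model: counts how many planner instances each style creates.
public class SubprogramDemo {

    public static final AtomicInteger plannersCreated = new AtomicInteger(0);

    // Stands in for a Hep planner; instantiating one is the cost to avoid,
    // and per-instance caches (e.g. metadata) die with the instance.
    static class Planner {
        Planner() { plannersCreated.incrementAndGet(); }
        void run(List<String> rules) { /* would apply the rules to the plan */ }
    }

    // Old style: a fresh planner per rule set, so caches are rebuilt each time.
    public static void perPass(List<List<String>> ruleSets) {
        for (List<String> rules : ruleSets) {
            new Planner().run(rules);
        }
    }

    // Refactored style: one planner executes all rule sets as subprograms,
    // so cached state survives across passes.
    public static void singleProgram(List<List<String>> ruleSets) {
        Planner planner = new Planner();
        for (List<String> rules : ruleSets) {
            planner.run(rules);
        }
    }

    public static void main(String[] args) {
        List<List<String>> passes = Arrays.asList(
                Arrays.asList("PartitionPrune"), Arrays.asList("SortLimitRemove"));
        perPass(passes);
        System.out.println("per-pass planners: " + plannersCreated.get());       // 2
        plannersCreated.set(0);
        singleProgram(passes);
        System.out.println("single-program planners: " + plannersCreated.get()); // 1
    }
}
```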


[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243742&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243742
 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 16/May/19 23:50
Start Date: 16/May/19 23:50
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284927275
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveAggregateJoinTransposeRule.java
 ##
 @@ -63,20 +66,16 @@
  */
public class HiveAggregateJoinTransposeRule extends AggregateJoinTransposeRule {
 
-  /** Extended instance of the rule that can push down aggregate functions. */
-  public static final HiveAggregateJoinTransposeRule INSTANCE =
-  new HiveAggregateJoinTransposeRule(HiveAggregate.class, HiveJoin.class,
-  HiveRelFactories.HIVE_BUILDER, true);
+  private static final Logger LOG = LoggerFactory.getLogger(HiveAggregateJoinTransposeRule.class);
 
   private final boolean allowFunctions;
+  private final AtomicInteger noColsMissingStats;
 
   /** Creates an AggregateJoinTransposeRule that may push down functions. */
-  private HiveAggregateJoinTransposeRule(Class<? extends Aggregate> aggregateClass,
-  Class<? extends Join> joinClass,
-  RelBuilderFactory relBuilderFactory,
-  boolean allowFunctions) {
-super(aggregateClass, joinClass, relBuilderFactory, true);
-this.allowFunctions = allowFunctions;
+  public HiveAggregateJoinTransposeRule(AtomicInteger noColsMissingStats) {
 
 Review comment:
   With this change, instead of having a static instance, we now create a new 
instance each time this rule needs to be executed. What is the reason for this 
change?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243742)
Time Spent: 0.5h  (was: 20m)

> Refactor Hep planner steps in CBO
> -
>
> Key: HIVE-21365
> URL: https://issues.apache.org/jira/browse/HIVE-21365
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21365.01.patch, HIVE-21365.01.patch, 
> HIVE-21365.02.patch, HIVE-21365.03.patch, HIVE-21365.03.patch, 
> HIVE-21365.04.patch, HIVE-21365.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Using subprograms to decrease number of planner instantiations and benefit 
> fully from metadata providers caching, among other benefits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21365) Refactor Hep planner steps in CBO

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21365?focusedWorklogId=243743&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243743
 ]

ASF GitHub Bot logged work on HIVE-21365:
-

Author: ASF GitHub Bot
Created on: 16/May/19 23:50
Start Date: 16/May/19 23:50
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #630: HIVE-21365
URL: https://github.com/apache/hive/pull/630#discussion_r284927728
 
 

 ##
 File path: ql/src/test/results/clientpositive/perf/tez/cbo_query14.q.out
 ##
 @@ -251,7 +251,7 @@ HiveSortLimit(sort0=[$0], sort1=[$1], sort2=[$2], sort3=[$3], dir0=[ASC], dir1=[
 HiveProject($f0=[$0], $f1=[$1], $f2=[$2])
   HiveFilter(condition=[=($3, 3)])
  HiveAggregate(group=[{0, 1, 2}], agg#0=[count($3)])
-  HiveProject(i_brand_id=[$0], i_class_id=[$1], i_category_id=[$2], $f3=[$3])
+  HiveProject(brand_id=[$0], class_id=[$1], category_id=[$2], $f3=[$3])
 
 Review comment:
   Why did the project expression name change here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243743)
Time Spent: 40m  (was: 0.5h)

> Refactor Hep planner steps in CBO
> -
>
> Key: HIVE-21365
> URL: https://issues.apache.org/jira/browse/HIVE-21365
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21365.01.patch, HIVE-21365.01.patch, 
> HIVE-21365.02.patch, HIVE-21365.03.patch, HIVE-21365.03.patch, 
> HIVE-21365.04.patch, HIVE-21365.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Using subprograms to decrease number of planner instantiations and benefit 
> fully from metadata providers caching, among other benefits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21746) ArrayIndexOutOfBoundsException during dynamically partitioned hash join, with CBO disabled

2019-05-16 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841815#comment-16841815
 ] 

Jason Dere commented on HIVE-21746:
---

Initial patch on hive 2.x. This adds foldedFromTab to ExprNodeConstantDesc and 
makes use of this information during 
ExprNodeDescUtils.resolveJoinKeysAsRSColumns().

So far I've been unable to create a locally failing qfile test. I'll update 
with a patch for master branch when I do.

> ArrayIndexOutOfBoundsException during dynamically partitioned hash join, with 
> CBO disabled
> --
>
> Key: HIVE-21746
> URL: https://issues.apache.org/jira/browse/HIVE-21746
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-21746.hive2.patch
>
>
> ArrayIndexOutOfBounds exception during query execution with dynamically 
> partitioned hash join.
> Found on Hive 2.x. Seems to occur with CBO disabled/failed.
> Disabling constant propagation seems to allow the query to succeed.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 203
> at 
> org.apache.hadoop.hive.serde2.io.TimestampWritable.getTotalLength(TimestampWritable.java:217)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:205)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:142)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getFieldsAsList(LazyBinaryStruct.java:281)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.unpack(MapJoinBytesTableContainer.java:744)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.next(MapJoinBytesTableContainer.java:730)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.next(MapJoinBytesTableContainer.java:605)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.UnwrapRowContainer.next(UnwrapRowContainer.java:70)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.UnwrapRowContainer.next(UnwrapRowContainer.java:34)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:819)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:924)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:456)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:359)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:290)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:319)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:189)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172) 
> ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:377)
>  ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>  ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>  ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_112]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
> at 
> 

[jira] [Updated] (HIVE-21746) ArrayIndexOutOfBoundsException during dynamically partitioned hash join, with CBO disabled

2019-05-16 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-21746:
--
Attachment: HIVE-21746.hive2.patch

> ArrayIndexOutOfBoundsException during dynamically partitioned hash join, with 
> CBO disabled
> --
>
> Key: HIVE-21746
> URL: https://issues.apache.org/jira/browse/HIVE-21746
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-21746.hive2.patch
>
>
> ArrayIndexOutOfBounds exception during query execution with dynamically 
> partitioned hash join.
> Found on Hive 2.x. Seems to occur with CBO disabled/failed.
> Disabling constant propagation seems to allow the query to succeed.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 203
> at 
> org.apache.hadoop.hive.serde2.io.TimestampWritable.getTotalLength(TimestampWritable.java:217)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:205)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:142)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getFieldsAsList(LazyBinaryStruct.java:281)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.unpack(MapJoinBytesTableContainer.java:744)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.next(MapJoinBytesTableContainer.java:730)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.next(MapJoinBytesTableContainer.java:605)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.UnwrapRowContainer.next(UnwrapRowContainer.java:70)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.UnwrapRowContainer.next(UnwrapRowContainer.java:34)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:819)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:924)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:456)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:359)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:290)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:319)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:189)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172) 
> ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:377)
>  ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>  ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>  ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_112]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
>  ~[hadoop-common-2.7.3.2.6.4.119-3.jar:?]
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>  

[jira] [Commented] (HIVE-21746) ArrayIndexOutOfBoundsException during dynamically partitioned hash join, with CBO disabled

2019-05-16 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841809#comment-16841809
 ] 

Jason Dere commented on HIVE-21746:
---

I believe the dynamically partitioned hash join has issues when the join keys 
are constant folded.
Looking at the ReduceSink output that feeds into the dynamically partitioned 
hash join:
{noformat}
  Reduce Output Operator
key expressions: _col20 (type: string), 'HR3' (type: 
string)
null sort order: aa
sort order: ++
Map-reduce partition columns: _col20 (type: string), 
'HR3' (type: string)
Statistics: Num rows: 380 Data size: 1288485344 
Basic stats: COMPLETE Column stats: PARTIAL
tag: 0
value expressions: _col2 (type: timestamp), _col3 
(type: timestamp), _col51 (type: timestamp), _col124 (type: timestamp)
{noformat}

So the value expressions in the ReduceSink consist of 4 timestamp columns, and 
it appears that the data written out and sent to the Join matches that.
However, the input schema to the MapJoin operator shows 5 columns rather than 4:
{noformat}
*** valCols[0] for JOIN JOIN_13: [Column[VALUE._col2], Column[VALUE._col3], 
Column[KEY.reducesinkkey1], Column[VALUE._col49], Column[VALUE._col122]]
{noformat}
With types (timestamp, timestamp, string, timestamp, timestamp)

Note that the third column in this list is KEY.reducesinkkey1. Key columns 
should have been filtered out of the value columns in 
MapJoinProcessor.getMapJoinDesc(), in the section that populates 
valueTableDescs.
But the keyExprMap generated by ExprNodeDescUtils.resolveJoinKeysAsRSColumns(), 
which is only built for the dynamically partitioned hash join, does not properly 
match the KEY.reducesinkkey1 column from the ReduceSinkOperator when filtering 
the key columns out of the value columns.

The column reference generated from the constant folded column, in keyExprMap:
{noformat}
   1 = {ExprNodeColumnDesc@9714} "Column[KEY.reducesinkkey1]"
column = "KEY.reducesinkkey1"
tabAlias = ""
isPartitionColOrVirtualCol = false
isSkewedCol = false
typeInfo = {PrimitiveTypeInfo@9719} "string"
{noformat}

What should have been the corresponding key in the ReduceSinkOperator:
{noformat}
expr = {ExprNodeColumnDesc@8704} "Column[KEY.reducesinkkey1]"
 column = "KEY.reducesinkkey1"
 tabAlias = "t2"
 isPartitionColOrVirtualCol = true
 isSkewedCol = false
 typeInfo = {PrimitiveTypeInfo@9719} "string"
{noformat}

The difference is the ReduceSinkOperator key has tabAlias = "t2". The one 
generated by ExprNodeDescUtils.resolveJoinKeysAsRSColumns() currently has a 
tabAlias hardcoded to "".

One solution is for ExprNodeConstantDesc to keep a foldedFromTab for the table 
alias, in addition to the foldedFromCol it already has. That way 
ExprNodeDescUtils.resolveJoinKeysAsRSColumns() can generate a column reference 
whose tabAlias matches its parent ReduceSinkOperator's.
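
The failure mode described above can be modeled in a few lines. This is a simplified, self-contained illustration, not Hive code: ColumnRef stands in for ExprNodeColumnDesc, and filterKeys stands in for the key-vs-value filtering step, with equality comparing both column name and table alias.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Simplified model of the mismatch: equality on (column, tabAlias).
public class KeyFilterDemo {

    public static final class ColumnRef {
        final String column;
        final String tabAlias;
        public ColumnRef(String column, String tabAlias) {
            this.column = column;
            this.tabAlias = tabAlias;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof ColumnRef)) return false;
            ColumnRef c = (ColumnRef) o;
            return column.equals(c.column) && tabAlias.equals(c.tabAlias);
        }
        @Override public int hashCode() { return Objects.hash(column, tabAlias); }
    }

    // Keep only the ReduceSink columns that are not join keys (the filtering
    // step that should drop KEY.reducesinkkey1 from the value schema).
    public static List<ColumnRef> filterKeys(List<ColumnRef> rsColumns, List<ColumnRef> joinKeys) {
        List<ColumnRef> values = new ArrayList<>();
        for (ColumnRef c : rsColumns) {
            if (!joinKeys.contains(c)) {
                values.add(c);
            }
        }
        return values;
    }

    public static void main(String[] args) {
        // From the ReduceSinkOperator: tabAlias = "t2".
        ColumnRef rsKey = new ColumnRef("KEY.reducesinkkey1", "t2");
        // Rebuilt from the constant-folded key: tabAlias hardcoded to "".
        ColumnRef foldedKey = new ColumnRef("KEY.reducesinkkey1", "");
        List<ColumnRef> values = filterKeys(List.of(rsKey), List.of(foldedKey));
        // Same column name, different alias => nothing is filtered out, so the
        // key column leaks into the MapJoin value schema.
        System.out.println(values.size()); // prints 1; a correct match would give 0
    }
}
```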

> ArrayIndexOutOfBoundsException during dynamically partitioned hash join, with 
> CBO disabled
> --
>
> Key: HIVE-21746
> URL: https://issues.apache.org/jira/browse/HIVE-21746
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> ArrayIndexOutOfBounds exception during query execution with dynamically 
> partitioned hash join.
> Found on Hive 2.x. Seems to occur with CBO disabled/failed.
> Disabling constant propagation seems to allow the query to succeed.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 203
> at 
> org.apache.hadoop.hive.serde2.io.TimestampWritable.getTotalLength(TimestampWritable.java:217)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:205)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:142)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getFieldsAsList(LazyBinaryStruct.java:281)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.unpack(MapJoinBytesTableContainer.java:744)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.next(MapJoinBytesTableContainer.java:730)
>  

[jira] [Commented] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841806#comment-16841806
 ] 

Hive QA commented on HIVE-21739:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968950/HIVE-21739.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16056 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
 (batchId=248)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17241/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17241/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17241/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968950 - PreCommit-HIVE-Build

> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21739.1.patch, HIVE-21739.patch
>
>
> Since the addition of the foreign key constraint between the database ('DBS') 
> table and the catalogs ('CTLGS') table in HIVE-18755, we are unable to run a 
> simple create database command with an older version of the Metastore Server. 
> This is because older versions have a JDO schema matching the older 'DBS' 
> schema, which lacks the additional 'CTLG_NAME' column.
> The error is as follows: 
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Exception thrown flushing changes to datastore)
> 
> java.sql.BatchUpdateException: Cannot add or update a child row: a foreign 
> key constraint fails ("metastore_1238"."DBS", CONSTRAINT "CTLG_FK1" FOREIGN 
> KEY ("CTLG_NAME") REFERENCES "CTLGS" ("NAME"))
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841784#comment-16841784
 ] 

Hive QA commented on HIVE-21739:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17241/dev-support/hive-personality.sh
 |
| git revision | master / 9a10bc2 |
| Default Java | 1.8.0_111 |
| modules | C: standalone-metastore/metastore-server . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17241/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21739.1.patch, HIVE-21739.patch
>
>
> Since the addition of the foreign key constraint between the database ('DBS') 
> table and the catalogs ('CTLGS') table in HIVE-18755, we are unable to run a 
> simple create database command with an older version of the Metastore Server. 
> This is because older versions have a JDO schema matching the older 'DBS' 
> schema, which lacks the additional 'CTLG_NAME' column.
> The error is as follows: 
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Exception thrown flushing changes to datastore)
> 
> java.sql.BatchUpdateException: Cannot add or update a child row: a foreign 
> key constraint fails ("metastore_1238"."DBS", CONSTRAINT "CTLG_FK1" FOREIGN 
> KEY ("CTLG_NAME") REFERENCES "CTLGS" ("NAME"))
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21746) ArrayIndexOutOfBoundsException during dynamically partitioned hash join, with CBO disabled

2019-05-16 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-21746:
-


> ArrayIndexOutOfBoundsException during dynamically partitioned hash join, with 
> CBO disabled
> --
>
> Key: HIVE-21746
> URL: https://issues.apache.org/jira/browse/HIVE-21746
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> ArrayIndexOutOfBounds exception during query execution with dynamically 
> partitioned hash join.
> Found on Hive 2.x. Seems to occur with CBO disabled/failed.
> Disabling constant propagation seems to allow the query to succeed.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 203
> at 
> org.apache.hadoop.hive.serde2.io.TimestampWritable.getTotalLength(TimestampWritable.java:217)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:205)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:142)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getFieldsAsList(LazyBinaryStruct.java:281)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.unpack(MapJoinBytesTableContainer.java:744)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.next(MapJoinBytesTableContainer.java:730)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer$ReusableRowContainer.next(MapJoinBytesTableContainer.java:605)
>  ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at 
> org.apache.hadoop.hive.ql.exec.persistence.UnwrapRowContainer.next(UnwrapRowContainer.java:70) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.persistence.UnwrapRowContainer.next(UnwrapRowContainer.java:34) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:819) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:924) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:456) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:359) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:290) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:319) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:189) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172) ~[hive-exec-2.1.0.2.6.4.119-3.jar:2.1.0.2.6.4.119-3]
> at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:377) ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) ~[hadoop-common-2.7.3.2.6.4.119-3.jar:?]
> at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) ~[tez-runtime-internals-0.8.4.2.6.4.119-3.jar:0.8.4.2.6.4.119-3]
> at 

[jira] [Commented] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841772#comment-16841772
 ] 

Alan Gates commented on HIVE-21739:
---

This patch only makes the changes for MySQL.  You'll need to make the same 
changes for the Derby, Oracle, Postgres, and SqlServer scripts.

Did you run the DBInstall tests?  They don't run as part of the standard test 
run, but they cover DB install and upgrade, so you'll want to make sure to run 
them.

> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21739.1.patch, HIVE-21739.patch
>
>
> Since the addition of the foreign key constraint between the database ('DBS') 
> table and the catalogs ('CTLGS') table in HIVE-18755, we are unable to run a 
> simple create database command with an older version of the Metastore Server. 
> This is because older versions have a JDO schema matching the older 'DBS' 
> schema, which did not have the additional 'CTLG_NAME' column.
> The error is as follows: 
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Exception thrown flushing changes to datastore)
> 
> java.sql.BatchUpdateException: Cannot add or update a child row: a foreign 
> key constraint fails ("metastore_1238"."DBS", CONSTRAINT "CTLG_FK1" FOREIGN 
> KEY ("CTLG_NAME") REFERENCES "CTLGS" ("NAME"))
> {code}
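The constraint failure quoted above is easy to reproduce in miniature with any engine that enforces foreign keys. A minimal sketch using Python's sqlite3 (table and column names borrowed from the metastore schema; everything else is simplified and hypothetical, not the real metastore DDL):

```python
import sqlite3

# Analogue of the metastore constraint: DBS.CTLG_NAME references CTLGS.NAME.
# An insert that supplies no valid catalog name (as a pre-catalog JDO schema
# effectively does) violates the constraint.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when asked
conn.execute("CREATE TABLE CTLGS (NAME TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE DBS (
    DB_ID INTEGER PRIMARY KEY,
    NAME TEXT,
    CTLG_NAME TEXT REFERENCES CTLGS(NAME))""")
conn.execute("INSERT INTO CTLGS VALUES ('hive')")

conn.execute("INSERT INTO DBS VALUES (1, 'ok_db', 'hive')")  # parent exists: fine
try:
    conn.execute("INSERT INTO DBS VALUES (2, 'bad_db', 'spark')")  # no such catalog
    failed = False
except sqlite3.IntegrityError:
    failed = True
print(failed)  # True
```

An older JDO schema that is unaware of 'CTLG_NAME' cannot populate the column with a valid catalog, which is why the create-database path trips this check.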



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841761#comment-16841761
 ] 

Hive QA commented on HIVE-21731:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968952/HIVE-21731.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16057 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17240/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17240/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17240/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968952 - PreCommit-HIVE-Build

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch, 
> HIVE-21731.03.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The scenario is 
>  # Replication policy is set with a Hive 3.0 source cluster (strict managed 
> table set to false) and a Hive 4.0 target cluster with strict managed table 
> set to true.
>  # User upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables 
> and hive.repl.bootstrap.acid.tables to true, triggering bootstrap of the 
> newly converted ACID tables.
>  # As the old tables are non-txn tables, the dump does not filter the events 
> even though bootstrap of ACID tables is set to true. This causes the repl load 
> to fail, as the write id is not set in the table object.
>  # If we ignore the event replay, the bootstrap is failing with dump 
> directory mismatch error.
> The fix should be 
>  # Ignore dumping the alter table event if bootstrap acid table is set true 
> and the alter is converting a non-acid table to acid table.
>  # In case of bootstrap during incremental load, ignore the dump directory 
> property set in table object.
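The first part of the proposed fix (skip the conversion event when the table will be bootstrapped anyway) can be sketched as a dump-side filter. Event shape and function names here are hypothetical illustrations, not the actual Hive repl dump API:

```python
def should_dump_event(event, bootstrap_acid_tables):
    """Skip ALTER_TABLE events that flip a non-ACID table to ACID when ACID
    tables are being bootstrapped: bootstrap captures the table anyway, and
    replaying the event would fail for lack of a write id."""
    converts_to_acid = (event.get("type") == "ALTER_TABLE"
                        and not event.get("before_acid", False)
                        and event.get("after_acid", False))
    if bootstrap_acid_tables and converts_to_acid:
        return False
    return True

evt = {"type": "ALTER_TABLE", "before_acid": False, "after_acid": True}
print(should_dump_event(evt, bootstrap_acid_tables=True))   # False: skipped
print(should_dump_event(evt, bootstrap_acid_tables=False))  # True: dumped
```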





[jira] [Commented] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841730#comment-16841730
 ] 

Hive QA commented on HIVE-21731:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
16s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
20s{color} | {color:red} itests/hive-unit: The patch generated 7 new + 103 
unchanged - 0 fixed = 110 total (was 103) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17240/dev-support/hive-personality.sh
 |
| git revision | master / 9a10bc2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17240/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17240/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch, 
> HIVE-21731.03.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The scenario is 
>  # Replication policy is set with a Hive 3.0 source cluster (strict managed 
> table set to false) and a Hive 4.0 target cluster with strict managed table 
> set to true.
>  # User upgrades the 3.0 source cluster to 4.0 cluster using upgrade tool.
>  # The upgrade converts all managed tables to acid tables.
>  # In the next repl 

[jira] [Commented] (HIVE-21740) Collect LLAP execution latency metrics

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841695#comment-16841695
 ] 

Hive QA commented on HIVE-21740:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968936/HIVE-21740.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16057 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17239/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17239/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17239/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968936 - PreCommit-HIVE-Build

> Collect LLAP execution latency metrics
> --
>
> Key: HIVE-21740
> URL: https://issues.apache.org/jira/browse/HIVE-21740
> Project: Hive
>  Issue Type: New Feature
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21740.patch
>
>
> Collect metrics for LLAP task execution times





[jira] [Updated] (HIVE-21745) Change in join order causes query parse to fail

2019-05-16 Thread Andre Araujo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andre Araujo updated HIVE-21745:

Description: 
I ran into the following case, where a query fails to parse if the join order 
is changed:

{code}
create database if not exists test;

drop table if exists test.table1;
create table test.table1 (
  id string,
  col_a string
)
stored as textfile;

drop table if exists test.table2;
create table test.table2 (
  id string
)
stored as textfile;

drop table if exists test.table3;
create table test.table3 (
  col_a string,
  col_b string
)
stored as textfile;

drop table if exists test.table4;
create table test.table4 (
  id string
)
stored as textfile;

-- This fails with: Invalid table alias or column reference 't3': (possible 
column names are: id, col_a)
select
  1
from
  test.table1 as t1
  left join test.table2 as t2 on t2.id = t1.id
  left join test.table3 as t3 on t1.col_a = t3.col_a
  left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
;

-- This works
select
  1
from
  test.table1 as t1
  left join test.table3 as t3 on t1.col_a = t3.col_a
  left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
  left join test.table2 as t2 on t2.id = t1.id
;
{code}

  was:
I ran into the following case, where a query fails to parse if the join order 
is changed:

{code}
create database if not exists test;

drop table if exists test.table1;
create table test.table1 (
  id string,
  col_a string
)
stored as textfile;

drop table if exists test.table2;
create table test.table2 (
  id string
)
stored as textfile;

drop table if exists test.table3;
create table test.table3 (
  col_a string,
  col_b string
)
stored as textfile;

drop table if exists test.table4;
create table test.table4 (
  id string
)
stored as textfile;

-- This fails with: Invalid table alias or column reference 't3': (possible 
column names are: id, col_a)
drop view if exists test.v;
create view test.v as
select
  1
from
  test.table1 as t1
  left join test.table2 as t2 on t2.id = t1.id
  left join test.table3 as t3 on t1.col_a = t3.col_a
  left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
;

-- This works
drop view if exists test.v;
create view test.v as
select
  1
from
  test.table1 as t1
  left join test.table3 as t3 on t1.col_a = t3.col_a
  left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
  left join test.table2 as t2 on t2.id = t1.id
;
{code}


> Change in join order causes query parse to fail
> ---
>
> Key: HIVE-21745
> URL: https://issues.apache.org/jira/browse/HIVE-21745
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Andre Araujo
>Priority: Major
>
> I ran into the following case, where a query fails to parse if the join order 
> is changed:
> {code}
> create database if not exists test;
> drop table if exists test.table1;
> create table test.table1 (
>   id string,
>   col_a string
> )
> stored as textfile;
> drop table if exists test.table2;
> create table test.table2 (
>   id string
> )
> stored as textfile;
> drop table if exists test.table3;
> create table test.table3 (
>   col_a string,
>   col_b string
> )
> stored as textfile;
> drop table if exists test.table4;
> create table test.table4 (
>   id string
> )
> stored as textfile;
> -- This fails with: Invalid table alias or column reference 't3': (possible 
> column names are: id, col_a)
> select
>   1
> from
>   test.table1 as t1
>   left join test.table2 as t2 on t2.id = t1.id
>   left join test.table3 as t3 on t1.col_a = t3.col_a
>   left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
> ;
> -- This works
> select
>   1
> from
>   test.table1 as t1
>   left join test.table3 as t3 on t1.col_a = t3.col_a
>   left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
>   left join test.table2 as t2 on t2.id = t1.id
> ;
> {code}
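As a cross-check of the report above (not run against Hive itself), both join orders are accepted by a reference SQL engine, which suggests the failure lies in Hive 1.1.0's query translation rather than in the SQL. A sqlite3 sketch:

```python
import sqlite3

# Reproduce both join orders from the report against sqlite: each ON clause
# may reference any table joined earlier, so both statements are valid SQL.
conn = sqlite3.connect(":memory:")
for t, cols in [("table1", "id TEXT, col_a TEXT"), ("table2", "id TEXT"),
                ("table3", "col_a TEXT, col_b TEXT"), ("table4", "id TEXT")]:
    conn.execute(f"CREATE TABLE {t} ({cols})")

failing_order = """
    SELECT 1 FROM table1 t1
    LEFT JOIN table2 t2 ON t2.id = t1.id
    LEFT JOIN table3 t3 ON t1.col_a = t3.col_a
    LEFT JOIN table4 t4 ON t1.id = t4.id AND t3.col_b = 'X'
"""
working_order = """
    SELECT 1 FROM table1 t1
    LEFT JOIN table3 t3 ON t1.col_a = t3.col_a
    LEFT JOIN table4 t4 ON t1.id = t4.id AND t3.col_b = 'X'
    LEFT JOIN table2 t2 ON t2.id = t1.id
"""
# Tables are empty, so both queries should run and return no rows.
ok = [conn.execute(q).fetchall() == [] for q in (failing_order, working_order)]
print(ok)  # [True, True] -- both orders execute here
```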





[jira] [Commented] (HIVE-21740) Collect LLAP execution latency metrics

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841676#comment-16841676
 ] 

Hive QA commented on HIVE-21740:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} llap-common in master has 76 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} llap-common: The patch generated 4 new + 5 unchanged - 
5 fixed = 9 total (was 10) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} llap-tez: The patch generated 8 new + 74 unchanged - 0 
fixed = 82 total (was 74) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17239/dev-support/hive-personality.sh
 |
| git revision | master / 9a10bc2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17239/yetus/diff-checkstyle-llap-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17239/yetus/diff-checkstyle-llap-tez.txt
 |
| modules | C: common llap-common llap-tez U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17239/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Collect LLAP execution latency metrics
> --
>
> Key: HIVE-21740
> URL: https://issues.apache.org/jira/browse/HIVE-21740
> Project: Hive
>  Issue Type: New Feature
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21740.patch
>
>
> Collect metrics for LLAP task execution times





[jira] [Updated] (HIVE-21745) Change in join order causes query parse to fail

2019-05-16 Thread Andre Araujo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andre Araujo updated HIVE-21745:

Description: 
I ran into the following case, where a query fails to parse if the join order 
is changed:

{code}
create database if not exists test;

drop table if exists test.table1;
create table test.table1 (
  id string,
  col_a string
)
stored as textfile;

drop table if exists test.table2;
create table test.table2 (
  id string
)
stored as textfile;

drop table if exists test.table3;
create table test.table3 (
  col_a string,
  col_b string
)
stored as textfile;

drop table if exists test.table4;
create table test.table4 (
  id string
)
stored as textfile;

-- This fails with: Invalid table alias or column reference 't3': (possible 
column names are: id, col_a)
drop view if exists test.v;
create view test.v as
select
  1
from
  test.table1 as t1
  left join test.table2 as t2 on t2.id = t1.id
  left join test.table3 as t3 on t1.col_a = t3.col_a
  left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
;

-- This works
drop view if exists test.v;
create view test.v as
select
  1
from
  test.table1 as t1
  left join test.table3 as t3 on t1.col_a = t3.col_a
  left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
  left join test.table2 as t2 on t2.id = t1.id
;
{code}

  was:
I ran into the following case, where a query fails to parse if the join order 
is changed:

{code}
create database if not exists test;

drop table if exists test.table1;
create table test.table1 (
  id string,
  col_a string
)
stored as textfile;

drop table if exists test.table2;
create table test.table2 (
  id string
)
stored as textfile;

drop table if exists test.table3;
create table test.table3 (
  col_a string,
  col_b string
)
stored as textfile;

drop table if exists test.table4;
create table test.table4 (
  id string
)
stored as textfile;

-- This fails with: Invalid table alias or column reference 't3': (possible 
column names are: id, col_a)
drop view if exists test.v;
create view test.v as
select
  1
from
  test.table1 as t1
  left join test.table2 as t2 on t2.id = t1.id
  left join test.table3 as t3 on t1.col_a = t3.col_a
  left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
;

-- This works
drop view if exists test.v;
create view test.v as
select
  1
from
  test.table1 as t1
  left join test.table3 as t3 on t1.col_a = t3.col_a
  left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
  left join test.table2 as t2 on t2.id = t1.id
;
{code}


> Change in join order causes query parse to fail
> ---
>
> Key: HIVE-21745
> URL: https://issues.apache.org/jira/browse/HIVE-21745
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Andre Araujo
>Priority: Major
>
> I ran into the following case, where a query fails to parse if the join order 
> is changed:
> {code}
> create database if not exists test;
> drop table if exists test.table1;
> create table test.table1 (
>   id string,
>   col_a string
> )
> stored as textfile;
> drop table if exists test.table2;
> create table test.table2 (
>   id string
> )
> stored as textfile;
> drop table if exists test.table3;
> create table test.table3 (
>   col_a string,
>   col_b string
> )
> stored as textfile;
> drop table if exists test.table4;
> create table test.table4 (
>   id string
> )
> stored as textfile;
> -- This fails with: Invalid table alias or column reference 't3': (possible 
> column names are: id, col_a)
> drop view if exists test.v;
> create view test.v as
> select
>   1
> from
>   test.table1 as t1
>   left join test.table2 as t2 on t2.id = t1.id
>   left join test.table3 as t3 on t1.col_a = t3.col_a
>   left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
> ;
> -- This works
> drop view if exists test.v;
> create view test.v as
> select
>   1
> from
>   test.table1 as t1
>   left join test.table3 as t3 on t1.col_a = t3.col_a
>   left join test.table4 as t4 on t1.id = t4.id and t3.col_b = 'X'
>   left join test.table2 as t2 on t2.id = t1.id
> ;
> {code}





[jira] [Commented] (HIVE-21743) day( ) gives wrong day from the date in Apache Hive 3.1 server

2019-05-16 Thread Adarshdeep Cheema (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841650#comment-16841650
 ] 

Adarshdeep Cheema commented on HIVE-21743:
--

[https://www.tutorialspoint.com/hive/hive_built_in_functions.htm]
This link says


|int|day(string date)|It returns the day part of a date or a timestamp string: 
day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1|

> day( ) gives wrong day from the date in Apache Hive 3.1 server
> --
>
> Key: HIVE-21743
> URL: https://issues.apache.org/jira/browse/HIVE-21743
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.1
> Environment: Server: Apache Hive 3.1 
> Driver hive-jdbc-3.1.0.3.1.0.0-78
>Reporter: Adarshdeep Cheema
>Priority: Critical
>
> Using Apache Hive 3.1 server 
> Run the following SQL and you will get 3 instead of 1
> SELECT
>  (day( DATE '0001-01-01'))
> FROM
>  `table`
> PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 





[jira] [Commented] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841648#comment-16841648
 ] 

Hive QA commented on HIVE-21732:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968937/HIVE-21732.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16057 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17238/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17238/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17238/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968937 - PreCommit-HIVE-Build

> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, 
> HIVE-21732.4.patch, HIVE-21732.5.patch, HIVE-21732.patch
>
>
> For evaluating testing, it would be good to have a configurable way to inject 
> latency for LLAP tasks.
> The configuration should be able to control how much latency is injected into 
> each daemon.
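The requested behavior can be sketched as a wrapper with a per-daemon delay range. All names here are hypothetical illustrations, not actual Hive/LLAP configuration keys or classes:

```python
import random
import time

def with_injected_latency(task_fn, min_ms, max_ms, rng=random.Random(42)):
    """Return a version of task_fn that sleeps a uniformly sampled delay
    (in milliseconds) before running; rng is seeded for reproducible tests."""
    def wrapped(*args, **kwargs):
        time.sleep(rng.uniform(min_ms, max_ms) / 1000.0)
        return task_fn(*args, **kwargs)
    return wrapped

# Per-daemon configuration: each daemon gets its own delay range.
daemon_latency = {"daemon-1": (0, 0), "daemon-2": (5, 10)}
run = with_injected_latency(lambda x: x * 2, *daemon_latency["daemon-2"])
print(run(21))  # 42, after a 5-10 ms injected delay
```

In a real deployment the delay range would come from configuration pushed to each daemon, so latency can be tuned (or disabled) per node.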





[jira] [Commented] (HIVE-21743) day( ) gives wrong day from the date in Apache Hive 3.1 server

2019-05-16 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841628#comment-16841628
 ] 

Rajkumar Singh commented on HIVE-21743:
---

With HIVE-12192, Hive does all date/time computation in UTC, which might be 
causing the issue here.
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFDayOfMonth.java#L119
https://github.com/apache/hive/blob/master/common/src/java/org/apache/hadoop/hive/common/type/Date.java#L120
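The UTC hypothesis is easy to illustrate outside Hive. The snippet below only shows the class of shift involved, using an arbitrary +05:30 session zone; it is not Hive's actual code path:

```python
from datetime import datetime, timezone, timedelta

# A wall-clock midnight in a zone east of UTC falls on the previous day once
# converted to UTC; a day() computed on the UTC instant then reports the
# wrong day. The +05:30 offset is an arbitrary example, not a Hive setting.
local = timezone(timedelta(hours=5, minutes=30))
midnight_local = datetime(1970, 11, 1, tzinfo=local)   # day() should be 1
day_in_utc = midnight_local.astimezone(timezone.utc).day
print(day_in_utc)  # 31
```

For DATE '0001-01-01' specifically, the 2-day offset may instead stem from the difference between the proleptic Gregorian calendar and the hybrid Julian/Gregorian calendar around year 1, though that is speculation here.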

> day( ) gives wrong day from the date in Apache Hive 3.1 server
> --
>
> Key: HIVE-21743
> URL: https://issues.apache.org/jira/browse/HIVE-21743
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.1
> Environment: Server: Apache Hive 3.1 
> Driver hive-jdbc-3.1.0.3.1.0.0-78
>Reporter: Adarshdeep Cheema
>Priority: Critical
>
> Using Apache Hive 3.1 server 
> Run the following SQL and you will get 3 instead of 1
> SELECT
>  (day( DATE '0001-01-01'))
> FROM
>  `table`
> PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 





[jira] [Updated] (HIVE-21663) Hive Metastore Translation Layer

2019-05-16 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-21663:
-
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Fix has been pushed to master. Closing this feature request. I will follow up 
on some remaining items via the sub-task JIRAs.

> Hive Metastore Translation Layer
> 
>
> Key: HIVE-21663
> URL: https://issues.apache.org/jira/browse/HIVE-21663
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21663.3.patch, HIVE-21663.4.patch, 
> HIVE-21663.5.patch, HIVE-21663.6.patch, HIVE-21663.7.patch, 
> HIVE-21663.8.patch, HMS Translation Layer_v1.0.pdf
>
>
> This task is for the implementation of the default provider for translation, 
> which is extensible if needed for a custom translator. Please refer to the 
> spec for additional details on the translation.





[jira] [Updated] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21731:
---
Status: Patch Available  (was: Open)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch, 
> HIVE-21731.03.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The scenario is 
>  # Replication policy is set with a Hive 3.0 source cluster (strict managed 
> table set to false) and a Hive 4.0 target cluster with strict managed table 
> set to true.
>  # User upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables 
> and hive.repl.bootstrap.acid.tables to true, triggering bootstrap of the 
> newly converted ACID tables.
>  # As the old tables are non-txn tables, the dump does not filter the events 
> even though bootstrap of ACID tables is set to true. This causes the repl load 
> to fail, as the write id is not set in the table object.
>  # If we ignore the event replay, the bootstrap is failing with dump 
> directory mismatch error.
> The fix should be 
>  # Ignore dumping the alter table event if bootstrap acid table is set true 
> and the alter is converting a non-acid table to acid table.
>  # In case of bootstrap during incremental load, ignore the dump directory 
> property set in table object.
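The event-filtering rule in fix step 1 can be sketched as a standalone simplification. This is not the actual Hive AlterTableHandler code; the property map and isTransactionalTable helper here are illustrative stand-ins for the Table objects and AcidUtils checks used by the real handler.

```java
import java.util.HashMap;
import java.util.Map;

public class AlterEventFilter {
    // Stand-in for AcidUtils.isTransactionalTable: a table is transactional
    // when its "transactional" property is set to true.
    static boolean isTransactionalTable(Map<String, String> tblProps) {
        return "true".equalsIgnoreCase(tblProps.get("transactional"));
    }

    // Skip the alter event from the dump when bootstrap of ACID tables is
    // requested and this alter converts a non-transactional table into a
    // transactional one; such tables are bootstrapped instead of replayed.
    static boolean skipAlterEvent(boolean replBootstrapAcidTables,
                                  Map<String, String> before,
                                  Map<String, String> after) {
        return replBootstrapAcidTables
                && !isTransactionalTable(before)
                && isTransactionalTable(after);
    }

    public static void main(String[] args) {
        Map<String, String> nonAcid = new HashMap<>();
        Map<String, String> acid = new HashMap<>();
        acid.put("transactional", "true");

        // Conversion event with the bootstrap flag on -> skipped from the dump.
        System.out.println(skipAlterEvent(true, nonAcid, acid));   // true
        // Same conversion without the flag -> dumped and replayed as usual.
        System.out.println(skipAlterEvent(false, nonAcid, acid));  // false
    }
}
```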





[jira] [Updated] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21731:
---
Status: Open  (was: Patch Available)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch, 
> HIVE-21731.03.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>





[jira] [Updated] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21731:
---
Attachment: HIVE-21731.03.patch

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch, 
> HIVE-21731.03.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>





[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243532=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243532
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 18:20
Start Date: 16/May/19 18:20
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284828151
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/events/AlterTableHandler.java
 ##
 @@ -91,6 +92,14 @@ public void handle(Context withinContext) throws Exception {
       return;
     }
 
+    if (withinContext.hiveConf.getBoolVar(HiveConf.ConfVars.REPL_BOOTSTRAP_ACID_TABLES)) {
+      if (!AcidUtils.isTransactionalTable(before) && AcidUtils.isTransactionalTable(after)) {
+        LOG.info("The table " + after.getTableName() + " is converted to ACID table." +
+            " It will be replicated with bootstrap load as REPL_BOOTSTRAP_ACID_TABLES is set to true.");
 
 Review comment:
   done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243532)
Time Spent: 1h 40m  (was: 1.5h)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>





[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243533=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243533
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 18:20
Start Date: 16/May/19 18:20
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284828193
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/table/LoadTable.java
 ##
 @@ -159,10 +159,20 @@ public TaskTracker tasks() throws Exception {
     return tracker;
   }
 
-  private ReplLoadOpType getLoadTableType(Table table) throws InvalidOperationException, HiveException {
+  private ReplLoadOpType getLoadTableType(Table table, boolean isBootstrapDuringInc)
+      throws InvalidOperationException, HiveException {
     if (table == null) {
       return ReplLoadOpType.LOAD_NEW;
     }
+
+    // In case user has asked for bootstrap of transactional table, we replace the old one if present. This is to
+    // make sure that the transactional info like write id etc for the table is consistent between the
+    // source and target cluster.
+    if (isBootstrapDuringInc && AcidUtils.isTransactionalTable(table)) {
 
 Review comment:
   done
 



Issue Time Tracking
---

Worklog Id: (was: 243533)
Time Spent: 1h 40m  (was: 1.5h)
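The getLoadTableType change reviewed above can be condensed into a standalone sketch. This is a hypothetical simplification: the boolean parameters stand in for the Table object, metastore lookup, and AcidUtils check that the real method uses.

```java
public class LoadTypeSketch {
    enum ReplLoadOpType { LOAD_NEW, LOAD_REPLACE, LOAD_SKIP }

    // Decide how to load a table during bootstrap. When the bootstrap runs
    // as part of an incremental load and the table is transactional, the
    // existing copy on the target is replaced so that transactional state
    // (write ids etc.) stays consistent with the source cluster.
    static ReplLoadOpType getLoadTableType(boolean tableExistsOnTarget,
                                           boolean isTransactional,
                                           boolean isBootstrapDuringInc) {
        if (!tableExistsOnTarget) {
            return ReplLoadOpType.LOAD_NEW;
        }
        if (isBootstrapDuringInc && isTransactional) {
            return ReplLoadOpType.LOAD_REPLACE;
        }
        // Existing table, no forced replace: nothing to do in this sketch.
        return ReplLoadOpType.LOAD_SKIP;
    }

    public static void main(String[] args) {
        System.out.println(getLoadTableType(false, true, true));  // LOAD_NEW
        System.out.println(getLoadTableType(true, true, true));   // LOAD_REPLACE
        System.out.println(getLoadTableType(true, false, true));  // LOAD_SKIP
    }
}
```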

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>





[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243534=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243534
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 18:20
Start Date: 16/May/19 18:20
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284829909
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationWithTableMigration.java
 ##
 @@ -504,4 +507,56 @@ public void dynamicallyConvertExternalToManagedTable() throws Throwable {
         .runFailure("alter table t1 set tblproperties('EXTERNAL'='false')")
         .runFailure("alter table t2 set tblproperties('EXTERNAL'='false')");
   }
+
+  @Test
+  public void testMigrationWithUpgrade() throws Throwable {
+    WarehouseInstance.Tuple tuple = primary.run("use " + primaryDbName)
+        .run("create table tacid (id int) clustered by(id) into 3 buckets stored as orc ")
+        .run("create table texternal (id int) ")
+        .run("insert into texternal values (1)")
+        .dump(primaryDbName, null);
+    replica.load(replicatedDbName, tuple.dumpLocation)
+        .run("use " + replicatedDbName)
+        .run("repl status " + replicatedDbName)
+        .verifyResult(tuple.lastReplicationId)
+        .run("select count(*) from tacid")
+        .verifyResult("0")
+        .run("select id from texternal")
+        .verifyResult("1");
+
+    assertTrue(isFullAcidTable(replica.getTable(replicatedDbName, "tacid")));
+    assertFalse(MetaStoreUtils.isExternalTable(replica.getTable(replicatedDbName, "texternal")));
+
+// forcefully (setting db property) alter the table type. For acid table, 
set the bootstrap acid table to true. For
 
 Review comment:
   done
 



Issue Time Tracking
---

Worklog Id: (was: 243534)
Time Spent: 1h 50m  (was: 1h 40m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>





[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243531=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243531
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 18:20
Start Date: 16/May/19 18:20
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284828988
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/WarehouseInstance.java
 ##
 @@ -563,7 +563,21 @@ public void testEventCounts(String dbName, long fromEventId, Long toEventId, Int
   }
 
   public boolean isAcidEnabled() {
-    return hiveConf.getBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY);
+    if (hiveConf.getBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY) &&
+        hiveConf.getVar(HiveConf.ConfVars.HIVE_TXN_MANAGER).equals("org.apache.hadoop.hive.ql.lockmgr.DbTxnManager")) {
+      return true;
+    }
+    return false;
+  }
+
+  public void disableAcid() {
+    hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
+    hiveConf.setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER, "org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager");
+  }
+
+  public void enableAcid() {
+    hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, true);
+    hiveConf.setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER, "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager");
 
 Review comment:
   its working fine for this test ..i could see the table getting converted to 
acid and bootstrap happening 
 



Issue Time Tracking
---

Worklog Id: (was: 243531)
Time Spent: 1.5h  (was: 1h 20m)
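The enableAcid/disableAcid helpers in the diff above flip two settings together, and isAcidEnabled only reports true when both are in the ACID configuration. A minimal stand-in using a plain map instead of HiveConf (the config keys are assumed to match the ConfVars in the diff):

```java
import java.util.HashMap;
import java.util.Map;

public class AcidConfToggle {
    static final String CONCURRENCY = "hive.support.concurrency";
    static final String TXN_MANAGER = "hive.txn.manager";
    static final String DB_TXN_MGR = "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager";
    static final String DUMMY_TXN_MGR = "org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager";

    final Map<String, String> conf = new HashMap<>();

    // ACID is only really on when concurrency support AND the DbTxnManager
    // are both configured, mirroring the stricter isAcidEnabled() check.
    boolean isAcidEnabled() {
        return "true".equals(conf.get(CONCURRENCY))
                && DB_TXN_MGR.equals(conf.get(TXN_MANAGER));
    }

    void enableAcid() {
        conf.put(CONCURRENCY, "true");
        conf.put(TXN_MANAGER, DB_TXN_MGR);
    }

    void disableAcid() {
        conf.put(CONCURRENCY, "false");
        conf.put(TXN_MANAGER, DUMMY_TXN_MGR);
    }

    public static void main(String[] args) {
        AcidConfToggle c = new AcidConfToggle();
        c.enableAcid();
        System.out.println(c.isAcidEnabled());  // true
        c.disableAcid();
        System.out.println(c.isAcidEnabled());  // false
    }
}
```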

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>





[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243535=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243535
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 18:20
Start Date: 16/May/19 18:20
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284830345
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationWithTableMigration.java
 ##
 @@ -504,4 +507,56 @@ public void dynamicallyConvertExternalToManagedTable() throws Throwable {
         .runFailure("alter table t1 set tblproperties('EXTERNAL'='false')")
         .runFailure("alter table t2 set tblproperties('EXTERNAL'='false')");
   }
+
+  @Test
+  public void testMigrationWithUpgrade() throws Throwable {
+    WarehouseInstance.Tuple tuple = primary.run("use " + primaryDbName)
+        .run("create table tacid (id int) clustered by(id) into 3 buckets stored as orc ")
 
 Review comment:
   done
 



Issue Time Tracking
---

Worklog Id: (was: 243535)
Time Spent: 2h  (was: 1h 50m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>





[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243530=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243530
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 18:20
Start Date: 16/May/19 18:20
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284828762
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -471,7 +473,7 @@ private int executeIncrementalLoad(DriverContext driverContext) {
     if (work.hasBootstrapLoadTasks()) {
       LOG.debug("Current incremental dump have tables to be bootstrapped. Switching to bootstrap "
           + "mode after applying all events.");
-      return executeBootStrapLoad(driverContext);
+      return executeBootStrapLoad(driverContext, true);
 
 Review comment:
   done
 



Issue Time Tracking
---

Worklog Id: (was: 243530)
Time Spent: 1h 20m  (was: 1h 10m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>





[jira] [Updated] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Aditya Shah (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Shah updated HIVE-21739:
---
Attachment: HIVE-21739.1.patch
Status: Patch Available  (was: Open)

Unrelated failures. Triggering tests again.

cc [~alangates] [~pvary], could you please review?

> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.1, 1.2.0
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21739.1.patch, HIVE-21739.patch
>
>
> Since the addition of the foreign key constraint between the database ('DBS') table 
> and the catalogs ('CTLGS') table in HIVE-18755, we are no longer able to run a simple 
> create database command with an older version of the Metastore Server. This is 
> because older versions use the older 'DBS' JDO schema, which does not 
> have the additional 'CTLG_NAME' column.
> The error is as follows: 
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Exception thrown flushing changes to datastore)
> 
> java.sql.BatchUpdateException: Cannot add or update a child row: a foreign 
> key constraint fails ("metastore_1238"."DBS", CONSTRAINT "CTLG_FK1" FOREIGN 
> KEY ("CTLG_NAME") REFERENCES "CTLGS" ("NAME"))
> {code}





[jira] [Updated] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Aditya Shah (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Shah updated HIVE-21739:
---
Status: Open  (was: Patch Available)

> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.1, 1.2.0
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21739.patch
>
>





[jira] [Commented] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841604#comment-16841604
 ] 

Hive QA commented on HIVE-21732:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} llap-server in master has 81 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
21s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 21s{color} 
| {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} llap-server: The patch generated 1 new + 36 unchanged 
- 0 fixed = 37 total (was 36) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17238/dev-support/hive-personality.sh
 |
| git revision | master / 9a10bc2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17238/yetus/patch-mvninstall-llap-server.txt
 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17238/yetus/patch-compile-llap-server.txt
 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17238/yetus/patch-compile-llap-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17238/yetus/diff-checkstyle-llap-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17238/yetus/patch-findbugs-llap-server.txt
 |
| modules | C: common llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17238/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, 
> HIVE-21732.4.patch, HIVE-21732.5.patch, HIVE-21732.patch
>
>
> For evaluation and testing, it would be good to have a configurable way to inject 
> latency for LLAP tasks.

[jira] [Comment Edited] (HIVE-21709) Count with expression does not work in Parquet

2019-05-16 Thread David Lavati (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841569#comment-16841569
 ] 

David Lavati edited comment on HIVE-21709 at 5/16/19 5:42 PM:
--

These steps, along with an Apache committer's +1 approval on this ticket - which 
is essentially the outcome of the review - are needed to make it eligible for 
merging. Merging is also done by committers.


was (Author: dlavati):
These steps, along with an Apache committer's +1 approval on this ticket - which 
is essentially the outcome of the review - are needed to get it merged.

> Count with expression does not work in Parquet
> --
>
> Key: HIVE-21709
> URL: https://issues.apache.org/jira/browse/HIVE-21709
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.2
>Reporter: Mainak Ghosh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For a Parquet file with a nested schema, count with a struct field as the column 
> expression does not work when you filter on another column in the same struct. 
> Here are the steps to reproduce:
> {code:java}
> CREATE TABLE `test_table`( `rtb_win` struct<`impression_id`:string, 
> `pub_id`:string>) ROW FORMAT SERDE 
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS 
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat' 
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> INSERT INTO TABLE test_table SELECT named_struct('impression_id', 'cat', 
> 'pub_id', '2');
> select count(rtb_win.impression_id) from test_table where rtb_win.pub_id ='2';
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
> future versions. Consider using a different execution engine (i.e. spark, 
> tez) or using Hive 1.X releases.
> +--+ 
> | _c0  |
> +--+ 
> | 0    | 
> +--+
> select count(*) from test_table where rtb_win.pub_id ='2';
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
> future versions. Consider using a different execution engine (i.e. spark, 
> tez) or using Hive 1.X releases. 
> +--+ 
> | _c0  | 
> +--+ 
> | 1    | 
> +--+{code}
> As you can see the first query returns the wrong result while the second one 
> returns the correct result.
> The issue is a column order mismatch between the actual Parquet file 
> (impression_id first, pub_id second) and the Hive prunedCols data structure 
> (the reverse order). As a result, the filter compares against the wrong value 
> and the count returns 0. I have been able to identify the cause of this mismatch.
> I would love to get the code reviewed and merged. Some of the code changes 
> are changes to commits from Ferdinand Xu and Chao Sun.
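
The pruned-order mismatch described above can be modeled outside Hive. This is an illustrative Python sketch (the column names and row come from the repro; the reader functions are hypothetical stand-ins, not Hive's actual Parquet reader):

```python
# Toy model of the mismatch: values are stored in the file's physical
# column order, but the buggy reader resolves positions against the
# planner's pruned-column list, which arrives in the reverse order.
file_schema = ["impression_id", "pub_id"]   # physical order in the Parquet file
pruned_cols = ["pub_id", "impression_id"]   # order seen by the reader (reversed)
rows = [("cat", "2")]                       # the single row from the repro

def count_buggy():
    # Wrong: "pub_id" resolves to index 0, so the filter reads "cat".
    return sum(1 for row in rows if row[pruned_cols.index("pub_id")] == "2")

def count_fixed():
    # Right: resolve positions against the file's own schema.
    return sum(1 for row in rows if row[file_schema.index("pub_id")] == "2")

print(count_buggy())  # 0 -- the wrong count from the report
print(count_fixed())  # 1 -- the expected count
```

With the mismatched ordering the filter silently compares the wrong column's value, which is why count(expr) returns 0 while count(*) (which never reads the struct member by position) returns 1.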



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21709) Count with expression does not work in Parquet

2019-05-16 Thread David Lavati (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841569#comment-16841569
 ] 

David Lavati commented on HIVE-21709:
-

These steps, along with an Apache committer's +1 approval on this ticket - which 
is essentially the outcome of the review - are needed to get it merged.

> Count with expression does not work in Parquet
> --
>
> Key: HIVE-21709
> URL: https://issues.apache.org/jira/browse/HIVE-21709
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.2
>Reporter: Mainak Ghosh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For a Parquet file with a nested schema, count with a struct field as the column 
> expression does not work when you filter on another column in the same struct. 
> Here are the steps to reproduce:
> {code:java}
> CREATE TABLE `test_table`( `rtb_win` struct<`impression_id`:string, 
> `pub_id`:string>) ROW FORMAT SERDE 
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS 
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat' 
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> INSERT INTO TABLE test_table SELECT named_struct('impression_id', 'cat', 
> 'pub_id', '2');
> select count(rtb_win.impression_id) from test_table where rtb_win.pub_id ='2';
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
> future versions. Consider using a different execution engine (i.e. spark, 
> tez) or using Hive 1.X releases.
> +--+ 
> | _c0  |
> +--+ 
> | 0    | 
> +--+
> select count(*) from test_table where rtb_win.pub_id ='2';
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
> future versions. Consider using a different execution engine (i.e. spark, 
> tez) or using Hive 1.X releases. 
> +--+ 
> | _c0  | 
> +--+ 
> | 1    | 
> +--+{code}
> As you can see the first query returns the wrong result while the second one 
> returns the correct result.
> The issue is a column order mismatch between the actual Parquet file 
> (impression_id first, pub_id second) and the Hive prunedCols data structure 
> (the reverse order). As a result, the filter compares against the wrong value 
> and the count returns 0. I have been able to identify the cause of this mismatch.
> I would love to get the code reviewed and merged. Some of the code changes 
> are changes to commits from Ferdinand Xu and Chao Sun.





[jira] [Commented] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841563#comment-16841563
 ] 

Hive QA commented on HIVE-21732:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968931/HIVE-21732.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16057 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17237/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17237/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17237/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968931 - PreCommit-HIVE-Build

> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, 
> HIVE-21732.4.patch, HIVE-21732.5.patch, HIVE-21732.patch
>
>
> For evaluation and testing, it would be good to have a configurable way to inject 
> latency for LLAP tasks.
> The configuration should be able to control how much latency is injected into 
> each daemon.
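
The requested behavior — a configurable, per-daemon latency injected before task execution — can be sketched in a few lines. This is an illustrative Python model under assumed names (`make_latency_injector`, the daemon ids, and `run_task` are all hypothetical), not the actual patch:

```python
import random
import time

def make_latency_injector(min_ms, max_ms, seed=42):
    """Build a hook that sleeps a bounded random interval before a task runs."""
    rng = random.Random(seed)  # seeded for reproducible test runs

    def inject():
        delay_ms = rng.uniform(min_ms, max_ms)
        time.sleep(delay_ms / 1000.0)
        return delay_ms

    return inject

# One injector per daemon, each with its own configured latency range.
daemon_injectors = {
    "daemon-0": make_latency_injector(0, 0),   # no added latency
    "daemon-1": make_latency_injector(5, 20),  # 5-20 ms per task
}

def run_task(daemon, task):
    daemon_injectors[daemon]()  # inject the configured latency, then execute
    return task()

print(run_task("daemon-1", lambda: "done"))  # done (after a short sleep)
```

Keeping the range per daemon, as the description asks, lets a test skew one daemon to be slow and observe how scheduling reacts.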





[jira] [Updated] (HIVE-21743) day( ) gives wrong day from the date in Apache Hive 3.1 server

2019-05-16 Thread Adarshdeep Cheema (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adarshdeep Cheema updated HIVE-21743:
-
Description: 
Using Apache Hive 3.1 server 

Run the following SQL and you will get 3 instead of 1.

SELECT
 (day( DATE '0001-01-01'))

FROM
 `table`

PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 

  was:
Using Apache Hive 3.1 server 


Run the following SQL and you will get 3 instead of i.

SELECT
 (day( DATE '0001-01-01'))

FROM
 `tabke`



PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 


> day( ) gives wrong day from the date in Apache Hive 3.1 server
> --
>
> Key: HIVE-21743
> URL: https://issues.apache.org/jira/browse/HIVE-21743
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.1
> Environment: Server: Apache Hive 3.1 
> Driver hive-jdbc-3.1.0.3.1.0.0-78
>Reporter: Adarshdeep Cheema
>Priority: Critical
>
> Using Apache Hive 3.1 server 
> Run the following SQL and you will get 3 instead of 1.
> SELECT
>  (day( DATE '0001-01-01'))
> FROM
>  `table`
> PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 
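
One plausible cause — not confirmed in this thread — is Hive 3.x's switch to the proleptic Gregorian calendar: proleptic Gregorian 0001-01-01 falls on 0001-01-03 in the Julian calendar, so a literal parsed in one calendar and rendered through the other (as Hive 2.x's legacy hybrid calendar would) shifts by exactly two days. The arithmetic, using standard date-conversion formulas in Python:

```python
def gregorian_to_jdn(y, m, d):
    # Proleptic Gregorian date -> Julian day number (Fliegel-Van Flandern).
    a = (14 - m) // 12
    yy = y + 4800 - a
    mm = m + 12 * a - 3
    return d + (153 * mm + 2) // 5 + 365 * yy + yy // 4 - yy // 100 + yy // 400 - 32045

def jdn_to_julian(jdn):
    # Julian day number -> date in the (proleptic) Julian calendar.
    c = jdn + 32082
    d = (4 * c + 3) // 1461
    e = c - (1461 * d) // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = d - 4800 + m // 10
    return year, month, day

print(jdn_to_julian(gregorian_to_jdn(1, 1, 1)))  # (1, 1, 3): a two-day shift
```

The two-day offset matches the reported symptom (day() returning 3 instead of 1), which is consistent with a parse/format calendar mismatch rather than a fault in day() itself.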





[jira] [Updated] (HIVE-21743) day( ) gives wrong day from the date in Apache Hive 3.1 server

2019-05-16 Thread Adarshdeep Cheema (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adarshdeep Cheema updated HIVE-21743:
-
Affects Version/s: (was: 3.0.0)
   3.0.1

> day( ) gives wrong day from the date in Apache Hive 3.1 server
> --
>
> Key: HIVE-21743
> URL: https://issues.apache.org/jira/browse/HIVE-21743
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.1
> Environment: Server: Apache Hive 3.1 
> Driver hive-jdbc-3.1.0.3.1.0.0-78
>Reporter: Adarshdeep Cheema
>Priority: Critical
> Fix For: 3.1.2
>
>
> Using Apache Hive 3.1 server 
> Run the following SQL and you will get 3 instead of 1.
> SELECT
>  (day( DATE '0001-01-01'))
> FROM
> `table`
> PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 





[jira] [Updated] (HIVE-21743) day( ) gives wrong day from the date in Apache Hive 3.1 server

2019-05-16 Thread Adarshdeep Cheema (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adarshdeep Cheema updated HIVE-21743:
-
Target Version/s: 3.0.1  (was: 3.0.0)

> day( ) gives wrong day from the date in Apache Hive 3.1 server
> --
>
> Key: HIVE-21743
> URL: https://issues.apache.org/jira/browse/HIVE-21743
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.1
> Environment: Server: Apache Hive 3.1 
> Driver hive-jdbc-3.1.0.3.1.0.0-78
>Reporter: Adarshdeep Cheema
>Priority: Critical
> Fix For: 3.1.2
>
>
> Using Apache Hive 3.1 server 
> Run the following SQL and you will get 3 instead of 1.
> SELECT
>  (day( DATE '0001-01-01'))
> FROM
> `table`
> PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 





[jira] [Updated] (HIVE-21743) day( ) gives wrong day from the date in Apache Hive 3.1 server

2019-05-16 Thread Adarshdeep Cheema (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adarshdeep Cheema updated HIVE-21743:
-
Fix Version/s: (was: 3.1.2)

> day( ) gives wrong day from the date in Apache Hive 3.1 server
> --
>
> Key: HIVE-21743
> URL: https://issues.apache.org/jira/browse/HIVE-21743
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.1
> Environment: Server: Apache Hive 3.1 
> Driver hive-jdbc-3.1.0.3.1.0.0-78
>Reporter: Adarshdeep Cheema
>Priority: Critical
>
> Using Apache Hive 3.1 server 
> Run the following SQL and you will get 3 instead of 1.
> SELECT
>  (day( DATE '0001-01-01'))
> FROM
> `table`
> PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 





[jira] [Updated] (HIVE-21743) day( ) gives wrong day from the date in Apache Hive 3.1 server

2019-05-16 Thread Adarshdeep Cheema (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adarshdeep Cheema updated HIVE-21743:
-
Description: 
Using Apache Hive 3.1 server 

Run the following SQL and you will get 3 instead of 1

SELECT
 (day( DATE '0001-01-01'))

FROM
 `table`

PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 

  was:
Using Apache Hive 3.1 server 

Run the following SQL and you will get 3 instead of i.

SELECT
 (day( DATE '0001-01-01'))

FROM
 `table`

PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 


> day( ) gives wrong day from the date in Apache Hive 3.1 server
> --
>
> Key: HIVE-21743
> URL: https://issues.apache.org/jira/browse/HIVE-21743
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.1
> Environment: Server: Apache Hive 3.1 
> Driver hive-jdbc-3.1.0.3.1.0.0-78
>Reporter: Adarshdeep Cheema
>Priority: Critical
>
> Using Apache Hive 3.1 server 
> Run the following SQL and you will get 3 instead of 1
> SELECT
>  (day( DATE '0001-01-01'))
> FROM
>  `table`
> PLEASE NOTE THIS DOES NOT HAPPEN WITH Apache HIVE 2.1 SERVER 





[jira] [Commented] (HIVE-21709) Count with expression does not work in Parquet

2019-05-16 Thread Mainak Ghosh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841549#comment-16841549
 ] 

Mainak Ghosh commented on HIVE-21709:
-

Thanks David. I will add the unit test and the patch after following the 
documentation you shared. Does the code review depend on these steps?

I am not sure whether the problem occurs in the current Hive version. I would 
assume it does as the original code has not changed in the current version 
either. Can you help me test it in the new version?

 

> Count with expression does not work in Parquet
> --
>
> Key: HIVE-21709
> URL: https://issues.apache.org/jira/browse/HIVE-21709
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.2
>Reporter: Mainak Ghosh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For a Parquet file with a nested schema, count with a struct field as the column 
> expression does not work when you filter on another column in the same struct. 
> Here are the steps to reproduce:
> {code:java}
> CREATE TABLE `test_table`( `rtb_win` struct<`impression_id`:string, 
> `pub_id`:string>) ROW FORMAT SERDE 
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS 
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat' 
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> INSERT INTO TABLE test_table SELECT named_struct('impression_id', 'cat', 
> 'pub_id', '2');
> select count(rtb_win.impression_id) from test_table where rtb_win.pub_id ='2';
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
> future versions. Consider using a different execution engine (i.e. spark, 
> tez) or using Hive 1.X releases.
> +--+ 
> | _c0  |
> +--+ 
> | 0    | 
> +--+
> select count(*) from test_table where rtb_win.pub_id ='2';
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
> future versions. Consider using a different execution engine (i.e. spark, 
> tez) or using Hive 1.X releases. 
> +--+ 
> | _c0  | 
> +--+ 
> | 1    | 
> +--+{code}
> As you can see the first query returns the wrong result while the second one 
> returns the correct result.
> The issue is a column order mismatch between the actual Parquet file 
> (impression_id first, pub_id second) and the Hive prunedCols data structure 
> (the reverse order). As a result, the filter compares against the wrong value 
> and the count returns 0. I have been able to identify the cause of this mismatch.
> I would love to get the code reviewed and merged. Some of the code changes 
> are changes to commits from Ferdinand Xu and Chao Sun.





[jira] [Commented] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2019-05-16 Thread Mithun Radhakrishnan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841538#comment-16841538
 ] 

Mithun Radhakrishnan commented on HIVE-17794:
-

This patch will need rebasing. I'm afraid it's been ages since I posted this, 
so it *could* take some doing.

> HCatLoader breaks when a member is added to a struct-column of a table
> --
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Major
> Attachments: HIVE-17794.02.patch, HIVE-17794.03.patch, 
> HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> 

[jira] [Commented] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841522#comment-16841522
 ] 

Hive QA commented on HIVE-21732:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
48s{color} | {color:blue} llap-server in master has 81 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
23s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} llap-server: The patch generated 3 new + 36 unchanged 
- 0 fixed = 39 total (was 36) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17237/dev-support/hive-personality.sh
 |
| git revision | master / 9a10bc2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17237/yetus/patch-mvninstall-llap-server.txt
 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17237/yetus/patch-compile-llap-server.txt
 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17237/yetus/patch-compile-llap-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17237/yetus/diff-checkstyle-llap-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17237/yetus/patch-findbugs-llap-server.txt
 |
| modules | C: common llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17237/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, 
> HIVE-21732.4.patch, HIVE-21732.5.patch, HIVE-21732.patch
>
>
> For evaluation and testing, it would be good to have a configurable way to inject 
> latency for LLAP tasks.

[jira] [Updated] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21732:
--
Attachment: HIVE-21732.5.patch

> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, 
> HIVE-21732.4.patch, HIVE-21732.5.patch, HIVE-21732.patch
>
>
> For evaluation and testing, it would be good to have a configurable way to inject 
> latency for LLAP tasks.
> The configuration should be able to control how much latency is injected into 
> each daemon.





[jira] [Updated] (HIVE-21740) Collect LLAP execution latency metrics

2019-05-16 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21740:
--
Attachment: HIVE-21740.patch

> Collect LLAP execution latency metrics
> --
>
> Key: HIVE-21740
> URL: https://issues.apache.org/jira/browse/HIVE-21740
> Project: Hive
>  Issue Type: New Feature
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21740.patch
>
>
> Collect metrics for LLAP task execution times





[jira] [Updated] (HIVE-21740) Collect LLAP execution latency metrics

2019-05-16 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21740:
--
Status: Patch Available  (was: Open)

> Collect LLAP execution latency metrics
> --
>
> Key: HIVE-21740
> URL: https://issues.apache.org/jira/browse/HIVE-21740
> Project: Hive
>  Issue Type: New Feature
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21740.patch
>
>
> Collect metrics for LLAP task execution times





[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243425=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243425
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 15:52
Start Date: 16/May/19 15:52
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284759007
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationWithTableMigration.java
 ##
 @@ -504,4 +507,56 @@ public void dynamicallyConvertExternalToManagedTable() 
throws Throwable {
 .runFailure("alter table t1 set tblproperties('EXTERNAL'='false')")
 .runFailure("alter table t2 set 
tblproperties('EXTERNAL'='false')");
   }
+
+  @Test
+  public void testMigrationWithUpgrade() throws Throwable {
+WarehouseInstance.Tuple tuple = primary.run("use " + primaryDbName)
+.run("create table tacid (id int) clustered by(id) into 3 buckets 
stored as orc ")
 
 Review comment:
   Better to have some data in the tacid table too, to know whether writeId 
seeding impacts data read after bootstrap.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243425)
Time Spent: 1h 10m  (was: 1h)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The scenario is:
>  # A replication policy is set up with a Hive 3.0 source cluster (strict managed
> tables set to false) and a Hive 4.0 target cluster with strict managed tables set
> to true.
>  # The user upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables and
> hive.repl.bootstrap.acid.tables to true, triggering a bootstrap of the newly
> converted ACID tables.
>  # As the old tables are non-transactional, the dump does not filter out the events
> even though bootstrap of ACID tables is set to true. This causes the repl load
> to fail because the write id is not set in the table object.
>  # If we ignore the event replay, the bootstrap fails with a dump directory
> mismatch error.
> The fix should be:
>  # Skip dumping the alter table event if bootstrap of ACID tables is set to true
> and the alter converts a non-ACID table to an ACID table.
>  # In case of bootstrap during incremental load, ignore the dump directory
> property set in the table object.
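The skip rule in fix #1 reduces to a small predicate over three booleans. The sketch below is illustrative only; the real logic lives in Hive's AlterTableHandler and reads HiveConf and the before/after Table objects, while the class and method names here are hypothetical.

```java
// Hypothetical sketch of the event-skip rule from fix #1; not the actual
// Hive code, which checks HiveConf.ConfVars.REPL_BOOTSTRAP_ACID_TABLES and
// AcidUtils.isTransactionalTable on the before/after table objects.
class AlterEventFilter {
    // Skip dumping an ALTER TABLE event when bootstrap of ACID tables is
    // requested and the event converts a non-transactional table into a
    // transactional one (the table will be bootstrapped instead).
    static boolean shouldSkipAlterEvent(boolean bootstrapAcidTables,
                                        boolean beforeIsTransactional,
                                        boolean afterIsTransactional) {
        return bootstrapAcidTables && !beforeIsTransactional && afterIsTransactional;
    }

    public static void main(String[] args) {
        // Migration event (non-ACID -> ACID) with bootstrap enabled: skipped.
        System.out.println(shouldSkipAlterEvent(true, false, true));  // prints true
        // Alter on an already-ACID table: still dumped and replayed.
        System.out.println(shouldSkipAlterEvent(true, true, true));   // prints false
    }
}
```

All other alter events, including those seen when bootstrap is not requested, continue to be dumped and replayed as before.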



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243422&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243422
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 15:52
Start Date: 16/May/19 15:52
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284758759
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationWithTableMigration.java
 ##
 @@ -504,4 +507,56 @@ public void dynamicallyConvertExternalToManagedTable() throws Throwable {
     .runFailure("alter table t1 set tblproperties('EXTERNAL'='false')")
     .runFailure("alter table t2 set tblproperties('EXTERNAL'='false')");
   }
+
+  @Test
+  public void testMigrationWithUpgrade() throws Throwable {
+    WarehouseInstance.Tuple tuple = primary.run("use " + primaryDbName)
+        .run("create table tacid (id int) clustered by(id) into 3 buckets stored as orc")
+        .run("create table texternal (id int)")
+        .run("insert into texternal values (1)")
+        .dump(primaryDbName, null);
+    replica.load(replicatedDbName, tuple.dumpLocation)
+        .run("use " + replicatedDbName)
+        .run("repl status " + replicatedDbName)
+        .verifyResult(tuple.lastReplicationId)
+        .run("select count(*) from tacid")
+        .verifyResult("0")
+        .run("select id from texternal")
+        .verifyResult("1");
+
+    assertTrue(isFullAcidTable(replica.getTable(replicatedDbName, "tacid")));
+    assertFalse(MetaStoreUtils.isExternalTable(replica.getTable(replicatedDbName, "texternal")));
+
+    // forcefully (setting db property) alter the table type. For acid table, set the bootstrap acid table to true. For
 
 Review comment:
   We should explicitly mention that this is a mock-up of the HiveStrictManagedMigration
tool, which is used during upgrade to migrate tables based on migration rules.
   We simulate the scenario by explicitly allowing alter commands to change
table types.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243422)
Time Spent: 40m  (was: 0.5h)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The scenario is:
>  # A replication policy is set up with a Hive 3.0 source cluster (strict managed
> tables set to false) and a Hive 4.0 target cluster with strict managed tables set
> to true.
>  # The user upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables and
> hive.repl.bootstrap.acid.tables to true, triggering a bootstrap of the newly
> converted ACID tables.
>  # As the old tables are non-transactional, the dump does not filter out the events
> even though bootstrap of ACID tables is set to true. This causes the repl load
> to fail because the write id is not set in the table object.
>  # If we ignore the event replay, the bootstrap fails with a dump directory
> mismatch error.
> The fix should be:
>  # Skip dumping the alter table event if bootstrap of ACID tables is set to true
> and the alter converts a non-ACID table to an ACID table.
>  # In case of bootstrap during incremental load, ignore the dump directory
> property set in the table object.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243424&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243424
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 15:52
Start Date: 16/May/19 15:52
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284776123
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/events/AlterTableHandler.java
 ##
 @@ -91,6 +92,14 @@ public void handle(Context withinContext) throws Exception {
   return;
 }
 
+    if (withinContext.hiveConf.getBoolVar(HiveConf.ConfVars.REPL_BOOTSTRAP_ACID_TABLES)) {
+      if (!AcidUtils.isTransactionalTable(before) && AcidUtils.isTransactionalTable(after)) {
+        LOG.info("The table " + after.getTableName() + " is converted to ACID table." +
+            " It will be replicated with bootstrap load as REPL_BOOTSTRAP_ACID_TABLES is set to true.");
 
 Review comment:
   Use hive.repl.bootstrap.acid.tables instead of REPL_BOOTSTRAP_ACID_TABLES.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243424)
Time Spent: 1h  (was: 50m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The scenario is:
>  # A replication policy is set up with a Hive 3.0 source cluster (strict managed
> tables set to false) and a Hive 4.0 target cluster with strict managed tables set
> to true.
>  # The user upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables and
> hive.repl.bootstrap.acid.tables to true, triggering a bootstrap of the newly
> converted ACID tables.
>  # As the old tables are non-transactional, the dump does not filter out the events
> even though bootstrap of ACID tables is set to true. This causes the repl load
> to fail because the write id is not set in the table object.
>  # If we ignore the event replay, the bootstrap fails with a dump directory
> mismatch error.
> The fix should be:
>  # Skip dumping the alter table event if bootstrap of ACID tables is set to true
> and the alter converts a non-ACID table to an ACID table.
>  # In case of bootstrap during incremental load, ignore the dump directory
> property set in the table object.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243423&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243423
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 15:52
Start Date: 16/May/19 15:52
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284777561
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/table/LoadTable.java
 ##
 @@ -159,10 +159,20 @@ public TaskTracker tasks() throws Exception {
 return tracker;
   }
 
-  private ReplLoadOpType getLoadTableType(Table table) throws InvalidOperationException, HiveException {
+  private ReplLoadOpType getLoadTableType(Table table, boolean isBootstrapDuringInc)
+      throws InvalidOperationException, HiveException {
     if (table == null) {
       return ReplLoadOpType.LOAD_NEW;
     }
+
+    // In case user has asked for bootstrap of transactional table, we replace the old one if present. This is to
+    // make sure that the transactional info like write id etc for the table is consistent between the
+    // source and target cluster.
+    if (isBootstrapDuringInc && AcidUtils.isTransactionalTable(table)) {
 
 Review comment:
   I think this check should be done even for external tables. Why don't we
always replace the table when isBootstrapDuringInc is true?
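The reviewer's suggestion amounts to making the replace decision unconditional on table type. A minimal sketch of that decision, with simplified booleans standing in for Hive's Table objects and LoadTable.getLoadTableType (the LOAD_SKIP fallback is a placeholder, not Hive's real logic):

```java
// Simplified sketch of the review suggestion: during a bootstrap phase of an
// incremental load, an existing table is always replaced (not only ACID ones),
// so write ids and other metadata are re-seeded consistently. Types and the
// LOAD_SKIP fallback are placeholders, not Hive's real signatures.
class LoadTypeSketch {
    enum ReplLoadOpType { LOAD_NEW, LOAD_REPLACE, LOAD_SKIP }

    static ReplLoadOpType getLoadTableType(boolean tableExists, boolean isBootstrapDuringInc) {
        if (!tableExists) {
            return ReplLoadOpType.LOAD_NEW;     // table absent on target: create it
        }
        if (isBootstrapDuringInc) {
            return ReplLoadOpType.LOAD_REPLACE; // always replace during inc-load bootstrap
        }
        return ReplLoadOpType.LOAD_SKIP;        // stand-in for the usual replay checks
    }
}
```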
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243423)
Time Spent: 50m  (was: 40m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The scenario is:
>  # A replication policy is set up with a Hive 3.0 source cluster (strict managed
> tables set to false) and a Hive 4.0 target cluster with strict managed tables set
> to true.
>  # The user upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables and
> hive.repl.bootstrap.acid.tables to true, triggering a bootstrap of the newly
> converted ACID tables.
>  # As the old tables are non-transactional, the dump does not filter out the events
> even though bootstrap of ACID tables is set to true. This causes the repl load
> to fail because the write id is not set in the table object.
>  # If we ignore the event replay, the bootstrap fails with a dump directory
> mismatch error.
> The fix should be:
>  # Skip dumping the alter table event if bootstrap of ACID tables is set to true
> and the alter converts a non-ACID table to an ACID table.
>  # In case of bootstrap during incremental load, ignore the dump directory
> property set in the table object.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243420&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243420
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 15:52
Start Date: 16/May/19 15:52
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284771911
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/WarehouseInstance.java
 ##
 @@ -563,7 +563,21 @@ public void testEventCounts(String dbName, long fromEventId, Long toEventId, Int
   }
 
   public boolean isAcidEnabled() {
-    return hiveConf.getBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY);
+    if (hiveConf.getBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY) &&
+        hiveConf.getVar(HiveConf.ConfVars.HIVE_TXN_MANAGER).equals("org.apache.hadoop.hive.ql.lockmgr.DbTxnManager")) {
+      return true;
+    }
+    return false;
+  }
+
+  public void disableAcid() {
+    hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
+    hiveConf.setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER, "org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager");
+  }
+
+  public void enableAcid() {
+    hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, true);
+    hiveConf.setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER, "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager");
 
 Review comment:
   Is it enough to do this? I think TxnDbUtil.prepDb should also be called;
otherwise it won't work. Since the modified test suite is a migration case, the
replica warehouse instance init has already initialised it, so it worked here,
but it won't work otherwise.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243420)
Time Spent: 20m  (was: 10m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The scenario is:
>  # A replication policy is set up with a Hive 3.0 source cluster (strict managed
> tables set to false) and a Hive 4.0 target cluster with strict managed tables set
> to true.
>  # The user upgrades the 3.0 source cluster to a 4.0 cluster using the upgrade tool.
>  # The upgrade converts all managed tables to ACID tables.
>  # In the next repl dump, the user sets hive.repl.dump.include.acid.tables and
> hive.repl.bootstrap.acid.tables to true, triggering a bootstrap of the newly
> converted ACID tables.
>  # As the old tables are non-transactional, the dump does not filter out the events
> even though bootstrap of ACID tables is set to true. This causes the repl load
> to fail because the write id is not set in the table object.
>  # If we ignore the event replay, the bootstrap fails with a dump directory
> mismatch error.
> The fix should be:
>  # Skip dumping the alter table event if bootstrap of ACID tables is set to true
> and the alter converts a non-ACID table to an ACID table.
>  # In case of bootstrap during incremental load, ignore the dump directory
> property set in the table object.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21731) Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster with strict managed table set to true.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21731?focusedWorklogId=243421&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243421
 ]

ASF GitHub Bot logged work on HIVE-21731:
-

Author: ASF GitHub Bot
Created on: 16/May/19 15:52
Start Date: 16/May/19 15:52
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #628: HIVE-21731 : 
Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 cluster 
with strict managed table set to true.
URL: https://github.com/apache/hive/pull/628#discussion_r284773568
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java
 ##
 @@ -471,7 +473,7 @@ private int executeIncrementalLoad(DriverContext driverContext) {
     if (work.hasBootstrapLoadTasks()) {
       LOG.debug("Current incremental dump have tables to be bootstrapped. Switching to bootstrap "
           + "mode after applying all events.");
-      return executeBootStrapLoad(driverContext);
+      return executeBootStrapLoad(driverContext, true);
 
 Review comment:
   Can we set isBootstrapDuringIncLoad in the ReplLoadWork constructor itself,
so that we need not pass it as an extra argument or set it after each
iteration?
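The suggestion above can be pictured as computing the flag once when the work object is built; a toy sketch, where ReplLoadWorkSketch is a hypothetical stand-in for Hive's ReplLoadWork:

```java
// Toy sketch of computing the flag once at construction time instead of
// passing it through executeBootStrapLoad; ReplLoadWorkSketch is a
// hypothetical stand-in for Hive's ReplLoadWork class.
class ReplLoadWorkSketch {
    private final boolean bootstrapDuringIncLoad;

    ReplLoadWorkSketch(boolean incrementalDumpHasBootstrapTasks) {
        // Set once for the lifetime of the work object; no per-iteration updates.
        this.bootstrapDuringIncLoad = incrementalDumpHasBootstrapTasks;
    }

    boolean isBootstrapDuringIncLoad() {
        return bootstrapDuringIncLoad;
    }
}
```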
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243421)
Time Spent: 0.5h  (was: 20m)

> Hive import fails, post upgrade of source 3.0 cluster, to a target 4.0 
> cluster with strict managed table set to true.
> -
>
> Key: HIVE-21731
> URL: https://issues.apache.org/jira/browse/HIVE-21731
> Project: Hive
>  Issue Type: Bug
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21731.01.patch, HIVE-21731.02.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The scenario is 
>  # Replication policy is set with hive  3.0 source cluster (strict managed 
> table set to false) and hive 4.0 target cluster with strict managed table set 
>  true.
>  # User upgrades the 3.0 source cluster to 4.0 cluster using upgrade tool.
>  # The upgrade converts all managed tables to acid tables.
>  # In the next repl dump, user sets hive .repl .dump .include .acid .tables 
> and hive .repl .bootstrap. acid. tables set true triggering bootstrap of 
> newly converted ACID tables.
>  # As the old tables are non-txn tables, dump is not filtering the events 
> even tough bootstrap acid table is set to true. This is causing the repl load 
> to fail as the write id is not set in the table object.
>  # If we ignore the event replay, the bootstrap is failing with dump 
> directory mismatch error.
> The fix should be 
>  # Ignore dumping the alter table event if bootstrap acid table is set true 
> and the alter is converting a non-acid table to acid table.
>  # In case of bootstrap during incremental load, ignore the dump directory 
> property set in table object.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21732:
--
Attachment: HIVE-21732.4.patch

> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, 
> HIVE-21732.4.patch, HIVE-21732.patch
>
>
> For testing and evaluation, it would be good to have a configurable way to inject
> latency for LLAP tasks.
> The configuration should be able to control how much latency is injected into
> each daemon.
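The per-daemon control described in this ticket could be sketched as a config-driven delay. A minimal illustration follows; all names are hypothetical, since the patch's actual configuration keys and LLAP hooks are not quoted in this thread.

```java
// Hedged sketch of per-daemon latency injection as described in the ticket;
// the actual patch's configuration names and LLAP hooks are not shown here,
// so everything below is illustrative.
import java.util.HashMap;
import java.util.Map;

class LatencySketch {
    // Parse a per-daemon spec such as "daemon1=50,daemon2=200" (milliseconds).
    static Map<String, Long> parseSpec(String spec) {
        Map<String, Long> out = new HashMap<>();
        for (String part : spec.split(",")) {
            String[] kv = part.split("=");
            out.put(kv[0].trim(), Long.parseLong(kv[1].trim()));
        }
        return out;
    }

    // Delay task start by the configured amount for this daemon (0 if unset).
    static void maybeInjectLatency(Map<String, Long> perDaemonMs, String daemonId)
            throws InterruptedException {
        long ms = perDaemonMs.getOrDefault(daemonId, 0L);
        if (ms > 0) {
            Thread.sleep(ms);
        }
    }
}
```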



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841445#comment-16841445
 ] 

Hive QA commented on HIVE-21732:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968907/HIVE-21732.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16027 tests 
executed
*Failed tests:*
{noformat}
TestMiniLlapLocalCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=164)

[bucket3.q,materialized_view_create_rewrite_4.q,schema_evol_text_nonvec_table.q,external_jdbc_auth.q,check_constraint.q,cbo_simple_select.q,cbo_rp_udf_udaf_stats_opt.q,vector_parquet_nested_two_level_complex.q,vector_interval_1.q,groupby1.q,partition_shared_scan.q,vector_map_order.q,cbo_rp_udf_udaf.q,vector_decimal_aggregate.q,constprog_dpp.q,vector_groupby_grouping_sets3.q,leftsemijoin_mr.q,results_cache_transactional.q,constant_prop_when.q,update_all_types.q,auto_sortmerge_join_6.q,materialized_view_rewrite_part_1.q,vector_llap_text_1.q,vector_groupby4.q,ptf.q,update_where_non_partitioned.q,vectorized_nested_mapjoin.q,schema_evol_text_nonvec_part.q,enforce_constraint_notnull.q,vector_windowing_range_multiorder.q]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17236/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17236/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17236/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968907 - PreCommit-HIVE-Build

> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, HIVE-21732.patch
>
>
> For testing and evaluation, it would be good to have a configurable way to inject
> latency for LLAP tasks.
> The configuration should be able to control how much latency is injected into
> each daemon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841388#comment-16841388
 ] 

Hive QA commented on HIVE-21732:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} llap-server in master has 81 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
22s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 22s{color} 
| {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} llap-server: The patch generated 7 new + 36 unchanged 
- 0 fixed = 43 total (was 36) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17236/dev-support/hive-personality.sh
 |
| git revision | master / 9a10bc2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17236/yetus/patch-mvninstall-llap-server.txt
 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17236/yetus/patch-compile-llap-server.txt
 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17236/yetus/patch-compile-llap-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17236/yetus/diff-checkstyle-llap-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17236/yetus/patch-findbugs-llap-server.txt
 |
| modules | C: common llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17236/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, HIVE-21732.patch
>
>
> For testing and evaluation, it would be good to have a configurable way to inject
> latency for LLAP tasks.
> The configuration should be able to control how much latency is injected into
> each daemon.

[jira] [Updated] (HIVE-21741) Backport metastore SQL commits to branch-3

2019-05-16 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-21741:

Fix Version/s: 3.1.2
   3.2.0

> Backport metastore SQL commits to branch-3
> --
>
> Key: HIVE-21741
> URL: https://issues.apache.org/jira/browse/HIVE-21741
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Standalone Metastore
>Affects Versions: 3.1.1
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
>
> This is an umbrella for backporting the following metastore-related tickets 
> to branch-3:
>  * HIVE-20221
>  * HIVE-20833
>  * HIVE-21404
>  * HIVE-21462
>  Also including a .gitignore improvement:
>  * HIVE-21406



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21741) Backport metastore SQL commits to branch-3

2019-05-16 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-21741:

Affects Version/s: 3.1.1

> Backport metastore SQL commits to branch-3
> --
>
> Key: HIVE-21741
> URL: https://issues.apache.org/jira/browse/HIVE-21741
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Standalone Metastore
>Affects Versions: 3.1.1
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>
> This is an umbrella for backporting the following metastore-related tickets 
> to branch-3:
>  * HIVE-20221
>  * HIVE-20833
>  * HIVE-21404
>  * HIVE-21462
>  Also including a .gitignore improvement:
>  * HIVE-21406



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21741) Backport metastore SQL commits to branch-3

2019-05-16 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-21741:

Component/s: Standalone Metastore
 Metastore

> Backport metastore SQL commits to branch-3
> --
>
> Key: HIVE-21741
> URL: https://issues.apache.org/jira/browse/HIVE-21741
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Standalone Metastore
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>
> This is an umbrella for backporting the following metastore-related tickets 
> to branch-3:
>  * HIVE-20221
>  * HIVE-20833
>  * HIVE-21404
>  * HIVE-21462
>  Also including a .gitignore improvement:
>  * HIVE-21406



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21741) Backport metastore SQL commits to branch-3

2019-05-16 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-21741:

Description: 
This is an umbrella for backporting the following metastore-related tickets to 
branch-3:
 * HIVE-20221
 * HIVE-20833
 * HIVE-21404
 * HIVE-21462

 Also including a .gitignore improvement:
 * HIVE-21406

  was:
This is an umbrella for backporting the following metastore-related tickets to 
branch-3:
 * HIVE-20221
 * HIVE-20833
 * HIVE-21404
 * HIVE-21462

 


> Backport metastore SQL commits to branch-3
> --
>
> Key: HIVE-21741
> URL: https://issues.apache.org/jira/browse/HIVE-21741
> Project: Hive
>  Issue Type: Bug
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>
> This is an umbrella for backporting the following metastore-related tickets 
> to branch-3:
>  * HIVE-20221
>  * HIVE-20833
>  * HIVE-21404
>  * HIVE-21462
>  Also including a .gitignore improvement:
>  * HIVE-21406





[jira] [Assigned] (HIVE-21741) Backport metastore SQL commits to branch-3

2019-05-16 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati reassigned HIVE-21741:
---


> Backport metastore SQL commits to branch-3
> --
>
> Key: HIVE-21741
> URL: https://issues.apache.org/jira/browse/HIVE-21741
> Project: Hive
>  Issue Type: Bug
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>
> This is an umbrella for backporting the following metastore-related tickets 
> to branch-3:
>  * HIVE-20221
>  * HIVE-20833
>  * HIVE-21404
>  * HIVE-21462
>  





[jira] [Commented] (HIVE-21715) Adding a new partition specified by location (which is empty) leads to Exceptions

2019-05-16 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841367#comment-16841367
 ] 

Zoltan Haindrich commented on HIVE-21715:
-

[~ashutoshc] Could you please take a look?

> Adding a new partition specified by location (which is empty) leads to 
> Exceptions
> -
>
> Key: HIVE-21715
> URL: https://issues.apache.org/jira/browse/HIVE-21715
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21715.01.patch, HIVE-21715.01.patch, 
> HIVE-21715.02.patch, HIVE-21715.02.patch
>
>
> {code}
> create table supply (id int, part string, quantity int) partitioned by (day 
> int)
>stored as orc
>location 'hdfs:///tmp/a1'
>TBLPROPERTIES ('transactional'='true')
> ;
> alter table supply add partition (day=20110103) location 
>'hdfs:///tmp/a3';
> {code}
> check exception:
> {code}
> org.apache.hadoop.hive.ql.metadata.HiveException: Wrong file format. Please 
> check the file's format.
>   at 
> org.apache.hadoop.hive.ql.exec.MoveTask.checkFileFormats(MoveTask.java:696)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:370)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:210)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> {code}
> If the format check is disabled, an exception happens in AcidUtils instead, 
> because during the check it does not expect the location to be empty.





[jira] [Updated] (HIVE-21714) Insert overwrite on an acid/mm table is ineffective if the input is empty

2019-05-16 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21714:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thank you, [~isuller]!

> Insert overwrite on an acid/mm table is ineffective if the input is empty
> -
>
> Key: HIVE-21714
> URL: https://issues.apache.org/jira/browse/HIVE-21714
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21714.1.patch, HIVE-21714.1.patch, 
> HIVE-21714.2.patch, HIVE-21714.3.patch, HIVE-21714.4.patch, 
> HIVE-21714.4.patch, HIVE-21714.4.patch
>
>
> The issue of HIVE-18702 is present for ACID tables as well.





[jira] [Commented] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841352#comment-16841352
 ] 

Hive QA commented on HIVE-21739:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968906/HIVE-21739.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16056 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat
 (batchId=216)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17235/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17235/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17235/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968906 - PreCommit-HIVE-Build

> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21739.patch
>
>
> Since the addition of the foreign key constraint between the database ('DBS') 
> table and the catalogs ('CTLGS') table in HIVE-18755, we are unable to run a 
> simple create database command with an older version of the Metastore server. 
> This is because older versions have a JDO schema matching the older 'DBS' 
> schema, which did not have the additional 'CTLG_NAME' column.
> The error is as follows: 
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Exception thrown flushing changes to datastore)
> 
> java.sql.BatchUpdateException: Cannot add or update a child row: a foreign 
> key constraint fails ("metastore_1238"."DBS", CONSTRAINT "CTLG_FK1" FOREIGN 
> KEY ("CTLG_NAME") REFERENCES "CTLGS" ("NAME"))
> {code}





[jira] [Commented] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841306#comment-16841306
 ] 

Hive QA commented on HIVE-21739:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17235/dev-support/hive-personality.sh
 |
| git revision | master / ac477b6 |
| Default Java | 1.8.0_111 |
| modules | C: standalone-metastore/metastore-server . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17235/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21739.patch
>
>
> Since the addition of the foreign key constraint between the database ('DBS') 
> table and the catalogs ('CTLGS') table in HIVE-18755, we are unable to run a 
> simple create database command with an older version of the Metastore server. 
> This is because older versions have a JDO schema matching the older 'DBS' 
> schema, which did not have the additional 'CTLG_NAME' column.
> The error is as follows: 
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Exception thrown flushing changes to datastore)
> 
> java.sql.BatchUpdateException: Cannot add or update a child row: a foreign 
> key constraint fails ("metastore_1238"."DBS", CONSTRAINT "CTLG_FK1" FOREIGN 
> KEY ("CTLG_NAME") REFERENCES "CTLGS" ("NAME"))
> {code}





[jira] [Assigned] (HIVE-21740) Collect LLAP execution latency metrics

2019-05-16 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-21740:
-


> Collect LLAP execution latency metrics
> --
>
> Key: HIVE-21740
> URL: https://issues.apache.org/jira/browse/HIVE-21740
> Project: Hive
>  Issue Type: New Feature
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>
> Collect metrics for LLAP task execution times





[jira] [Updated] (HIVE-21732) Configurable injection of latency for LLAP task execution

2019-05-16 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21732:
--
Attachment: HIVE-21732.3.patch

> Configurable injection of latency for LLAP task execution
> -
>
> Key: HIVE-21732
> URL: https://issues.apache.org/jira/browse/HIVE-21732
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21732.2.patch, HIVE-21732.3.patch, HIVE-21732.patch
>
>
> For testing and evaluation, it would be good to have a configurable way to 
> inject latency into LLAP tasks.
> The configuration should be able to control how much latency is injected into 
> each daemon.





[jira] [Updated] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Aditya Shah (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Shah updated HIVE-21739:
---
Fix Version/s: 4.0.0
   Attachment: HIVE-21739.patch
   Status: Patch Available  (was: Open)

I've added "hive" as the default value for the 'CTLG_NAME' column of the 'DBS' 
table and added a default in 'CTLGS'. This also makes an upgraded schema from 
2.3 or earlier consistent with a fresh schema.
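
A minimal sketch of what such a change might look like (assumed MySQL-dialect 
DDL; the table and column names come from the error above, but the column 
sizes, catalog row values, and exact statements are illustrative, not the 
actual patch):

{code:sql}
-- Assumed sketch: seed the default 'hive' catalog row if it is missing,
-- so the CTLG_FK1 target exists on schemas upgraded from 2.3 or earlier.
INSERT INTO CTLGS (CTLG_ID, `NAME`, `DESC`, LOCATION_URI)
SELECT 1, 'hive', 'Default catalog for Hive', '/warehouse'
FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM CTLGS WHERE `NAME` = 'hive');

-- Assumed sketch: give DBS.CTLG_NAME a default so inserts from older
-- clients that omit the column still satisfy the foreign key constraint.
ALTER TABLE DBS MODIFY COLUMN CTLG_NAME varchar(256) NOT NULL DEFAULT 'hive';
{code}

With a column default in place, an older Metastore server's INSERT into 'DBS' 
(which never mentions 'CTLG_NAME') would resolve to 'hive' and pass the 
constraint check.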

> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.1, 1.2.0
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21739.patch
>
>
> Since the addition of the foreign key constraint between the database ('DBS') 
> table and the catalogs ('CTLGS') table in HIVE-18755, we are unable to run a 
> simple create database command with an older version of the Metastore server. 
> This is because older versions have a JDO schema matching the older 'DBS' 
> schema, which did not have the additional 'CTLG_NAME' column.
> The error is as follows: 
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Exception thrown flushing changes to datastore)
> 
> java.sql.BatchUpdateException: Cannot add or update a child row: a foreign 
> key constraint fails ("metastore_1238"."DBS", CONSTRAINT "CTLG_FK1" FOREIGN 
> KEY ("CTLG_NAME") REFERENCES "CTLGS" ("NAME"))
> {code}





[jira] [Assigned] (HIVE-21739) Make metastore DB backward compatible with pre-catalog versions of hive.

2019-05-16 Thread Aditya Shah (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Shah reassigned HIVE-21739:
--


> Make metastore DB backward compatible with pre-catalog versions of hive.
> 
>
> Key: HIVE-21739
> URL: https://issues.apache.org/jira/browse/HIVE-21739
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.1, 1.2.0
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
>
> Since the addition of the foreign key constraint between the database ('DBS') 
> table and the catalogs ('CTLGS') table in HIVE-18755, we are unable to run a 
> simple create database command with an older version of the Metastore server. 
> This is because older versions have a JDO schema matching the older 'DBS' 
> schema, which did not have the additional 'CTLG_NAME' column.
> The error is as follows: 
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> MetaException(message:Exception thrown flushing changes to datastore)
> 
> java.sql.BatchUpdateException: Cannot add or update a child row: a foreign 
> key constraint fails ("metastore_1238"."DBS", CONSTRAINT "CTLG_FK1" FOREIGN 
> KEY ("CTLG_NAME") REFERENCES "CTLGS" ("NAME"))
> {code}





[jira] [Commented] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841252#comment-16841252
 ] 

Hive QA commented on HIVE-17794:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901796/HIVE-17794.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17234/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17234/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17234/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-05-16 12:04:51.210
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-17234/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-05-16 12:04:51.216
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at ac477b6 HIVE-21730: HiveStatement.getQueryId throws 
TProtocolException when response is null (Sankar Hariappan, reviewed by Mahesh 
Kumar Behera)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at ac477b6 HIVE-21730: HiveStatement.getQueryId throws 
TProtocolException when response is null (Sankar Hariappan, reviewed by Mahesh 
Kumar Behera)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-05-16 12:04:52.449
+ rm -rf ../yetus_PreCommit-HIVE-Build-17234
+ mkdir ../yetus_PreCommit-HIVE-Build-17234
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-17234
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17234/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/hcatalog/core/src/test/java/org/apache/hive/hcatalog/MiniCluster.java: 
does not exist in index
error: 
a/hcatalog/core/src/test/java/org/apache/hive/hcatalog/data/HCatDataCheckUtil.java:
 does not exist in index
error: a/hcatalog/hcatalog-pig-adapter/pom.xml: does not exist in index
error: 
a/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/PigHCatUtil.java:
 does not exist in index
error: 
a/hcatalog/webhcat/java-client/src/test/java/org/apache/hive/hcatalog/api/TestHCatClient.java:
 does not exist in index
error: patch failed: 
hcatalog/core/src/test/java/org/apache/hive/hcatalog/data/HCatDataCheckUtil.java:55
Falling back to three-way merge...
Applied patch to 
'hcatalog/core/src/test/java/org/apache/hive/hcatalog/data/HCatDataCheckUtil.java'
 with conflicts.
Going to apply patch with: git apply -p1
/data/hiveptest/working/scratch/build.patch:228: new blank line at EOF.
+
error: patch failed: 
hcatalog/core/src/test/java/org/apache/hive/hcatalog/data/HCatDataCheckUtil.java:55
Falling back to three-way merge...
Applied patch to 
'hcatalog/core/src/test/java/org/apache/hive/hcatalog/data/HCatDataCheckUtil.java'
 with conflicts.
U 
hcatalog/core/src/test/java/org/apache/hive/hcatalog/data/HCatDataCheckUtil.java
warning: 1 line adds whitespace errors.
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-17234
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901796 - PreCommit-HIVE-Build

> HCatLoader breaks when a member is added to a struct-column of a table
> --
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 

[jira] [Commented] (HIVE-21714) Insert overwrite on an acid/mm table is ineffective if the input is empty

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841251#comment-16841251
 ] 

Hive QA commented on HIVE-21714:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968894/HIVE-21714.4.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16056 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17233/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17233/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17233/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968894 - PreCommit-HIVE-Build

> Insert overwrite on an acid/mm table is ineffective if the input is empty
> -
>
> Key: HIVE-21714
> URL: https://issues.apache.org/jira/browse/HIVE-21714
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-21714.1.patch, HIVE-21714.1.patch, 
> HIVE-21714.2.patch, HIVE-21714.3.patch, HIVE-21714.4.patch, 
> HIVE-21714.4.patch, HIVE-21714.4.patch
>
>
> The issue of HIVE-18702 is present for ACID tables as well.





[jira] [Commented] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2019-05-16 Thread Mass Dosage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841230#comment-16841230
 ] 

Mass Dosage commented on HIVE-17794:


I think we've run into this same issue; it would be nice if this patch could be 
incorporated into future Hive versions (our preference would be Hive 2.3.x; I 
can port the code over there if necessary). The failing tests above look like 
normal Hive test-failure noise that probably has nothing to do with this change.

> HCatLoader breaks when a member is added to a struct-column of a table
> --
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Major
> Attachments: HIVE-17794.02.patch, HIVE-17794.03.patch, 
> HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: 

[jira] [Commented] (HIVE-21714) Insert overwrite on an acid/mm table is ineffective if the input is empty

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841227#comment-16841227
 ] 

Hive QA commented on HIVE-21714:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
26s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17233/dev-support/hive-personality.sh
 |
| git revision | master / ac477b6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql itests/hcatalog-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17233/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Insert overwrite on an acid/mm table is ineffective if the input is empty
> -
>
> Key: HIVE-21714
> URL: https://issues.apache.org/jira/browse/HIVE-21714
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-21714.1.patch, HIVE-21714.1.patch, 
> HIVE-21714.2.patch, HIVE-21714.3.patch, HIVE-21714.4.patch, 
> HIVE-21714.4.patch, HIVE-21714.4.patch
>
>
> The issue of HIVE-18702 is present for ACID tables as well.





[jira] [Comment Edited] (HIVE-21714) Insert overwrite on an acid/mm table is ineffective if the input is empty

2019-05-16 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841197#comment-16841197
 ] 

Laszlo Bodor edited comment on HIVE-21714 at 5/16/19 10:40 AM:
---

[~isuller]: testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites is not 
related, flaky and failed elsewhere (tracked at HIVE-21724)
https://issues.apache.org/jira/browse/HIVE-21724?focusedCommentId=16841160&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16841160
but unfortunately, we need a green run


was (Author: abstractdog):
[~isuller]: testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites is not 
related, flaky and failed elsewhere
https://issues.apache.org/jira/browse/HIVE-21724?focusedCommentId=16841160&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16841160
but unfortunately, we need a green run

> Insert overwrite on an acid/mm table is ineffective if the input is empty
> -
>
> Key: HIVE-21714
> URL: https://issues.apache.org/jira/browse/HIVE-21714
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-21714.1.patch, HIVE-21714.1.patch, 
> HIVE-21714.2.patch, HIVE-21714.3.patch, HIVE-21714.4.patch, 
> HIVE-21714.4.patch, HIVE-21714.4.patch
>
>
> The issue of HIVE-18702 is present for ACID tables as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-21714) Insert overwrite on an acid/mm table is ineffective if the input is empty

2019-05-16 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841197#comment-16841197
 ] 

Laszlo Bodor edited comment on HIVE-21714 at 5/16/19 10:40 AM:
---

[~isuller]: testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites is not 
related, flaky and failed elsewhere (tracked at HIVE-21738)
https://issues.apache.org/jira/browse/HIVE-21724?focusedCommentId=16841160&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16841160
but unfortunately, we need a green run


was (Author: abstractdog):
[~isuller]: testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites is not 
related, flaky and failed elsewhere (tracked at HIVE-21724)
https://issues.apache.org/jira/browse/HIVE-21724?focusedCommentId=16841160&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16841160
but unfortunately, we need a green run

> Insert overwrite on an acid/mm table is ineffective if the input is empty
> -
>
> Key: HIVE-21714
> URL: https://issues.apache.org/jira/browse/HIVE-21714
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-21714.1.patch, HIVE-21714.1.patch, 
> HIVE-21714.2.patch, HIVE-21714.3.patch, HIVE-21714.4.patch, 
> HIVE-21714.4.patch, HIVE-21714.4.patch
>
>
> The issue of HIVE-18702 is present for ACID tables as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21738) TestReplAcidTablesBootstrapWithJsonMessage#testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites is flaky

2019-05-16 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841206#comment-16841206
 ] 

Laszlo Bodor commented on HIVE-21738:
-

examples:
https://issues.apache.org/jira/browse/HIVE-21724?focusedCommentId=16841160&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16841160
https://issues.apache.org/jira/browse/HIVE-21714?focusedCommentId=16840185&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16840185

> TestReplAcidTablesBootstrapWithJsonMessage#testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  is flaky
> 
>
> Key: HIVE-21738
> URL: https://issues.apache.org/jira/browse/HIVE-21738
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Priority: Major
> Attachments: maven-test.txt
>
>
> It's been failing intermittently in the recent runs:
> {code}
> [ERROR] 
> testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage)
>   Time elapsed: 680.912 s  <<< ERROR!
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3195)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:227)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:270)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:277)
>   at 
> org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(TestReplicationScenariosAcidTablesBootstrap.java:328)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at 

[jira] [Updated] (HIVE-21738) TestReplAcidTablesBootstrapWithJsonMessage#testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites is flaky

2019-05-16 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21738:

Attachment: maven-test.txt

> TestReplAcidTablesBootstrapWithJsonMessage#testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  is flaky
> 
>
> Key: HIVE-21738
> URL: https://issues.apache.org/jira/browse/HIVE-21738
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Priority: Major
> Attachments: maven-test.txt
>
>
> It's been failing intermittently in the recent runs:
> {code}
> [ERROR] 
> testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage)
>   Time elapsed: 680.912 s  <<< ERROR!
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3195)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:227)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:270)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:277)
>   at 
> org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(TestReplicationScenariosAcidTablesBootstrap.java:328)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at 

[jira] [Commented] (HIVE-21724) Nested ARRAY and STRUCT inside MAP don't work with LazySimpleDeserializeRead

2019-05-16 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841202#comment-16841202
 ] 

Laszlo Bodor commented on HIVE-21724:
-

failing test is flaky, created HIVE-21738 about that

> Nested ARRAY and STRUCT inside MAP don't work with LazySimpleDeserializeRead
> 
>
> Key: HIVE-21724
> URL: https://issues.apache.org/jira/browse/HIVE-21724
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-21724.1.patch, HIVE-21724.2.patch, 
> HIVE-21724.2.patch
>
>
> The logic during vectorized execution that keeps track of how deep we are in 
> the nested structure doesn't work for ARRAYs and STRUCTs embedded inside maps.
> Repro steps (with hive.vectorized.execution.enabled=true):
> {code}
> CREATE TABLE srctable(a map<int, array<int>>) STORED AS TEXTFILE;
> create table desttable(c1 map<int, array<int>>);
> insert into srctable values (map(1, array(1, 2, 3)));
> insert into desttable select a from srctable;
> select * from desttable;
> {code}
> Will produce:
> {code}
> {1:[null]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
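The HIVE-21724 repro above hinges on depth tracking: a LazySimple-style text encoding gives each nesting level its own delimiter byte, so the reader must know how deep it currently is to pick the right separator. The sketch below is plain Python, not the actual `LazySimpleDeserializeRead` logic; the separator bytes follow Hive's conventional defaults for text tables, and the helper name is illustrative. It parses a `map<int, array<int>>` value using level-indexed separators:

```python
# Level-indexed delimiters, as in Hive's default text serialization:
# level 0 = collection items, level 1 = map key/value, level 2 = next depth.
SEPARATORS = ['\x02', '\x03', '\x04', '\x05']

def parse_map_of_arrays(text):
    """Parse map<int, array<int>>: map entries at level 0,
    key/value split at level 1, array elements at level 2."""
    result = {}
    if not text:
        return result
    for entry in text.split(SEPARATORS[0]):          # map entries
        key, value = entry.split(SEPARATORS[1], 1)   # key vs. value
        items = value.split(SEPARATORS[2])           # array elements
        result[int(key)] = [int(i) for i in items]
    return result

# map(1, array(1, 2, 3)) serialized with the delimiters above
encoded = '1' + '\x03' + '\x04'.join(['1', '2', '3'])
assert parse_map_of_arrays(encoded) == {1: [1, 2, 3]}
```

If the reader used the level-1 separator for the array elements (i.e. lost track of its depth, as the bug describes), the whole value string would fail to split into integers and the elements would come back as NULL, matching the observed `{1:[null]}` output.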


[jira] [Updated] (HIVE-21738) TestReplAcidTablesBootstrapWithJsonMessage#testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites is flaky

2019-05-16 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21738:

Description: 
It's been failing intermittently in the recent runs:

{code}
[ERROR] 
testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage)
  Time elapsed: 680.912 s  <<< ERROR!
java.lang.IllegalStateException: Notification events are missing in the meta 
store.
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3195)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
at 
org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
at 
org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
at 
org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
at 
org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:227)
at 
org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
at 
org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227)
at 
org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:270)
at 
org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265)
at 
org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:277)
at 
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(TestReplicationScenariosAcidTablesBootstrap.java:328)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)

[jira] [Commented] (HIVE-21714) Insert overwrite on an acid/mm table is ineffective if the input is empty

2019-05-16 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841197#comment-16841197
 ] 

Laszlo Bodor commented on HIVE-21714:
-

[~isuller]: testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites is not 
related, flaky and failed elsewhere
https://issues.apache.org/jira/browse/HIVE-21724?focusedCommentId=16841160&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16841160
but unfortunately, we need a green run

> Insert overwrite on an acid/mm table is ineffective if the input is empty
> -
>
> Key: HIVE-21714
> URL: https://issues.apache.org/jira/browse/HIVE-21714
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-21714.1.patch, HIVE-21714.1.patch, 
> HIVE-21714.2.patch, HIVE-21714.3.patch, HIVE-21714.4.patch, 
> HIVE-21714.4.patch, HIVE-21714.4.patch
>
>
> The issue of HIVE-18702 is present for ACID tables as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-21735) Upgrade to Avro 1.9.x

2019-05-16 Thread Nandor Kollar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar resolved HIVE-21735.
--
Resolution: Duplicate

> Upgrade to Avro 1.9.x
> -
>
> Key: HIVE-21735
> URL: https://issues.apache.org/jira/browse/HIVE-21735
> Project: Hive
>  Issue Type: Improvement
>Reporter: Nandor Kollar
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21737) Upgrade Avro to version 1.9.0

2019-05-16 Thread Nandor Kollar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar updated HIVE-21737:
-
Description: Avro 1.9.0 was released recently. It brings a lot of fixes 
including a leaner version of Avro without Jackson in the public API. Worth the 
update.  (was: Avro 0.9.0 was released recently. It brings a lot of fixes 
including a leaner version of Avro without Jackson in the public API. Worth the 
update.)

> Upgrade Avro to version 1.9.0
> -
>
> Key: HIVE-21737
> URL: https://issues.apache.org/jira/browse/HIVE-21737
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ismaël Mejía
>Priority: Minor
>
> Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner 
> version of Avro without Jackson in the public API. Worth the update.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21714) Insert overwrite on an acid/mm table is ineffective if the input is empty

2019-05-16 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller updated HIVE-21714:
---
Attachment: HIVE-21714.4.patch

> Insert overwrite on an acid/mm table is ineffective if the input is empty
> -
>
> Key: HIVE-21714
> URL: https://issues.apache.org/jira/browse/HIVE-21714
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-21714.1.patch, HIVE-21714.1.patch, 
> HIVE-21714.2.patch, HIVE-21714.3.patch, HIVE-21714.4.patch, 
> HIVE-21714.4.patch, HIVE-21714.4.patch
>
>
> The issue of HIVE-18702 is present for ACID tables as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21724) Nested ARRAY and STRUCT inside MAP don't work with LazySimpleDeserializeRead

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841160#comment-16841160
 ] 

Hive QA commented on HIVE-21724:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12968886/HIVE-21724.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16057 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
 (batchId=246)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17232/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17232/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17232/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12968886 - PreCommit-HIVE-Build

> Nested ARRAY and STRUCT inside MAP don't work with LazySimpleDeserializeRead
> 
>
> Key: HIVE-21724
> URL: https://issues.apache.org/jira/browse/HIVE-21724
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-21724.1.patch, HIVE-21724.2.patch, 
> HIVE-21724.2.patch
>
>
> The logic during vectorized execution that keeps track of how deep we are in 
> the nested structure doesn't work for ARRAYs and STRUCTs embedded inside maps.
> Repro steps (with hive.vectorized.execution.enabled=true):
> {code}
> CREATE TABLE srctable(a map<int, array<int>>) STORED AS TEXTFILE;
> create table desttable(c1 map<int, array<int>>);
> insert into srctable values (map(1, array(1, 2, 3)));
> insert into desttable select a from srctable;
> select * from desttable;
> {code}
> Will produce:
> {code}
> {1:[null]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21730) HiveStatement.getQueryId throws TProtocolException when response is null.

2019-05-16 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841130#comment-16841130
 ] 

Sankar Hariappan commented on HIVE-21730:
-

Patch committed to master.
Thanks [~maheshk114] for the review!


> HiveStatement.getQueryId throws TProtocolException when response is null.
> -
>
> Key: HIVE-21730
> URL: https://issues.apache.org/jira/browse/HIVE-21730
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21730.01.patch, HIVE-21730.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HiveStatement.getQueryId is failing with below exception if query is closed 
> concurrently.
> {code}
> 24256 2019-05-14T02:09:01,355  INFO [HiveServer2-Background-Pool: 
> Thread-1829] ql.Driver: Executing 
> command(queryId=hive_20190514020858_530a33d9-0b19-4f72-ae08-b631fb4749cb): 
> create table household_demographics
>  24257 stored as orc as
>  24258 select * from household_demographics_txt
>  24259 2019-05-14T02:09:01,356  INFO [HiveServer2-Background-Pool: 
> Thread-1829] hooks.HiveProtoLoggingHook: Received pre-hook notification for: 
> hive_20190514020858_530a33d9-0b19-4f72-ae08-b631fb4749cb
>  24260 2019-05-14T02:09:01,356 ERROR [HiveServer2-Handler-Pool: Thread-131] 
> server.TThreadPoolServer: Thrift error occurred during processing of message.
>  24261 org.apache.thrift.protocol.TProtocolException: Required field 
> 'queryId' is unset! Struct:TGetQueryIdResp(queryId:null)
>   
>   
> 24216,1   
> 10%
>  24260 2019-05-14T02:09:01,356 ERROR [HiveServer2-Handler-Pool: Thread-131] 
> server.TThreadPoolServer: Thrift error occurred during processing of message.
>  24261 org.apache.thrift.protocol.TProtocolException: Required field 
> 'queryId' is unset! Struct:TGetQueryIdResp(queryId:null)
>  24262 at 
> org.apache.hive.service.rpc.thrift.TGetQueryIdResp.validate(TGetQueryIdResp.java:294)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24263 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result.validate(TCLIService.java:18890)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24264 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result$GetQueryId_resultStandardScheme.write(TCLIService.java:18947)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24265 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result$GetQueryId_resultStandardScheme.write(TCLIService.java:18916)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24266 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result.write(TCLIService.java:18867)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24267 at 
> org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53) 
> ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24268 at 
> org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24269 at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>  ~[hive-service-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24270 at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  [hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24271 at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_161]
>  24272 at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_161]
>  24273 at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
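The HIVE-21730 failure mode generalizes: Thrift refuses to serialize a struct whose required field is still null, so the server must substitute a safe default before building the response when the operation has already been closed. The sketch below is plain Python, not the generated Thrift classes; `TGetQueryIdResp` here is a stand-in with the same validate-on-write behavior, and `build_resp` is a hypothetical helper, not a real HiveServer2 method:

```python
class TGetQueryIdResp:
    """Stand-in for the generated Thrift struct with a required queryId."""
    def __init__(self, query_id):
        self.query_id = query_id

    def validate(self):
        # Thrift validates required fields at write time; a null value
        # raises, which is what surfaces as the TProtocolException.
        if self.query_id is None:
            raise ValueError("Required field 'queryId' is unset!")

def build_resp(operation):
    """Build the response; the operation may already be closed, in which
    case its query id is gone and we fall back to an empty string."""
    qid = operation.get('queryId') if operation else None
    return TGetQueryIdResp(qid if qid is not None else "")

build_resp(None).validate()                        # guarded: no longer raises
assert build_resp({'queryId': 'abc'}).query_id == 'abc'
```

The design choice is to fail soft on the server: an empty query id is a valid (if uninformative) response the client can handle, whereas a serialization error tears down the Thrift message mid-write.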


[jira] [Work logged] (HIVE-21730) HiveStatement.getQueryId throws TProtocolException when response is null.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21730?focusedWorklogId=243188&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243188
 ]

ASF GitHub Bot logged work on HIVE-21730:
-

Author: ASF GitHub Bot
Created on: 16/May/19 08:51
Start Date: 16/May/19 08:51
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #629: HIVE-21730: 
HiveStatement.getQueryId throws TProtocolException when response is null.
URL: https://github.com/apache/hive/pull/629
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 243188)
Time Spent: 0.5h  (was: 20m)

> HiveStatement.getQueryId throws TProtocolException when response is null.
> -
>
> Key: HIVE-21730
> URL: https://issues.apache.org/jira/browse/HIVE-21730
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21730.01.patch, HIVE-21730.02.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HiveStatement.getQueryId is failing with below exception if query is closed 
> concurrently.
> {code}
> 24256 2019-05-14T02:09:01,355  INFO [HiveServer2-Background-Pool: 
> Thread-1829] ql.Driver: Executing 
> command(queryId=hive_20190514020858_530a33d9-0b19-4f72-ae08-b631fb4749cb): 
> create table household_demographics
>  24257 stored as orc as
>  24258 select * from household_demographics_txt
>  24259 2019-05-14T02:09:01,356  INFO [HiveServer2-Background-Pool: 
> Thread-1829] hooks.HiveProtoLoggingHook: Received pre-hook notification for: 
> hive_20190514020858_530a33d9-0b19-4f72-ae08-b631fb4749cb
>  24260 2019-05-14T02:09:01,356 ERROR [HiveServer2-Handler-Pool: Thread-131] 
> server.TThreadPoolServer: Thrift error occurred during processing of message.
>  24261 org.apache.thrift.protocol.TProtocolException: Required field 
> 'queryId' is unset! Struct:TGetQueryIdResp(queryId:null)
>   
>   
> 24216,1   
> 10%
>  24260 2019-05-14T02:09:01,356 ERROR [HiveServer2-Handler-Pool: Thread-131] 
> server.TThreadPoolServer: Thrift error occurred during processing of message.
>  24261 org.apache.thrift.protocol.TProtocolException: Required field 
> 'queryId' is unset! Struct:TGetQueryIdResp(queryId:null)
>  24262 at 
> org.apache.hive.service.rpc.thrift.TGetQueryIdResp.validate(TGetQueryIdResp.java:294)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24263 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result.validate(TCLIService.java:18890)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24264 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result$GetQueryId_resultStandardScheme.write(TCLIService.java:18947)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24265 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result$GetQueryId_resultStandardScheme.write(TCLIService.java:18916)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24266 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result.write(TCLIService.java:18867)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24267 at 
> org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53) 
> ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24268 at 
> org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24269 at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>  ~[hive-service-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24270 at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  [hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24271 at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_161]
>  24272 at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_161]
> 

[jira] [Updated] (HIVE-21730) HiveStatement.getQueryId throws TProtocolException when response is null.

2019-05-16 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21730:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

> HiveStatement.getQueryId throws TProtocolException when response is null.
> -
>
> Key: HIVE-21730
> URL: https://issues.apache.org/jira/browse/HIVE-21730
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21730.01.patch, HIVE-21730.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HiveStatement.getQueryId is failing with below exception if query is closed 
> concurrently.
> {code}
> 24256 2019-05-14T02:09:01,355  INFO [HiveServer2-Background-Pool: 
> Thread-1829] ql.Driver: Executing 
> command(queryId=hive_20190514020858_530a33d9-0b19-4f72-ae08-b631fb4749cb): 
> create table household_demographics
>  24257 stored as orc as
>  24258 select * from household_demographics_txt
>  24259 2019-05-14T02:09:01,356  INFO [HiveServer2-Background-Pool: 
> Thread-1829] hooks.HiveProtoLoggingHook: Received pre-hook notification for: 
> hive_20190514020858_530a33d9-0b19-4f72-ae08-b631fb4749cb
>  24260 2019-05-14T02:09:01,356 ERROR [HiveServer2-Handler-Pool: Thread-131] 
> server.TThreadPoolServer: Thrift error occurred during processing of message.
>  24261 org.apache.thrift.protocol.TProtocolException: Required field 
> 'queryId' is unset! Struct:TGetQueryIdResp(queryId:null)
>  24262 at 
> org.apache.hive.service.rpc.thrift.TGetQueryIdResp.validate(TGetQueryIdResp.java:294)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24263 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result.validate(TCLIService.java:18890)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24264 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result$GetQueryId_resultStandardScheme.write(TCLIService.java:18947)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24265 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result$GetQueryId_resultStandardScheme.write(TCLIService.java:18916)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24266 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result.write(TCLIService.java:18867)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24267 at 
> org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53) 
> ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24268 at 
> org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24269 at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>  ~[hive-service-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24270 at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  [hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24271 at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_161]
>  24272 at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_161]
>  24273 at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21730) HiveStatement.getQueryId throws TProtocolException when response is null.

2019-05-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21730?focusedWorklogId=243189=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-243189
 ]

ASF GitHub Bot logged work on HIVE-21730:
-

Author: ASF GitHub Bot
Created on: 16/May/19 08:51
Start Date: 16/May/19 08:51
Worklog Time Spent: 10m 
  Work Description: sankarh commented on pull request #629: HIVE-21730: 
HiveStatement.getQueryId throws TProtocolException when response is null.
URL: https://github.com/apache/hive/pull/629#discussion_r284602959
 
 

 ##
 File path: jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
 ##
 @@ -1015,12 +1016,18 @@ public void setInPlaceUpdateStream(InPlaceUpdateStream 
stream) {
*/
   @LimitedPrivate(value={"Hive and closely related projects."})
   public String getQueryId() throws SQLException {
-if (stmtHandle == null) {
+// Storing it in temp variable as this method is not thread-safe and 
concurrent thread can
+// close this handle and set it to null after checking for null.
+TOperationHandle stmtHandleTmp = stmtHandle;
+if (stmtHandleTmp == null) {
   // If query is not running or already closed.
   return null;
 }
 try {
-  return client.GetQueryId(new TGetQueryIdReq(stmtHandle)).getQueryId();
+  String queryId = client.GetQueryId(new 
TGetQueryIdReq(stmtHandleTmp)).getQueryId();
+
+  // Returns empty string if query was already closed.
 
 Review comment:
   fixed.
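
The snapshot-then-check pattern in the diff above can be sketched as a standalone example. All names here are hypothetical, not Hive's actual fields; this only illustrates the race-avoidance idea, not the real JDBC code:

```java
// Minimal sketch of the fix discussed above: snapshot a shared,
// concurrently-nullable handle into a local before the null check, so a
// concurrent close() between check and use cannot slip in a null.
public class QueryIdSketch {
    // Shared handle; another thread may null it out at any time (close()).
    static volatile String stmtHandle = "op-handle-1";

    static String getQueryId() {
        String handle = stmtHandle; // read the shared field exactly once
        if (handle == null) {
            return null; // query not running or already closed
        }
        // All further work uses the local snapshot, never the shared field.
        return "queryId-for-" + handle;
    }

    public static void main(String[] args) {
        System.out.println(getQueryId()); // queryId-for-op-handle-1
        stmtHandle = null;                // simulate a concurrent close()
        System.out.println(getQueryId()); // null
    }
}
```

Without the local snapshot, a close() between the null check and the use of `stmtHandle` would throw an NPE (or, server-side, produce the unset-`queryId` Thrift error shown in the log).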
 



Issue Time Tracking
---

Worklog Id: (was: 243189)
Time Spent: 40m  (was: 0.5h)

> HiveStatement.getQueryId throws TProtocolException when response is null.
> -
>
> Key: HIVE-21730
> URL: https://issues.apache.org/jira/browse/HIVE-21730
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21730.01.patch, HIVE-21730.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HiveStatement.getQueryId is failing with below exception if query is closed 
> concurrently.
> {code}
> 24256 2019-05-14T02:09:01,355  INFO [HiveServer2-Background-Pool: 
> Thread-1829] ql.Driver: Executing 
> command(queryId=hive_20190514020858_530a33d9-0b19-4f72-ae08-b631fb4749cb): 
> create table household_demographics
>  24257 stored as orc as
>  24258 select * from household_demographics_txt
>  24259 2019-05-14T02:09:01,356  INFO [HiveServer2-Background-Pool: 
> Thread-1829] hooks.HiveProtoLoggingHook: Received pre-hook notification for: 
> hive_20190514020858_530a33d9-0b19-4f72-ae08-b631fb4749cb
>  24260 2019-05-14T02:09:01,356 ERROR [HiveServer2-Handler-Pool: Thread-131] 
> server.TThreadPoolServer: Thrift error occurred during processing of message.
>  24261 org.apache.thrift.protocol.TProtocolException: Required field 
> 'queryId' is unset! Struct:TGetQueryIdResp(queryId:null)
>  24262 at 
> org.apache.hive.service.rpc.thrift.TGetQueryIdResp.validate(TGetQueryIdResp.java:294)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24263 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result.validate(TCLIService.java:18890)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24264 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result$GetQueryId_resultStandardScheme.write(TCLIService.java:18947)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24265 at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetQueryId_result$GetQueryId_resultStandardScheme.write(TCLIService.java:18916)
>  ~[hive-exec-2.1.0.2.6.5.1150-19.jar:2.1.0.2.6.5.1150-19]
>  24266 at 
> 

[jira] [Commented] (HIVE-21724) Nested ARRAY and STRUCT inside MAP don't work with LazySimpleDeserializeRead

2019-05-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841129#comment-16841129
 ] 

Hive QA commented on HIVE-21724:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} serde in master has 193 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
20s{color} | {color:blue} ql in master has 2258 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} serde: The patch generated 24 new + 421 unchanged - 5 
fixed = 445 total (was 426) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17232/dev-support/hive-personality.sh
 |
| git revision | master / c507156 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17232/yetus/diff-checkstyle-serde.txt
 |
| modules | C: serde ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17232/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Nested ARRAY and STRUCT inside MAP don't work with LazySimpleDeserializeRead
> 
>
> Key: HIVE-21724
> URL: https://issues.apache.org/jira/browse/HIVE-21724
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-21724.1.patch, HIVE-21724.2.patch, 
> HIVE-21724.2.patch
>
>
> The logic during vectorized execution that keeps track of how deep we are in 
> the nested structure doesn't work for ARRAYs and STRUCTs embedded inside maps.
> Repro steps (with hive.vectorized.execution.enabled=true):
> {code}
> CREATE TABLE srctable(a map<int,array<int>>) STORED AS TEXTFILE;
> create table desttable(c1 map<int,array<int>>);
> insert into srctable values (map(1, array(1, 2, 3)));
> insert into desttable select a from srctable;
> select * from desttable;
> {code}
> Will produce:
> {code}
> {1:[null]}
> {code}
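
The symptom above comes from separator bookkeeping in the LazySimple text format, where each nesting level uses its own delimiter byte. A toy parser (not Hive's code; the delimiter choices here are illustrative) shows why picking the wrong level's separator loses the inner elements:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy illustration of level-dependent separators in a LazySimple-style
// encoding of map<int, array<int>>: map entries, key/value pairs, and
// array elements are each split with a different separator byte, so the
// deserializer must track how deep it is in the nested structure.
public class NestedSplitSketch {
    static final char L2 = '\u0002'; // separates map entries (toy choice)
    static final char L3 = '\u0003'; // separates key from value
    static final char L4 = '\u0004'; // separates array elements

    static List<int[]> parseMapOfArrays(String encoded) {
        List<int[]> values = new ArrayList<>();
        for (String entry : encoded.split(String.valueOf(L2))) {
            String[] kv = entry.split(String.valueOf(L3));
            // Splitting the value with the wrong level's separator here is
            // exactly the kind of depth-tracking bug that yields {1:[null]}.
            String[] elems = kv[1].split(String.valueOf(L4));
            int[] arr = new int[elems.length];
            for (int i = 0; i < elems.length; i++) {
                arr[i] = Integer.parseInt(elems[i]);
            }
            values.add(arr);
        }
        return values;
    }

    public static void main(String[] args) {
        // One map entry: key 1 -> [1, 2, 3], mirroring the repro above.
        String encoded = "1" + L3 + "1" + L4 + "2" + L4 + "3";
        System.out.println(Arrays.toString(parseMapOfArrays(encoded).get(0)));
    }
}
```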




[jira] [Commented] (HIVE-21620) GROUPBY position alias not working with STREAMTABLE hint

2019-05-16 Thread David Lavati (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841118#comment-16841118
 ] 

David Lavati commented on HIVE-21620:
-

Hi [~dac], 
thank you for your investigation. You can find more info on the process here: 
https://cwiki.apache.org/confluence/display/Hive/HowToContribute

I'm not yet familiar with this part of the code base, but I can give some 
further notes on contributing:
 * We have a green run policy for new commits, which means you need to submit 
your patch in this Jira and an automation will check your code. The patch has 
to be iterated until all tests pass in the precommit job. (Note that some 
tests tend to be flaky, so multiple resubmits of the same patch may be needed 
if the failed tests are unrelated.) See [Creating a 
patch|https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-CreatingaPatch]
 and [Contributing your 
work|https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-ContributingYourWork]
 * Is this resolved in the current Hive version? If this is unique to Hive 2.x 
only, you'll need to submit your patch here for *branch-2*
 * Can you recreate the issue with a few SQL commands from scratch? Adding a 
unit test for previously bad behaviour is advised. You can find more info at 
[Query Unit 
Test|https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-QueryUnitTest]

 

Cheers,
David

> GROUPBY position alias not working with STREAMTABLE hint
> 
>
> Key: HIVE-21620
> URL: https://issues.apache.org/jira/browse/HIVE-21620
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Affects Versions: 2.3.2
>Reporter: Da Cheng
>Priority: Major
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> Hi team,
>   
>  When the STREAMTABLE hint is used for JOINs, the GROUP BY position index 
> fails to skip the hint when resolving the column. Take the following query 
> for example: I wanted 
> to group by 'user', but the '1' index in GROUPBY clause actually points to 
> the hint '_/*+ STREAMTABLE(xyz) */_' other than the actual 'user' column. 
> Hence Hive errors out complaining: "Expression not in GROUP BY key 'user'"
> {code:java}
> select
>      /*+ STREAMTABLE(xyz) */
>      user,
>      sum(score)
>  from 
>      test.roster
>  group by 1
> {code}
> To make the query work, I need to manually skip the hint by using 'group by 
> 2' instead of 'group by 1'. 
>  (Note that the STREAMTABLE hint is dummy in the query since there is no 
> JOIN. It's added just to reproduce the error.)
>   
>  We have made a patch at our local branch and tested it's working fine. If 
> you have seen similar issues, feel free to apply our patch to your branch. 
> Next we will create a PR for the patch for review. Please advise if I missed 
> anything.
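>
The hint-offset behaviour described above can be sketched with a toy resolver (hypothetical, not Hive's parser): if the hint token stays in the select-item list, position 1 resolves to the hint instead of the first real column.

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy sketch of GROUP BY position-alias resolution. The buggy variant
// counts the query hint as select item 1; the fixed variant skips hints
// before applying the 1-based position.
public class PositionAliasSketch {
    static String resolveBuggy(List<String> selectItems, int pos) {
        return selectItems.get(pos - 1); // the hint is counted as item 1
    }

    static String resolveFixed(List<String> selectItems, int pos) {
        List<String> cols = selectItems.stream()
                .filter(s -> !s.startsWith("/*+")) // drop hint tokens
                .collect(Collectors.toList());
        return cols.get(pos - 1);
    }

    public static void main(String[] args) {
        List<String> items =
                List.of("/*+ STREAMTABLE(xyz) */", "user", "sum(score)");
        System.out.println(resolveBuggy(items, 1)); // the hint, not a column
        System.out.println(resolveFixed(items, 1)); // user
    }
}
```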
>   
>   





[jira] [Commented] (HIVE-21709) Count with expression does not work in Parquet

2019-05-16 Thread David Lavati (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841101#comment-16841101
 ] 

David Lavati commented on HIVE-21709:
-

Thank you. A few more things we'll need to do to make this complete:
 * Your example seems like a good unit test. Could you add it as a q file to 
your commit? You can find more info at [Query Unit 
Test|https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-QueryUnitTest]
 * Is this resolved in the current Hive version? (4.0.0-SNAPSHOT) If this is 
unique to Hive 2.x only, you'll need to submit your patch here for *branch-2*
 * We have a green run policy for new commits, which means you need to submit 
your patch in this Jira and an automation will check your code. The patch has 
to be iterated until all tests pass in the precommit job. (Note that some 
tests tend to be flaky, so multiple resubmits of the same patch may be needed 
if the failed tests are unrelated.) See [Creating a 
patch|https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-CreatingaPatch]

> Count with expression does not work in Parquet
> --
>
> Key: HIVE-21709
> URL: https://issues.apache.org/jira/browse/HIVE-21709
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.2
>Reporter: Mainak Ghosh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For a Parquet file with a nested schema, count with an expression as the 
> column name does not work when filtering on another column in the same struct. 
> Here are the steps to reproduce:
> {code:java}
> CREATE TABLE `test_table`( `rtb_win` struct<`impression_id`:string, 
> `pub_id`:string>) ROW FORMAT SERDE 
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS 
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat' 
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> INSERT INTO TABLE test_table SELECT named_struct('impression_id', 'cat', 
> 'pub_id', '2');
> select count(rtb_win.impression_id) from test_table where rtb_win.pub_id ='2';
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
> future versions. Consider using a different execution engine (i.e. spark, 
> tez) or using Hive 1.X releases.
> +--+ 
> | _c0  |
> +--+ 
> | 0    | 
> +--+
> select count(*) from test_table where rtb_win.pub_id ='2';
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
> future versions. Consider using a different execution engine (i.e. spark, 
> tez) or using Hive 1.X releases. 
> +--+ 
> | _c0  | 
> +--+ 
> | 1    | 
> +--+
> {code}
> As you can see the first query returns the wrong result while the second one 
> returns the correct result.
> The issue is a column order mismatch between the actual parquet file 
> (impression_id first and pub_id second) and the Hive prunedCols datastructure 
> (reverse). As a result in the filter we compare with the wrong value and the 
> count returns 0. I have been able to identify the cause of this mismatch.
> I would love to get the code reviewed and merged. Some of the code changes 
> are changes to commits from Ferdinand Xu and Chao Sun.
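
The mismatch described above can be illustrated with a toy sketch (hypothetical, not Hive/Parquet code): the row is materialized in file order, but the filter index comes from a differently ordered pruned-column list, so the predicate compares the wrong field and never matches.

```java
// Toy sketch of the pruned-column order bug: a filter index computed
// against one column ordering is applied to a row stored in another.
public class ColumnOrderSketch {
    // Apply the pub_id = '2' predicate using an index looked up in
    // prunedCols, against a row stored in file order.
    static boolean filterMatches(String[] fileOrderedRow, String[] prunedCols) {
        int filterIdx = java.util.Arrays.asList(prunedCols).indexOf("pub_id");
        return fileOrderedRow[filterIdx].equals("2");
    }

    public static void main(String[] args) {
        String[] row = {"cat", "2"}; // file order: [impression_id, pub_id]
        String[] reversed = {"pub_id", "impression_id"}; // mismatched (bug)
        String[] correct  = {"impression_id", "pub_id"}; // matching (fix)
        // Buggy ordering compares "cat" to "2", so count(...) sees 0 rows.
        System.out.println(filterMatches(row, reversed)); // false
        System.out.println(filterMatches(row, correct));  // true
    }
}
```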




