[2/2] hive git commit: HIVE-18235: Columnstats gather on mm tables: re-enable disabled test (Zoltan Haindrich reviewed by Peter Vary)

2018-02-13 Thread kgyrtkirk
HIVE-18235: Columnstats gather on mm tables: re-enable disabled test (Zoltan Haindrich reviewed by Peter Vary)

Signed-off-by: Zoltan Haindrich 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/1d15990a
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/1d15990a
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/1d15990a

Branch: refs/heads/master
Commit: 1d15990addc9a4913cdcff34e0610723e69bf4f3
Parents: f5c08a9
Author: Zoltan Haindrich 
Authored: Tue Feb 13 14:34:12 2018 +0100
Committer: Zoltan Haindrich 
Committed: Tue Feb 13 14:34:12 2018 +0100

--
 .../test/queries/clientpositive/dp_counter_mm.q |  2 -
 .../clientpositive/llap/dp_counter_mm.q.out | 54 +---
 2 files changed, 36 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/1d15990a/ql/src/test/queries/clientpositive/dp_counter_mm.q
--
diff --git a/ql/src/test/queries/clientpositive/dp_counter_mm.q b/ql/src/test/queries/clientpositive/dp_counter_mm.q
index 5a7b859..4f3b100 100644
--- a/ql/src/test/queries/clientpositive/dp_counter_mm.q
+++ b/ql/src/test/queries/clientpositive/dp_counter_mm.q
@@ -1,5 +1,3 @@
--- remove disable after HIVE-18237
-set hive.stats.column.autogather=false;
 set hive.exec.dynamic.partition.mode=nonstrict;
 set hive.exec.max.dynamic.partitions.pernode=200;
 set hive.exec.max.dynamic.partitions=200;

http://git-wip-us.apache.org/repos/asf/hive/blob/1d15990a/ql/src/test/results/clientpositive/llap/dp_counter_mm.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/dp_counter_mm.q.out b/ql/src/test/results/clientpositive/llap/dp_counter_mm.q.out
index 981f260..8699160 100644
--- a/ql/src/test/results/clientpositive/llap/dp_counter_mm.q.out
+++ b/ql/src/test/results/clientpositive/llap/dp_counter_mm.q.out
@@ -17,10 +17,12 @@ PREHOOK: Output: default@src2
 Stage-1 FILE SYSTEM COUNTERS:
 Stage-1 HIVE COUNTERS:
CREATED_DYNAMIC_PARTITIONS: 57
-   CREATED_FILES: 57
+   CREATED_FILES: 61
DESERIALIZE_ERRORS: 0
RECORDS_IN_Map_1: 500
+   RECORDS_OUT_0: 57
RECORDS_OUT_1_default.src2: 84
+   RECORDS_OUT_INTERMEDIATE_Map_1: 57
 Stage-1 INPUT COUNTERS:
GROUPED_INPUT_SPLITS_Map_1: 1
INPUT_DIRECTORIES_Map_1: 1
@@ -33,10 +35,12 @@ PREHOOK: Output: default@src2
 Stage-1 FILE SYSTEM COUNTERS:
 Stage-1 HIVE COUNTERS:
CREATED_DYNAMIC_PARTITIONS: 64
-   CREATED_FILES: 121
+   CREATED_FILES: 125
DESERIALIZE_ERRORS: 0
RECORDS_IN_Map_1: 500
+   RECORDS_OUT_0: 121
RECORDS_OUT_1_default.src2: 189
+   RECORDS_OUT_INTERMEDIATE_Map_1: 121
 Stage-1 INPUT COUNTERS:
GROUPED_INPUT_SPLITS_Map_1: 1
INPUT_DIRECTORIES_Map_1: 1
@@ -57,10 +61,12 @@ PREHOOK: Output: default@src2
 Stage-1 FILE SYSTEM COUNTERS:
 Stage-1 HIVE COUNTERS:
CREATED_DYNAMIC_PARTITIONS: 121
-   CREATED_FILES: 121
+   CREATED_FILES: 125
DESERIALIZE_ERRORS: 0
RECORDS_IN_Map_1: 500
+   RECORDS_OUT_0: 121
RECORDS_OUT_1_default.src2: 189
+   RECORDS_OUT_INTERMEDIATE_Map_1: 121
 Stage-1 INPUT COUNTERS:
GROUPED_INPUT_SPLITS_Map_1: 1
INPUT_DIRECTORIES_Map_1: 1
@@ -73,10 +79,12 @@ PREHOOK: Output: default@src2
 Stage-1 FILE SYSTEM COUNTERS:
 Stage-1 HIVE COUNTERS:
CREATED_DYNAMIC_PARTITIONS: 63
-   CREATED_FILES: 184
+   CREATED_FILES: 188
DESERIALIZE_ERRORS: 0
RECORDS_IN_Map_1: 500
+   RECORDS_OUT_0: 184
RECORDS_OUT_1_default.src2: 292
+   RECORDS_OUT_INTERMEDIATE_Map_1: 184
 Stage-1 INPUT COUNTERS:
GROUPED_INPUT_SPLITS_Map_1: 1
INPUT_DIRECTORIES_Map_1: 1
@@ -106,11 +114,13 @@ PREHOOK: Output: default@src3
 Stage-2 FILE SYSTEM COUNTERS:
 Stage-2 HIVE COUNTERS:
CREATED_DYNAMIC_PARTITIONS: 121
-   CREATED_FILES: 121
+   CREATED_FILES: 129
DESERIALIZE_ERRORS: 0
RECORDS_IN_Map_1: 500
+   RECORDS_OUT_0: 121
RECORDS_OUT_1_default.src2: 84
RECORDS_OUT_2_default.src3: 105
+   RECORDS_OUT_INTERMEDIATE_Map_1: 121
 Stage-2 INPUT COUNTERS:
GROUPED_INPUT_SPLITS_Map_1: 1
INPUT_DIRECTORIES_Map_1: 1
@@ -126,11 +136,13 @@ PREHOOK: Output: default@src3
 Stage-2 FILE SYSTEM COUNTERS:
 Stage-2 HIVE COUNTERS:
CREATED_DYNAMIC_PARTITIONS: 63
-   CREATED_FILES: 184
+   CREATED_FILES: 192
DESERIALIZE_ERRORS: 0
RECORDS_IN_Map_1: 500
+   RECORDS_OUT_0: 184
RECORDS_OUT_1_default.src2: 84
RECORDS_OUT_2_default.src3: 208
+   RECORDS_OUT_INTERMEDIATE_Map_1: 184
 Stage-2 INPUT COUNTERS:
GROUPED_INPUT_SPLITS_Map_1: 1
INPUT_DIRECTORIES_Map_1: 1
@@ -155,20 +167,23 @@ PREHOOK: Output: default@src2
 Stage-1 FILE SYSTEM COUNTERS:
 Stage-1 HIVE COUNTERS:
CREATED_DYNAMIC_PARTITIONS: 121
-   CREATED_FILES: 121
+   CREATED_FILES: 125

[1/2] hive git commit: HIVE-18238: Driver execution may not have configuration changing sideeffects (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2018-02-13 Thread kgyrtkirk
Repository: hive
Updated Branches:
  refs/heads/master 6356205c7 -> 1d15990ad


HIVE-18238: Driver execution may not have configuration changing sideeffects (Zoltan Haindrich reviewed by Ashutosh Chauhan)

Signed-off-by: Zoltan Haindrich 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/f5c08a95
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/f5c08a95
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/f5c08a95

Branch: refs/heads/master
Commit: f5c08a951763d811289e6f39d6f08dcac36bb45d
Parents: 6356205
Author: Zoltan Haindrich 
Authored: Tue Feb 13 14:33:21 2018 +0100
Committer: Zoltan Haindrich 
Committed: Tue Feb 13 14:33:21 2018 +0100

--
 .../org/apache/hadoop/hive/cli/CliDriver.java   | 24 +++
 .../org/apache/hive/hcatalog/cli/HCatCli.java   |  8 +--
 .../apache/hive/hcatalog/cli/HCatDriver.java| 16 ++---
 .../apache/hive/hcatalog/cli/TestPermsGrp.java  | 20 +++---
 .../hcatalog/pig/TestHCatLoaderEncryption.java  |  5 +-
 .../plugin/TestHiveAuthorizerShowFilters.java   |  3 +-
 .../java/org/apache/hadoop/hive/ql/Driver.java  | 66 ++--
 .../apache/hadoop/hive/ql/DriverFactory.java| 19 ++
 .../java/org/apache/hadoop/hive/ql/IDriver.java |  6 +-
 .../org/apache/hadoop/hive/ql/QueryState.java   | 51 ---
 .../hadoop/hive/ql/hooks/HooksLoader.java   |  8 +--
 .../hadoop/hive/ql/lockmgr/DummyTxnManager.java | 11 +++-
 .../hive/ql/lockmgr/HiveTxnManagerImpl.java | 16 -
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java  |  1 +
 .../ql/processors/AddResourceProcessor.java |  4 ++
 .../hive/ql/processors/CommandProcessor.java|  2 +-
 .../ql/processors/CommandProcessorFactory.java  | 40 ++--
 .../hive/ql/processors/CompileProcessor.java|  4 ++
 .../hive/ql/processors/CryptoProcessor.java |  4 ++
 .../ql/processors/DeleteResourceProcessor.java  |  4 ++
 .../hadoop/hive/ql/processors/DfsProcessor.java | 11 ++--
 .../ql/processors/ListResourceProcessor.java|  4 ++
 .../hive/ql/processors/ReloadProcessor.java |  4 ++
 .../hive/ql/processors/ResetProcessor.java  |  4 ++
 .../hadoop/hive/ql/processors/SetProcessor.java |  3 +
 .../hadoop/hive/ql/txn/compactor/Worker.java|  5 +-
 .../ql/udf/generic/GenericUDTFGetSplits.java|  6 +-
 .../apache/hadoop/hive/ql/TestTxnCommands2.java |  5 +-
 .../hadoop/hive/ql/TxnCommandsBaseForTests.java |  8 +--
 .../hadoop/hive/ql/exec/TestOperators.java  |  2 +-
 .../hadoop/hive/ql/hooks/TestQueryHooks.java|  4 +-
 .../hive/ql/lockmgr/TestDbTxnManager2.java  | 11 ++--
 .../hive/ql/lockmgr/TestDummyTxnManager.java|  9 +++
 .../clientpositive/driver_conf_isolation.q  |  5 ++
 .../special_character_in_tabnames_1.q   |  1 +
 .../clientpositive/driver_conf_isolation.q.out  | 34 ++
 .../test/results/clientpositive/input39.q.out   |  2 +-
 .../hive/service/cli/operation/Operation.java   |  8 +--
 38 files changed, 240 insertions(+), 198 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/f5c08a95/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
--
diff --git a/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java b/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
index e57412a..68741f6 100644
--- a/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
+++ b/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
@@ -181,18 +181,23 @@ public class CliDriver {
       }
     }  else { // local mode
       try {
-        CommandProcessor proc = CommandProcessorFactory.get(tokens, (HiveConf) conf);
-        if (proc instanceof IDriver) {
-          // Let Driver strip comments using sql parser
-          ret = processLocalCmd(cmd, proc, ss);
-        } else {
-          ret = processLocalCmd(cmd_trimmed, proc, ss);
+
+        try (CommandProcessor proc = CommandProcessorFactory.get(tokens, (HiveConf) conf)) {
+          if (proc instanceof IDriver) {
+            // Let Driver strip comments using sql parser
+            ret = processLocalCmd(cmd, proc, ss);
+          } else {
+            ret = processLocalCmd(cmd_trimmed, proc, ss);
+          }
         }
       } catch (SQLException e) {
         console.printError("Failed processing command " + tokens[0] + " " + e.getLocalizedMessage(),
           org.apache.hadoop.util.StringUtils.stringifyException(e));
         ret = 1;
       }
+      catch (Exception e) {
+        throw new RuntimeException(e);
+      }
     }
 
 ss.resetThreadName();
@@ -270,10 +275,7 @@
       ret = 1;
     }

-    int cret = qp.close();
-    if (ret == 0) {
-      ret = cret;
-    }
+    qp.close();

     if (out instanceof
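For context: the heart of this change is that the processor is now closed through
try-with-resources, which (as the diff above implies) means CommandProcessor has become
AutoCloseable, so per-command state is released on every exit path, including exceptions.
A minimal, self-contained sketch of the idiom; the Processor type here is a hypothetical
stand-in, not Hive's CommandProcessor:

public class CloseDemo {
  static class Processor implements AutoCloseable {
    int run(String cmd) {
      return 0; // pretend the command succeeded
    }
    @Override
    public void close() {
      System.out.println("released per-command state");
    }
  }

  public static void main(String[] args) {
    int ret;
    try (Processor proc = new Processor()) { // close() runs even if run() throws
      ret = proc.run("SELECT 1");
    }
    System.out.println("ret=" + ret);
  }
}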
hive git commit: HIVE-18671: Lock not released after Hive on Spark query was cancelled (Yongzhi Chen, reviewed by Aihua Xu)

2018-02-13 Thread ychena
Repository: hive
Updated Branches:
  refs/heads/master 1d15990ad -> 9a02aa86b


HIVE-18671: Lock not released after Hive on Spark query was cancelled (Yongzhi Chen, reviewed by Aihua Xu)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/9a02aa86
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/9a02aa86
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/9a02aa86

Branch: refs/heads/master
Commit: 9a02aa86b9fe4b68681ba1c7129d5028f24791c9
Parents: 1d15990
Author: Yongzhi Chen 
Authored: Tue Feb 13 10:03:53 2018 -0500
Committer: Yongzhi Chen 
Committed: Tue Feb 13 10:24:34 2018 -0500

--
 .../ql/exec/spark/status/RemoteSparkJobMonitor.java |  6 ++
 .../hadoop/hive/ql/exec/spark/TestSparkTask.java| 16 
 2 files changed, 22 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/9a02aa86/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
index 22f7024..fc4e4de 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
@@ -174,6 +174,12 @@ public class RemoteSparkJobMonitor extends SparkJobMonitor {
           done = true;
           rc = 3;
           break;
+        case CANCELLED:
+          console.printInfo("Status: Cancelled");
+          running = false;
+          done = true;
+          rc = 3;
+          break;
         }

         if (!done) {

http://git-wip-us.apache.org/repos/asf/hive/blob/9a02aa86/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java
--
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java b/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java
index 928ecc0..435c6b6 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hive.ql.exec.spark;
 import static org.mockito.Mockito.never;
 import static org.mockito.Mockito.times;
 import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
 
 import java.io.IOException;
 import java.util.ArrayList;
@@ -27,10 +28,14 @@ import java.util.List;
 
 import org.apache.hadoop.hive.common.metrics.common.Metrics;
 import org.apache.hadoop.hive.common.metrics.common.MetricsConstant;
+import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.ql.exec.Task;
+import org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor;
+import org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus;
 import org.apache.hadoop.hive.ql.plan.BaseWork;
 import org.apache.hadoop.hive.ql.plan.MapWork;
 import org.apache.hadoop.hive.ql.plan.SparkWork;
+import org.apache.hive.spark.client.JobHandle.State;
 import org.junit.Assert;
 import org.junit.Test;
 import org.mockito.Mockito;
@@ -81,6 +86,17 @@ public class TestSparkTask {
     Assert.assertEquals(child1.getParentTasks().size(), 0);
   }

+  @Test
+  public void testRemoteSparkCancel() {
+    RemoteSparkJobStatus jobSts = Mockito.mock(RemoteSparkJobStatus.class);
+    when(jobSts.getRemoteJobState()).thenReturn(State.CANCELLED);
+    when(jobSts.isRemoteActive()).thenReturn(true);
+    HiveConf hiveConf = new HiveConf();
+    RemoteSparkJobMonitor remoteSparkJobMonitor = new RemoteSparkJobMonitor(hiveConf, jobSts);
+    Assert.assertEquals(remoteSparkJobMonitor.startMonitor(), 3);
+  }
+
+
   private boolean isEmptySparkWork(SparkWork sparkWork) {
     List<BaseWork> allWorks = sparkWork.getAllWork();
     boolean allWorksIsEmtpy = true;

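For context, a hedged sketch of the failure mode this patch addresses (names and structure
simplified; this is not the Hive source): a monitor loop that only terminates on states it
explicitly handles keeps polling forever, still holding its locks, once the job lands in
CANCELLED. Handling the state lets startMonitor() return rc 3 and the caller unwind:

public class MonitorSketch {
  enum State { RUNNING, SUCCEEDED, FAILED, CANCELLED }

  static int startMonitor(State state) {
    boolean done = false;
    int rc = 0;
    while (!done) {
      switch (state) {
        case SUCCEEDED:
          done = true;     // normal completion
          break;
        case FAILED:
        case CANCELLED:    // the newly handled state; without it the loop never exits
          done = true;
          rc = 3;
          break;
        default:           // RUNNING: keep polling
          break;
      }
    }
    return rc;
  }

  public static void main(String[] args) {
    System.out.println(startMonitor(State.CANCELLED)); // prints 3 instead of hanging
  }
}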


hive git commit: HIVE-18671: Lock not released after Hive on Spark query was cancelled (Yongzhi Chen, reviewed by Aihua Xu)

2018-02-13 Thread ychena
Repository: hive
Updated Branches:
  refs/heads/branch-2 6dbec04dd -> 042296fbc


HIVE-18671: Lock not released after Hive on Spark query was cancelled (Yongzhi Chen, reviewed by Aihua Xu)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/042296fb
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/042296fb
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/042296fb

Branch: refs/heads/branch-2
Commit: 042296fbcd4f3a6dee6f06aa3c997a594bc73391
Parents: 6dbec04
Author: Yongzhi Chen 
Authored: Tue Feb 13 10:03:53 2018 -0500
Committer: Yongzhi Chen 
Committed: Tue Feb 13 10:35:59 2018 -0500

--
 .../ql/exec/spark/status/RemoteSparkJobMonitor.java |  6 ++
 .../hadoop/hive/ql/exec/spark/TestSparkTask.java| 16 
 2 files changed, 22 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/042296fb/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
index dd73f3e..dc6e951 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
@@ -148,6 +148,12 @@ public class RemoteSparkJobMonitor extends SparkJobMonitor {
           done = true;
           rc = 3;
           break;
+        case CANCELLED:
+          console.printInfo("Status: Cancelled");
+          running = false;
+          done = true;
+          rc = 3;
+          break;
         }

         if (!done) {

http://git-wip-us.apache.org/repos/asf/hive/blob/042296fb/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java
--
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java b/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java
index 4c7ec76..3229ea8 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/exec/spark/TestSparkTask.java
@@ -20,11 +20,18 @@ package org.apache.hadoop.hive.ql.exec.spark;
 import static org.mockito.Mockito.never;
 import static org.mockito.Mockito.times;
 import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
 
 import java.io.IOException;
 
 import org.apache.hadoop.hive.common.metrics.common.Metrics;
 import org.apache.hadoop.hive.common.metrics.common.MetricsConstant;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor;
+import org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus;
+import org.apache.hadoop.hive.ql.plan.SparkWork;
+import org.apache.hive.spark.client.JobHandle.State;
+import org.junit.Assert;
 import org.junit.Test;
 import org.mockito.Mockito;
 
@@ -43,4 +50,13 @@ public class TestSparkTask {
     verify(mockMetrics, never()).incrementCounter(MetricsConstant.HIVE_MR_TASKS);
   }

+  @Test
+  public void testRemoteSparkCancel() {
+    RemoteSparkJobStatus jobSts = Mockito.mock(RemoteSparkJobStatus.class);
+    when(jobSts.getRemoteJobState()).thenReturn(State.CANCELLED);
+    when(jobSts.isRemoteActive()).thenReturn(true);
+    HiveConf hiveConf = new HiveConf();
+    RemoteSparkJobMonitor remoteSparkJobMonitor = new RemoteSparkJobMonitor(hiveConf, jobSts);
+    Assert.assertEquals(remoteSparkJobMonitor.startMonitor(), 3);
+  }
 }



hive git commit: HIVE-18665: LLAP: Ignore cache-affinity if the LLAP IO elevator is disabled (Gopal V, reviewed by Sergey Shelukhin)

2018-02-13 Thread gopalv
Repository: hive
Updated Branches:
  refs/heads/master 9a02aa86b -> 5ddd5851f


HIVE-18665: LLAP: Ignore cache-affinity if the LLAP IO elevator is disabled (Gopal V, reviewed by Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/5ddd5851
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/5ddd5851
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/5ddd5851

Branch: refs/heads/master
Commit: 5ddd5851f179f265a7bf912656e1cc4c87a1a7a0
Parents: 9a02aa8
Author: Gopal V 
Authored: Tue Feb 13 10:23:01 2018 -0800
Committer: Gopal V 
Committed: Tue Feb 13 10:23:08 2018 -0800

--
 .../hive/ql/exec/tez/HiveSplitGenerator.java| 10 ++--
 .../apache/hadoop/hive/ql/exec/tez/Utils.java   | 12 -
 .../org/apache/hadoop/hive/ql/plan/MapWork.java | 49 ++--
 3 files changed, 52 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/5ddd5851/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HiveSplitGenerator.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HiveSplitGenerator.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HiveSplitGenerator.java
index f3aa151..57f6c66 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HiveSplitGenerator.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HiveSplitGenerator.java
@@ -97,7 +97,8 @@ public class HiveSplitGenerator extends InputInitializer {
     // Assuming grouping enabled always.
     userPayloadProto = MRInputUserPayloadProto.newBuilder().setGroupingEnabled(true).build();

-    this.splitLocationProvider = Utils.getSplitLocationProvider(conf, LOG);
+    this.splitLocationProvider =
+        Utils.getSplitLocationProvider(conf, work.getCacheAffinity(), LOG);
     LOG.info("SplitLocationProvider: " + splitLocationProvider);

     // Read all credentials into the credentials instance stored in JobConf.
@@ -123,14 +124,15 @@ public class HiveSplitGenerator extends InputInitializer {

     this.jobConf = new JobConf(conf);

-    this.splitLocationProvider = Utils.getSplitLocationProvider(conf, LOG);
-    LOG.info("SplitLocationProvider: " + splitLocationProvider);
-
     // Read all credentials into the credentials instance stored in JobConf.
     ShimLoader.getHadoopShims().getMergedCredentials(jobConf);

     this.work = Utilities.getMapWork(jobConf);

+    this.splitLocationProvider =
+        Utils.getSplitLocationProvider(conf, work.getCacheAffinity(), LOG);
+    LOG.info("SplitLocationProvider: " + splitLocationProvider);
+
     // Events can start coming in the moment the InputInitializer is created. The pruner
     // must be setup and initialized here so that it sets up it's structures to start accepting events.
     // Setting it up in initialize leads to a window where events may come in before the pruner is

http://git-wip-us.apache.org/repos/asf/hive/blob/5ddd5851/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/Utils.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/Utils.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/Utils.java
index b33f027..bc438bb 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/Utils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/Utils.java
@@ -32,11 +32,19 @@ import org.apache.hadoop.mapred.split.SplitLocationProvider;
 import org.slf4j.Logger;

 public class Utils {
-  public static SplitLocationProvider getSplitLocationProvider(Configuration conf, Logger LOG) throws
+
+  public static SplitLocationProvider getSplitLocationProvider(Configuration conf, Logger LOG)
+      throws IOException {
+    // fall back to checking confs
+    return getSplitLocationProvider(conf, true, LOG);
+  }
+
+  public static SplitLocationProvider getSplitLocationProvider(Configuration conf, boolean useCacheAffinity, Logger LOG) throws
       IOException {
     boolean useCustomLocations =
         HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_EXECUTION_MODE).equals("llap")
-        && HiveConf.getBoolVar(conf, HiveConf.ConfVars.LLAP_CLIENT_CONSISTENT_SPLITS);
+        && HiveConf.getBoolVar(conf, HiveConf.ConfVars.LLAP_CLIENT_CONSISTENT_SPLITS)
+        && useCacheAffinity;
     SplitLocationProvider splitLocationProvider;
     LOG.info("SplitGenerator using llap affinitized locations: " + useCustomLocations);
     if (useCustomLocations) {

http://git-wip-us.apache.org/repos/asf/hive/blob/5ddd5851/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java b/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java
hive git commit: HIVE-18688: Vectorization: Vectorizer Reason shouldn't be part of work-plan (Gopal V, reviewed by Ashutosh Chauhan)

2018-02-13 Thread gopalv
Repository: hive
Updated Branches:
  refs/heads/master 5ddd5851f -> 8cf36e733


HIVE-18688: Vectorization: Vectorizer Reason shouldn't be part of work-plan (Gopal V, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/8cf36e73
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/8cf36e73
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/8cf36e73

Branch: refs/heads/master
Commit: 8cf36e733471c760df173efaff3129dc46f7d0de
Parents: 5ddd585
Author: Gopal V 
Authored: Tue Feb 13 10:25:22 2018 -0800
Committer: Gopal V 
Committed: Tue Feb 13 10:25:22 2018 -0800

--
 ql/src/java/org/apache/hadoop/hive/ql/plan/BaseWork.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/8cf36e73/ql/src/java/org/apache/hadoop/hive/ql/plan/BaseWork.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/plan/BaseWork.java b/ql/src/java/org/apache/hadoop/hive/ql/plan/BaseWork.java
index ae7cd57..dc3219b 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/plan/BaseWork.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/plan/BaseWork.java
@@ -90,7 +90,7 @@ public abstract class BaseWork extends AbstractOperatorDesc {
   protected Set<Support> supportSetInUse;
   protected List<String> supportRemovedReasons;

-  private VectorizerReason notVectorizedReason;
+  private transient VectorizerReason notVectorizedReason;

   private boolean groupByVectorOutput;
   private boolean allNative;

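For context, a hedged illustration of the one-word fix above (plain java.io serialization is
used here; Hive serializes plans with Kryo, which likewise skips transient fields by default):
a diagnostic-only field marked transient stays out of the serialized form, so the vectorizer's
"reason" no longer travels with the work-plan.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
  static class Work implements Serializable {
    String name = "Map 1";
    transient String notVectorizedReason = "UDF not supported"; // excluded from serialization
  }

  public static void main(String[] args) throws Exception {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(new Work());
    }
    try (ObjectInputStream ois =
        new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
      Work copy = (Work) ois.readObject();
      System.out.println(copy.name);                // Map 1
      System.out.println(copy.notVectorizedReason); // null: the field was not serialized
    }
  }
}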


hive git commit: HIVE-18569 : Hive Druid indexing not dealing with decimals in correct way. (Nishant Bangarwa via Ashutosh Chauhan)

2018-02-13 Thread hashutosh
Repository: hive
Updated Branches:
  refs/heads/master 35605732b -> 5daad4e44


HIVE-18569 : Hive Druid indexing not dealing with decimals in correct way. (Nishant Bangarwa via Ashutosh Chauhan)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/5daad4e4
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/5daad4e4
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/5daad4e4

Branch: refs/heads/master
Commit: 5daad4e4451e7d181236942e5af85f3cf94c6bad
Parents: 3560573
Author: Ashutosh Chauhan 
Authored: Mon Jan 29 07:48:00 2018 -0800
Committer: Ashutosh Chauhan 
Committed: Tue Feb 13 13:14:45 2018 -0800

--
 .../org/apache/hadoop/hive/conf/HiveConf.java   |  3 +
 .../hadoop/hive/druid/io/DruidOutputFormat.java | 12 ++-
 .../test/queries/clientpositive/druidmini_mv.q  |  4 +-
 .../clientpositive/druid/druidmini_mv.q.out | 98 +++-
 4 files changed, 72 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/5daad4e4/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index adb9b9b..ce96bff 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -2123,6 +2123,9 @@ public class HiveConf extends Configuration {
         "Wait time in ms default to 30 seconds."
     ),
     HIVE_DRUID_BITMAP_FACTORY_TYPE("hive.druid.bitmap.type", "roaring", new PatternSet("roaring", "concise"), "Coding algorithm use to encode the bitmaps"),
+    HIVE_DRUID_APPROX_RESULT("hive.druid.approx.result", false,
+        "Whether to allow approximate results from druid. \n" +
+        "When set to true decimals will be stored as double and druid is allowed to return approximate results for decimal columns."),
     // For HBase storage handler
     HIVE_HBASE_WAL_ENABLED("hive.hbase.wal.enabled", true,
         "Whether writes to HBase should be forced to the write-ahead log. \n" +

http://git-wip-us.apache.org/repos/asf/hive/blob/5daad4e4/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidOutputFormat.java
--
diff --git a/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidOutputFormat.java b/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidOutputFormat.java
index 0977329..8c25d62 100644
--- a/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidOutputFormat.java
+++ b/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidOutputFormat.java
@@ -129,6 +129,7 @@ public class DruidOutputFormat implements HiveOutputFormat
     ImmutableList.Builder<AggregatorFactory> aggregatorFactoryBuilder = ImmutableList.builder();
@@ -145,9 +146,18 @@ public class DruidOutputFormat implements HiveOutputFormat

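The hunk bodies above are truncated in the archive; for context, a hedged sketch of the policy
the new flag implies (names and structure simplified, not the actual DruidOutputFormat code):
decimal columns are indexed as doubles only when approximate results have been explicitly allowed.

public class DecimalMappingSketch {
  static String aggregatorFor(String column, String hiveType, boolean approxResultAllowed) {
    if (hiveType.startsWith("decimal")) {
      if (!approxResultAllowed) {
        throw new UnsupportedOperationException(
            "Druid stores decimals approximately; set hive.druid.approx.result=true to accept that");
      }
      return "doubleSum:" + column; // precision loss is the trade-off the flag opts into
    }
    return "longSum:" + column;
  }

  public static void main(String[] args) {
    System.out.println(aggregatorFor("price", "decimal(10,2)", true)); // doubleSum:price
  }
}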
hive git commit: HIVE-17627 : Use druid scan query instead of the select query. (Nishant Bangarwa via Slim B, Ashutosh Chauhan)

2018-02-13 Thread hashutosh
Repository: hive
Updated Branches:
  refs/heads/master 5daad4e44 -> cf4114e1b


HIVE-17627 : Use druid scan query instead of the select query. (Nishant Bangarwa via Slim B, Ashutosh Chauhan)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/cf4114e1
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/cf4114e1
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/cf4114e1

Branch: refs/heads/master
Commit: cf4114e1b72b0637b92d4d1267ac9b779d48a29a
Parents: 5daad4e
Author: Nishant Bangarwa 
Authored: Tue Jan 23 11:08:00 2018 -0800
Committer: Ashutosh Chauhan 
Committed: Tue Feb 13 13:38:40 2018 -0800

--
 .../druid/io/DruidQueryBasedInputFormat.java|  98 +-
 .../druid/serde/DruidScanQueryRecordReader.java | 102 +++
 .../hadoop/hive/druid/serde/DruidSerDe.java |  49 -
 3 files changed, 225 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/cf4114e1/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidQueryBasedInputFormat.java
--
diff --git a/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidQueryBasedInputFormat.java b/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidQueryBasedInputFormat.java
index 7bdc172..33f6412 100644
--- a/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidQueryBasedInputFormat.java
+++ b/druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidQueryBasedInputFormat.java
@@ -36,6 +36,7 @@ import org.apache.hadoop.hive.druid.DruidStorageHandler;
 import org.apache.hadoop.hive.druid.DruidStorageHandlerUtils;
 import org.apache.hadoop.hive.druid.serde.DruidGroupByQueryRecordReader;
 import org.apache.hadoop.hive.druid.serde.DruidQueryRecordReader;
+import org.apache.hadoop.hive.druid.serde.DruidScanQueryRecordReader;
 import org.apache.hadoop.hive.druid.serde.DruidSelectQueryRecordReader;
 import org.apache.hadoop.hive.druid.serde.DruidTimeseriesQueryRecordReader;
 import org.apache.hadoop.hive.druid.serde.DruidTopNQueryRecordReader;
@@ -68,6 +69,7 @@ import io.druid.query.Druids.SelectQueryBuilder;
 import io.druid.query.LocatedSegmentDescriptor;
 import io.druid.query.Query;
 import io.druid.query.SegmentDescriptor;
+import io.druid.query.scan.ScanQuery;
 import io.druid.query.select.PagingSpec;
 import io.druid.query.select.SelectQuery;
 import io.druid.query.spec.MultipleSpecificSegmentSpec;
@@ -93,6 +95,8 @@ public class DruidQueryBasedInputFormat extends InputFormat
-    } catch (Exception e) {
-      response.close();
-      throw new IOException(org.apache.hadoop.util.StringUtils.stringifyException(e));
-    }
+    final List

[2/2] hive git commit: HIVE-18586: Upgrade Derby to 10.14.1.0 (Janaki Lahorani, reviewed by Aihua Xu)

2018-02-13 Thread aihuaxu
HIVE-18586: Upgrade Derby to 10.14.1.0 (Janaki Lahorani, reviewed by Aihua Xu)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/35605732
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/35605732
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/35605732

Branch: refs/heads/master
Commit: 35605732b2041eee809485718bfd951cdfae0980
Parents: ec7ccc3
Author: Aihua Xu 
Authored: Tue Feb 13 13:06:31 2018 -0800
Committer: Aihua Xu 
Committed: Tue Feb 13 13:06:31 2018 -0800

--
 .../org/apache/hive/hcatalog/DerbyPolicy.java   | 90 
 .../org/apache/hive/hcatalog/DerbyPolicy.java   | 90 
 .../apache/hive/hcatalog/cli/TestPermsGrp.java  |  3 +
 .../mapreduce/TestHCatPartitionPublish.java |  3 +
 .../org/apache/hive/hcatalog/package-info.java  | 22 +
 .../hive/hcatalog/api/TestHCatClient.java   |  4 +
 pom.xml |  2 +-
 .../metastore/TestHiveMetaStoreGetMetaConf.java | 25 --
 .../TestHiveMetaStorePartitionSpecs.java| 26 --
 9 files changed, 213 insertions(+), 52 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/35605732/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java
--
diff --git a/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java b/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java
new file mode 100644
index 0000000..cecf6dc
--- /dev/null
+++ b/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hive.hcatalog;
+
+import org.apache.derby.security.SystemPermission;
+
+import java.security.CodeSource;
+import java.security.Permission;
+import java.security.PermissionCollection;
+import java.security.Policy;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Enumeration;
+import java.util.Iterator;
+
+/**
+ * A security policy that grants usederbyinternals
+ *
+ * <p>
+ *   HCatalog tests use Security Manager to handle exits.  With Derby version 10.14.1, if a
+ *   security manager is configured, embedded Derby requires usederbyinternals permission, and
+ *   that is checked directly using AccessController.checkPermission.  This class will be used to
+ *   setup a security policy to grant usederbyinternals, in tests that use NoExitSecurityManager.
+ * </p>
+ */
+public class DerbyPolicy extends Policy {
+
+  private static PermissionCollection perms;
+
+  public DerbyPolicy() {
+    super();
+    if (perms == null) {
+      perms = new DerbyPermissionCollection();
+      addPermissions();
+    }
+  }
+
+  @Override
+  public PermissionCollection getPermissions(CodeSource codesource) {
+    return perms;
+  }
+
+  private void addPermissions() {
+    SystemPermission systemPermission = new SystemPermission("engine", "usederbyinternals");
+    perms.add(systemPermission);
+  }
+
+  class DerbyPermissionCollection extends PermissionCollection {
+
+    ArrayList<Permission> perms = new ArrayList<Permission>();
+
+    public void add(Permission p) {
+      perms.add(p);
+    }
+
+    public boolean implies(Permission p) {
+      for (Iterator i = perms.iterator(); i.hasNext();) {
+        if (((Permission) i.next()).implies(p)) {
+          return true;
+        }
+      }
+      return false;
+    }
+
+    public Enumeration<Permission> elements() {
+      return Collections.enumeration(perms);
+    }
+
+    public boolean isReadOnly() {
+      return false;
+    }
+  }
+}
+

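A hedged usage sketch inferred from the javadoc above (the test-side wiring is not shown in this
diff, and NoExitSecurityManager is the test helper the javadoc names): install the policy before
the security manager so Derby's usederbyinternals check passes. Assumes this class sits next to
DerbyPolicy in org.apache.hive.hcatalog.

package org.apache.hive.hcatalog;

import java.security.Policy;

public class DerbyPolicyUsage {
  public static void main(String[] args) {
    Policy.setPolicy(new DerbyPolicy());               // grant usederbyinternals first
    System.setSecurityManager(new SecurityManager());  // tests would install NoExitSecurityManager
    // ... exercise embedded Derby / HCatalog test body here ...
  }
}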
http://git-wip-us.apache.org/repos/asf/hive/blob/35605732/hcatalog/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java
--
diff --git a/hcatalog/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java b/hcatalog/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java
new file mode 100644
index 0000000..cecf6dc
--- /dev/null
+++ b/hcatalog/core/src/test/java/org/apache/hive/hcatalog/DerbyPolicy.java

[1/2] hive git commit: HIVE-17735: ObjectStore.addNotificationEvent is leaking queries (Aihua Xu, reviewed by Yongzhi Chen)

2018-02-13 Thread aihuaxu
Repository: hive
Updated Branches:
  refs/heads/master 8cf36e733 -> 35605732b


HIVE-17735: ObjectStore.addNotificationEvent is leaking queries (Aihua Xu, reviewed by Yongzhi Chen)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/ec7ccc3a
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/ec7ccc3a
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/ec7ccc3a

Branch: refs/heads/master
Commit: ec7ccc3a452fa125719ca820b5f751ddd00686ec
Parents: 8cf36e7
Author: Aihua Xu 
Authored: Mon Feb 5 15:35:30 2018 -0800
Committer: Aihua Xu 
Committed: Tue Feb 13 13:04:15 2018 -0800

--
 .../hadoop/hive/metastore/ObjectStore.java  | 42 +++-
 .../hadoop/hive/metastore/TestObjectStore.java  |  2 +-
 2 files changed, 15 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/ec7ccc3a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
--
diff --git a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
index d58ed67..edabaa1 100644
--- a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
+++ b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
@@ -3941,13 +3941,13 @@ public class ObjectStore implements RawStore, Configurable {
     }

     boolean success = false;
-    QueryWrapper queryWrapper = new QueryWrapper();
+    Query query = null;

     try {
       openTransaction();
       LOG.debug("execute removeUnusedColumnDescriptor");

-      Query query = pm.newQuery("select count(1) from " +
+      query = pm.newQuery("select count(1) from " +
         "org.apache.hadoop.hive.metastore.model.MStorageDescriptor where (this.cd == inCD)");
       query.declareParameters("MColumnDescriptor inCD");
       long count = ((Long)query.execute(oldCD)).longValue();
@@ -3960,7 +3960,7 @@ public class ObjectStore implements RawStore, Configurable {
       success = commitTransaction();
       LOG.debug("successfully deleted a CD in removeUnusedColumnDescriptor");
     } finally {
-      rollbackAndCleanup(success, queryWrapper);
+      rollbackAndCleanup(success, query);
     }
   }

@@ -8819,14 +8819,13 @@ public class ObjectStore implements RawStore, Configurable {
   public Function getFunction(String dbName, String funcName) throws MetaException {
     boolean commited = false;
     Function func = null;
+    Query query = null;
     try {
       openTransaction();
       func = convertToFunction(getMFunction(dbName, funcName));
       commited = commitTransaction();
     } finally {
-      if (!commited) {
-        rollbackTransaction();
-      }
+      rollbackAndCleanup(commited, query);
     }
     return func;
   }

@@ -8834,17 +8833,16 @@ public class ObjectStore implements RawStore, Configurable {
   @Override
   public List<Function> getAllFunctions() throws MetaException {
     boolean commited = false;
+    Query query = null;
     try {
       openTransaction();
-      Query query = pm.newQuery(MFunction.class);
+      query = pm.newQuery(MFunction.class);
       List<MFunction> allFunctions = (List<MFunction>) query.execute();
       pm.retrieveAll(allFunctions);
       commited = commitTransaction();
       return convertToFunctions(allFunctions);
     } finally {
-      if (!commited) {
-        rollbackTransaction();
-      }
+      rollbackAndCleanup(commited, query);
     }
   }

@@ -8905,10 +8903,7 @@ public class ObjectStore implements RawStore, Configurable {
       }
       return result;
     } finally {
-      if (!commited) {
-        rollbackAndCleanup(commited, query);
-        return null;
-      }
+      rollbackAndCleanup(commited, query);
     }
   }

@@ -8938,6 +8933,7 @@ public class ObjectStore implements RawStore, Configurable {
       query.setUnique(true);
       // only need to execute it to get db Lock
       query.execute();
+      query.closeAll();
     }).run();
   }

@@ -9003,8 +8999,8 @@ public class ObjectStore implements RawStore, Configurable {
     try {
       openTransaction();
       lockForUpdate();
-      Query objectQuery = pm.newQuery(MNotificationNextId.class);
-      Collection ids = (Collection) objectQuery.execute();
+      query = pm.newQuery(MNotificationNextId.class);
+      Collection ids = (Collection) query.execute();
       MNotificationNextId mNotificationNextId = null;
       boolean needToPersistId;
       if (CollectionUtils.isEmpty(ids)) {
@@ -9533,12 +9529,7 @@ public class ObjectStore implements RawStore, Configurable {
       }
       commited = commitTransaction();
     } finally {
-

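For context, a hedged sketch of the cleanup idiom this commit standardizes on (the real
rollbackAndCleanup is an ObjectStore method operating on javax.jdo.Query; the stand-in
interface below is for illustration only): roll back when the transaction never committed,
and close the query on every path so it cannot leak.

public class CleanupSketch {
  interface Query { void closeAll(); } // stand-in for javax.jdo.Query

  static boolean rolledBack = false;

  static void rollbackTransaction() {
    rolledBack = true; // the real store undoes partial work here
  }

  static void rollbackAndCleanup(boolean commited, Query query) {
    try {
      if (!commited) {
        rollbackTransaction();
      }
    } finally {
      if (query != null) {
        query.closeAll(); // always release datastore resources
      }
    }
  }

  public static void main(String[] args) {
    rollbackAndCleanup(false, () -> System.out.println("query closed"));
    System.out.println("rolledBack=" + rolledBack);
  }
}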
hive git commit: HIVE-18673: ErrorMsg.SPARK_JOB_MONITOR_TIMEOUT isn't formatted correctly (Sahil Takiar, reviewed by Chao Sun)

2018-02-13 Thread stakiar
Repository: hive
Updated Branches:
  refs/heads/master cf4114e1b -> fedefeba6


HIVE-18673: ErrorMsg.SPARK_JOB_MONITOR_TIMEOUT isn't formatted correctly (Sahil Takiar, reviewed by Chao Sun)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/fedefeba
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/fedefeba
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/fedefeba

Branch: refs/heads/master
Commit: fedefeba65e0ce98fe3e5db5c802982ac70acf53
Parents: cf4114e
Author: Sahil Takiar 
Authored: Tue Feb 13 16:07:44 2018 -0800
Committer: Sahil Takiar 
Committed: Tue Feb 13 16:07:44 2018 -0800

--
 ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/fedefeba/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java b/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
index 46d876d..39a613c 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
@@ -557,10 +557,10 @@ public enum ErrorMsg {

   SPARK_GET_JOB_INFO_TIMEOUT(30036,
       "Spark job timed out after {0} seconds while getting job info", true),
-  SPARK_JOB_MONITOR_TIMEOUT(30037, "Job hasn't been submitted after {0}s." +
+  SPARK_JOB_MONITOR_TIMEOUT(30037, "Job hasn''t been submitted after {0}s." +
      " Aborting it.\nPossible reasons include network issues, " +
      "errors in remote driver or the cluster has no available resources, etc.\n" +
-      "Please check YARN or Spark driver's logs for further information.\n" +
+      "Please check YARN or Spark driver''s logs for further information.\n" +
      "The timeout is controlled by " + HiveConf.ConfVars.SPARK_JOB_MONITOR_TIMEOUT + ".", true),
 
   // Various errors when creating Spark client
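For context: these messages are java.text.MessageFormat patterns (note the {0} placeholder),
and in a MessageFormat pattern a lone apostrophe starts a quoted literal that disables
substitution, while '' yields a literal quote. A small self-contained demonstration:

import java.text.MessageFormat;

public class MessageFormatQuotes {
  public static void main(String[] args) {
    System.out.println(MessageFormat.format("Job hasn't been submitted after {0}s.", 60));
    // -> Job hasnt been submitted after {0}s.   (apostrophe eaten, placeholder not expanded)
    System.out.println(MessageFormat.format("Job hasn''t been submitted after {0}s.", 60));
    // -> Job hasn't been submitted after 60s.
  }
}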