hive git commit: HIVE-18627: Errata

2018-02-12 Thread gopalv
Repository: hive
Updated Branches:
  refs/heads/master 0808f7d32 -> 6356205c7


HIVE-18627: Errata


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/6356205c
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/6356205c
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/6356205c

Branch: refs/heads/master
Commit: 6356205c7cfd6cf8972ec3dce8cd89eae9433342
Parents: 0808f7d
Author: Gopal V 
Authored: Mon Feb 12 19:33:53 2018 -0800
Committer: Gopal V 
Committed: Mon Feb 12 19:34:06 2018 -0800

--
 errata.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/6356205c/errata.txt
--
diff --git a/errata.txt b/errata.txt
index d1d95ef..cef9c50 100644
--- a/errata.txt
+++ b/errata.txt
@@ -1,6 +1,7 @@
 Commits with the wrong or no JIRA referenced:
 
 git commit   branch jira   url
+233884620af67e6af72b60629f799a69f5823eb2 master HIVE-18627 https://issues.apache.org/jira/browse/HIVE-18627
 eb0034c0cdcc5f10fd5d7382e2caf787a8003e7a master HIVE-17420 https://issues.apache.org/jira/browse/HIVE-17420
 f1aae85f197de09d4b86143f7f13d5aa21d2eb85 master HIVE-16431 https://issues.apache.org/jira/browse/HIVE-16431
 cbab5b29f26ceb3d4633ade9647ce8bcb2f020a0 master HIVE-16422 https://issues.apache.org/jira/browse/HIVE-16422



hive git commit: HIVE-18660 : PCR doesn't distinguish between partition and virtual columns (Ashutosh Chauhan via Gopal V, Jesus Camacho Rodriguez)

2018-02-12 Thread hashutosh
Repository: hive
Updated Branches:
  refs/heads/master 18779ea07 -> 0808f7d32


HIVE-18660 : PCR doesn't distinguish between partition and virtual columns (Ashutosh Chauhan via Gopal V, Jesus Camacho Rodriguez)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/0808f7d3
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/0808f7d3
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/0808f7d3

Branch: refs/heads/master
Commit: 0808f7d328fe38be27efc78f3a43a0fbd5b2a1e3
Parents: 18779ea
Author: Ashutosh Chauhan 
Authored: Mon Feb 12 14:57:49 2018 -0800
Committer: Ashutosh Chauhan 
Committed: Mon Feb 12 14:57:49 2018 -0800

--
 .../ql/optimizer/pcr/PcrExprProcFactory.java|  3 +-
 .../queries/clientpositive/partition_boolexpr.q |  3 +-
 .../clientpositive/partition_boolexpr.q.out | 49 
 3 files changed, 53 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/0808f7d3/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
index ea042bf..f612cd2 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
@@ -214,7 +214,8 @@ public final class PcrExprProcFactory {
   ExprNodeColumnDesc cd = (ExprNodeColumnDesc) nd;
   PcrExprProcCtx epc = (PcrExprProcCtx) procCtx;
   if (cd.getTabAlias().equalsIgnoreCase(epc.getTabAlias())
-  && cd.getIsPartitionColOrVirtualCol()) {
+  && cd.getIsPartitionColOrVirtualCol()
+  && !VirtualColumn.VIRTUAL_COLUMN_NAMES.contains(cd.getColumn().toUpperCase())) {
 return new NodeInfoWrapper(WalkState.PART_COL, null, cd);
   } else {
 return new NodeInfoWrapper(WalkState.UNKNOWN, null, cd);
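
The guard above is the whole fix: a column now counts as a partition-pruning candidate only if it is flagged as a partition/virtual column and its name is not one of Hive's virtual columns, so predicates over virtual columns such as INPUT__FILE__NAME (exercised in partition_boolexpr.q below) stay in the filter instead of being treated as partition conditions. A minimal standalone sketch of that check, with illustrative names rather than the actual Hive classes:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PartitionColumnCheck {
  // Stand-in for VirtualColumn.VIRTUAL_COLUMN_NAMES; the real set lives in Hive.
  static final Set<String> VIRTUAL_COLUMN_NAMES =
      new HashSet<>(Arrays.asList("INPUT__FILE__NAME", "BLOCK__OFFSET__INSIDE__FILE"));

  // Mirrors the new condition: partition/virtual flag set, but not a virtual column.
  static boolean isPartitionPruningCandidate(String columnName, boolean isPartitionColOrVirtualCol) {
    return isPartitionColOrVirtualCol
        && !VIRTUAL_COLUMN_NAMES.contains(columnName.toUpperCase());
  }

  public static void main(String[] args) {
    System.out.println(isPartitionPruningCandidate("hr", true));                // true: real partition column
    System.out.println(isPartitionPruningCandidate("INPUT__FILE__NAME", true)); // false: virtual column
  }
}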

http://git-wip-us.apache.org/repos/asf/hive/blob/0808f7d3/ql/src/test/queries/clientpositive/partition_boolexpr.q
--
diff --git a/ql/src/test/queries/clientpositive/partition_boolexpr.q 
b/ql/src/test/queries/clientpositive/partition_boolexpr.q
index e18f095..6178aab 100644
--- a/ql/src/test/queries/clientpositive/partition_boolexpr.q
+++ b/ql/src/test/queries/clientpositive/partition_boolexpr.q
@@ -10,4 +10,5 @@ explain select count(1) from srcpart where false;
 explain select count(1) from srcpart where true and hr='11';
 explain select count(1) from srcpart where true or hr='11';
 explain select count(1) from srcpart where false or hr='11';
-explain select count(1) from srcpart where false and hr='11';
\ No newline at end of file
+explain select count(1) from srcpart where false and hr='11';
+explain select count(1) from srcpart where INPUT__FILE__NAME is not null;

http://git-wip-us.apache.org/repos/asf/hive/blob/0808f7d3/ql/src/test/results/clientpositive/partition_boolexpr.q.out
--
diff --git a/ql/src/test/results/clientpositive/partition_boolexpr.q.out 
b/ql/src/test/results/clientpositive/partition_boolexpr.q.out
index b605260..3276a30 100644
--- a/ql/src/test/results/clientpositive/partition_boolexpr.q.out
+++ b/ql/src/test/results/clientpositive/partition_boolexpr.q.out
@@ -177,3 +177,52 @@ STAGE PLANS:
   Processor Tree:
 ListSink
 
+PREHOOK: query: explain select count(1) from srcpart where INPUT__FILE__NAME 
is not null
+PREHOOK: type: QUERY
+POSTHOOK: query: explain select count(1) from srcpart where INPUT__FILE__NAME 
is not null
+POSTHOOK: type: QUERY
+STAGE DEPENDENCIES:
+  Stage-1 is a root stage
+  Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+  Stage: Stage-1
+Map Reduce
+  Map Operator Tree:
+  TableScan
+alias: srcpart
+Statistics: Num rows: 2000 Data size: 21248 Basic stats: COMPLETE 
Column stats: NONE
+Filter Operator
+  predicate: INPUT__FILE__NAME is not null (type: boolean)
+  Statistics: Num rows: 2000 Data size: 21248 Basic stats: 
COMPLETE Column stats: NONE
+  Select Operator
+Statistics: Num rows: 2000 Data size: 21248 Basic stats: 
COMPLETE Column stats: NONE
+Group By Operator
+  aggregations: count()
+  mode: hash
+  outputColumnNames: _col0
+  Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE 
Column stats: NONE
+  Reduce 

[1/2] hive git commit: HIVE-18678 : fix exim for MM tables and reinstate the test (Sergey Shelukhin, reviewed by Eugene Koifman)

2018-02-12 Thread sershe
Repository: hive
Updated Branches:
  refs/heads/master ab33a7b7d -> 18779ea07


HIVE-18678 : fix exim for MM tables and reinstate the test (Sergey Shelukhin, reviewed by Eugene Koifman)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/cb225803
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/cb225803
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/cb225803

Branch: refs/heads/master
Commit: cb22580357aeaf8bb1beb4618669d8c03e9f9cc1
Parents: ab33a7b
Author: sergey 
Authored: Mon Feb 12 11:48:22 2018 -0800
Committer: sergey 
Committed: Mon Feb 12 11:48:22 2018 -0800

--
 .../test/resources/testconfiguration.properties |   2 +-
 .../apache/hadoop/hive/ql/exec/CopyTask.java|   4 +-
 .../apache/hadoop/hive/ql/exec/MoveTask.java|  36 +-
 .../hive/ql/parse/ImportSemanticAnalyzer.java   |  25 +-
 ql/src/test/queries/clientpositive/mm_exim.q|   8 +-
 .../results/clientpositive/llap/mm_exim.q.out   | 556 +++
 6 files changed, 602 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/cb225803/itests/src/test/resources/testconfiguration.properties
--
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index 974bfac..391170f 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -35,7 +35,6 @@ disabled.query.files=ql_rewrite_gbtoidx.q,\
   ql_rewrite_gbtoidx_cbo_2.q,\
   rcfile_merge1.q,\
   stats_filemetadata.q,\
-  mm_exim.q,\
   cbo_rp_insert.q,\
   cbo_rp_lineage2.q
 
@@ -585,6 +584,7 @@ minillaplocal.query.files=\
   mapjoin_hint.q,\
   mapjoin_emit_interval.q,\
   mergejoin_3way.q,\
+  mm_exim.q,\
   mrr.q,\
   multiMapJoin1.q,\
   multiMapJoin2.q,\

http://git-wip-us.apache.org/repos/asf/hive/blob/cb225803/ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java
index 1f5e25f..eee5e66 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/CopyTask.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.hive.common.FileUtils;
 import org.apache.hadoop.hive.ql.DriverContext;
+import org.apache.hadoop.hive.ql.parse.repl.dump.io.FileOperations;
 import org.apache.hadoop.hive.ql.plan.CopyWork;
 import org.apache.hadoop.hive.ql.plan.api.StageType;
 import org.apache.hadoop.util.StringUtils;
@@ -61,6 +62,7 @@ public class CopyTask extends Task implements 
Serializable {
   protected int copyOnePath(Path fromPath, Path toPath) {
 FileSystem dstFs = null;
 try {
+  Utilities.FILE_OP_LOGGER./**/debug("Copying data from {} to {} " + fromPath);
   console.printInfo("Copying data from " + fromPath.toString(), " to "
   + toPath.toString());
 
@@ -85,7 +87,7 @@ public class CopyTask extends Task implements 
Serializable {
   for (FileStatus oneSrc : srcs) {
 String oneSrcPathStr = oneSrc.getPath().toString();
 console.printInfo("Copying file: " + oneSrcPathStr);
-LOG.debug("Copying file: {}", oneSrcPathStr);
+Utilities.FILE_OP_LOGGER.debug("Copying file {} to {}", oneSrcPathStr, toPath);
 if (!FileUtils.copy(srcFs, oneSrc.getPath(), dstFs, toPath,
 false, // delete source
 true, // overwrite destination
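
The logging change above routes copy messages through a dedicated file-operation logger and uses SLF4J {} placeholders, so argument formatting is deferred until DEBUG is known to be enabled. A small sketch of that pattern, assuming the slf4j-api dependency Hive already ships; the logger name here is illustrative, not the actual Utilities.FILE_OP_LOGGER:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FileOpLogging {
  // Stand-in for a shared file-operation logger.
  private static final Logger FILE_OP_LOGGER = LoggerFactory.getLogger("FileOperations");

  public static void main(String[] args) {
    String from = "hdfs://nn/tmp/staging/000000_0";
    String to = "hdfs://nn/warehouse/t";
    // Placeholders avoid building the message string when DEBUG is disabled.
    FILE_OP_LOGGER.debug("Copying file {} to {}", from, to);
  }
}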

http://git-wip-us.apache.org/repos/asf/hive/blob/cb225803/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java
index 4e804ba..40eb659 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java
@@ -351,22 +351,7 @@ public class MoveTask extends Task implements 
Serializable {
   // Next we do this for tables and partitions
   LoadTableDesc tbd = work.getLoadTableWork();
   if (tbd != null) {
-StringBuilder mesg = new StringBuilder("Loading data to table ")
-.append( tbd.getTable().getTableName());
-if (tbd.getPartitionSpec().size() > 0) {
-  mesg.append(" partition (");
-  Map partSpec = tbd.getPartitionSpec();
-  for (String key: partSpec.keySet()) {
-mesg.append(key).append('=').append(partSpec.get(key)).append(", 

[2/2] hive git commit: HIVE-18492 : Wrong argument in the WorkloadManager.resetAndQueryKill() (Oleg Danilov, reviewed by Prasanth Jayachandran and Sergey Shelukhin)

2018-02-12 Thread sershe
HIVE-18492 : Wrong argument in the WorkloadManager.resetAndQueryKill() (Oleg Danilov, reviewed by Prasanth Jayachandran and Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/18779ea0
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/18779ea0
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/18779ea0

Branch: refs/heads/master
Commit: 18779ea076ee1dfae0ed0628ce43efc046d7cf11
Parents: cb22580
Author: sergey 
Authored: Mon Feb 12 11:58:07 2018 -0800
Committer: sergey 
Committed: Mon Feb 12 11:58:07 2018 -0800

--
 .../org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java  | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/18779ea0/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
index 915b016..25922d9 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
@@ -2127,7 +2127,13 @@ public class WorkloadManager extends 
TezSessionPoolSession.AbstractTriggerValida
   PoolState poolState = pools.get(poolName);
   if (poolState != null) {
 poolState.getSessions().remove(toKill);
-poolState.getInitializingSessions().remove(toKill);
+Iterator iter = poolState.getInitializingSessions().iterator();
+while (iter.hasNext()) {
+  if (iter.next().session == toKill) {
+iter.remove();
+break;
+  }
+}
   }
 }
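
The original one-liner called remove(toKill) on the initializing-session list, but that list holds wrapper objects rather than the sessions themselves, so the call never matched anything; the replacement walks the list and compares the wrapped reference. A self-contained sketch of the pattern, with illustrative class and field names rather than the actual Hive types:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class InitializingSessionRemoval {
  static class Session {}

  // Stand-in for the wrapper kept in getInitializingSessions().
  static class SessionInitContext {
    final Session session;
    SessionInitContext(Session session) { this.session = session; }
  }

  static void removeInitializing(List<SessionInitContext> initializing, Session toKill) {
    Iterator<SessionInitContext> iter = initializing.iterator();
    while (iter.hasNext()) {
      if (iter.next().session == toKill) {   // compare the wrapped session, not the wrapper
        iter.remove();
        break;
      }
    }
  }

  public static void main(String[] args) {
    Session toKill = new Session();
    List<SessionInitContext> initializing = new ArrayList<>();
    initializing.add(new SessionInitContext(toKill));

    initializing.remove(toKill);             // the old call: a no-op, wrong element type
    System.out.println(initializing.size()); // 1

    removeInitializing(initializing, toKill);
    System.out.println(initializing.size()); // 0
  }
}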
 



hive git commit: HIVE-18575 : ACID properties usage in jobconf is ambiguous for MM tables (Sergey Shelukhin, reviewed by Eugene Koifman)

2018-02-12 Thread sershe
Repository: hive
Updated Branches:
  refs/heads/master 00a8e1a13 -> ab33a7b7d


HIVE-18575 : ACID properties usage in jobconf is ambiguous for MM tables (Sergey Shelukhin, reviewed by Eugene Koifman)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/ab33a7b7
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/ab33a7b7
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/ab33a7b7

Branch: refs/heads/master
Commit: ab33a7b7decb03ef378b00c11b813b12e66f7be7
Parents: 00a8e1a
Author: sergey 
Authored: Mon Feb 12 11:26:52 2018 -0800
Committer: sergey 
Committed: Mon Feb 12 11:26:52 2018 -0800

--
 .../org/apache/hadoop/hive/conf/HiveConf.java   |  5 +-
 .../mapreduce/FosterStorageHandler.java |  6 +-
 .../hive/hcatalog/streaming/HiveEndPoint.java   |  2 +-
 .../streaming/mutate/client/lock/Lock.java  |  4 +-
 .../hive/hcatalog/streaming/TestStreaming.java  |  2 +-
 .../streaming/mutate/StreamingAssert.java   |  2 +-
 .../hive/ql/txn/compactor/TestCompactor.java|  2 +-
 .../hive/llap/io/api/impl/LlapRecordReader.java |  3 +-
 .../llap/io/encoded/OrcEncodedDataReader.java   |  2 +-
 .../org/apache/hadoop/hive/ql/exec/DDLTask.java |  2 +-
 .../apache/hadoop/hive/ql/exec/FetchTask.java   |  4 +-
 .../hadoop/hive/ql/exec/SMBMapJoinOperator.java |  4 +-
 .../hadoop/hive/ql/exec/mr/MapredLocalTask.java |  4 +-
 .../org/apache/hadoop/hive/ql/io/AcidUtils.java | 80 ++--
 .../hadoop/hive/ql/io/HiveInputFormat.java  |  9 ++-
 .../hadoop/hive/ql/io/orc/OrcInputFormat.java   | 17 ++---
 .../apache/hadoop/hive/ql/io/orc/OrcSplit.java  | 14 ++--
 .../io/orc/VectorizedOrcAcidRowBatchReader.java |  2 +-
 .../ql/io/orc/VectorizedOrcInputFormat.java |  3 +-
 .../hadoop/hive/ql/lockmgr/DbTxnManager.java| 10 +--
 .../apache/hadoop/hive/ql/metadata/Hive.java|  4 +-
 .../BucketingSortingReduceSinkOptimizer.java|  2 +-
 .../hive/ql/optimizer/GenMapRedUtils.java   |  8 +-
 .../hive/ql/optimizer/physical/Vectorizer.java  | 15 ++--
 .../hive/ql/parse/DDLSemanticAnalyzer.java  |  2 +-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java  | 12 +--
 .../hive/ql/parse/repl/dump/TableExport.java|  4 +-
 .../hadoop/hive/ql/plan/TableScanDesc.java  | 18 +++--
 .../apache/hadoop/hive/ql/stats/Partish.java|  2 +-
 .../hive/ql/txn/compactor/CompactorMR.java  |  5 +-
 .../apache/hadoop/hive/ql/io/TestAcidUtils.java |  4 +-
 .../hive/ql/io/orc/TestInputOutputFormat.java   | 13 ++--
 .../hive/ql/io/orc/TestOrcRawRecordMerger.java  | 18 +++--
 .../TestVectorizedOrcAcidRowBatchReader.java|  2 +-
 .../hive/metastore/LockComponentBuilder.java|  2 +-
 35 files changed, 151 insertions(+), 137 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/ab33a7b7/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 67e22f6..adb9b9b 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -1337,8 +1337,9 @@ public class HiveConf extends Configuration {
 HIVE_SCHEMA_EVOLUTION("hive.exec.schema.evolution", true,
 "Use schema evolution to convert self-describing file format's data to 
the schema desired by the reader."),
 
-HIVE_ACID_TABLE_SCAN("hive.acid.table.scan", false,
-"internal usage only -- do transaction (ACID) table scan.", true),
+/** Don't use this directly - use AcidUtils! */
+HIVE_TRANSACTIONAL_TABLE_SCAN("hive.transactional.table.scan", false,
+"internal usage only -- do transaction (ACID or insert-only) table 
scan.", true),
 
 HIVE_TRANSACTIONAL_NUM_EVENTS_IN_MEMORY("hive.transactional.events.mem", 
1000,
 "Vectorized ACID readers can often load all the delete events from all 
the delete deltas\n"

http://git-wip-us.apache.org/repos/asf/hive/blob/ab33a7b7/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FosterStorageHandler.java
--
diff --git 
a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FosterStorageHandler.java
 
b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FosterStorageHandler.java
index 5ee8aad..195eaa3 100644
--- 
a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FosterStorageHandler.java
+++ 
b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FosterStorageHandler.java
@@ -134,10 +134,8 @@ public class FosterStorageHandler extends 
DefaultStorageHandler {
 boolean isTransactionalTable = 

hive git commit: HIVE-18674 : update Hive to use ORC 1.4.3 (Sergey Shelukhin, reviewed by Gopal Vijayaraghavan)

2018-02-12 Thread sershe
Repository: hive
Updated Branches:
  refs/heads/master 1eddbc06a -> 00a8e1a13


HIVE-18674 : update Hive to use ORC 1.4.3 (Sergey Shelukhin, reviewed by Gopal Vijayaraghavan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/00a8e1a1
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/00a8e1a1
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/00a8e1a1

Branch: refs/heads/master
Commit: 00a8e1a131a2038cf1531f3147d73ef533154aad
Parents: 1eddbc0
Author: sergey 
Authored: Mon Feb 12 11:11:38 2018 -0800
Committer: sergey 
Committed: Mon Feb 12 11:11:55 2018 -0800

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/00a8e1a1/pom.xml
--
diff --git a/pom.xml b/pom.xml
index bd19ca3..5ae63da 100644
--- a/pom.xml
+++ b/pom.xml
@@ -182,7 +182,7 @@
 0.9.3
 2.10.0
 2.3
-1.4.2
+1.4.3
 1.10.19
 2.0.0-M5
 4.0.52.Final



hive git commit: HIVE-18550: Keep the hbase table name property as hbase.table.name (Aihua Xu, reviewed by Yongzhi Chen)

2018-02-12 Thread aihuaxu
Repository: hive
Updated Branches:
  refs/heads/master fa14a4365 -> 1eddbc06a


HIVE-18550: Keep the hbase table name property as hbase.table.name (Aihua Xu, reviewed by Yongzhi Chen)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/1eddbc06
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/1eddbc06
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/1eddbc06

Branch: refs/heads/master
Commit: 1eddbc06a6224cb860ecb2f331cb2462a57b228b
Parents: fa14a43
Author: Aihua Xu 
Authored: Fri Jan 26 15:30:52 2018 -0800
Committer: Aihua Xu 
Committed: Mon Feb 12 11:03:25 2018 -0800

--
 .../hadoop/hive/hbase/HiveHFileOutputFormat.java  | 14 +-
 hbase-handler/src/test/queries/positive/hbase_bulk.q  |  2 +-
 .../src/test/queries/positive/hbase_handler_bulk.q|  4 ++--
 .../test/results/positive/hbase_handler_bulk.q.out|  8 
 4 files changed, 20 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/1eddbc06/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
--
diff --git 
a/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
 
b/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
index d8dad06..4fa0272 100644
--- 
a/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
+++ 
b/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
@@ -42,6 +42,7 @@ import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hive.common.FileUtils;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
 import org.apache.hadoop.hive.ql.exec.FileSinkOperator.RecordWriter;
 import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
 import org.apache.hadoop.hive.shims.ShimLoader;
@@ -64,7 +65,8 @@ public class HiveHFileOutputFormat extends
 HiveOutputFormat {
 
   public static final String HFILE_FAMILY_PATH = "hfile.family.path";
-
+  public static final String OUTPUT_TABLE_NAME_CONF_KEY =
+  "hbase.mapreduce.hfileoutputformat.table.name";
   static final Logger LOG = 
LoggerFactory.getLogger(HiveHFileOutputFormat.class.getName());
 
   private
@@ -95,6 +97,16 @@ public class HiveHFileOutputFormat extends
 Properties tableProperties,
 final Progressable progressable) throws IOException {
 
+String hbaseTableName = jc.get(HBaseSerDe.HBASE_TABLE_NAME);
+if (hbaseTableName == null) {
+  hbaseTableName = tableProperties.getProperty(hive_metastoreConstants.META_TABLE_NAME);
+  hbaseTableName = hbaseTableName.toLowerCase();
+  if (hbaseTableName.startsWith(HBaseStorageHandler.DEFAULT_PREFIX)) {
+hbaseTableName = hbaseTableName.substring(HBaseStorageHandler.DEFAULT_PREFIX.length());
+  }
+}
+jc.set(OUTPUT_TABLE_NAME_CONF_KEY, hbaseTableName);
+
 // Read configuration for the target path, first from jobconf, then from 
table properties
 String hfilePath = getFamilyPath(jc, tableProperties);
 if (hfilePath == null) {
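
The new block resolves the HBase table name in two steps: an explicit hbase.table.name wins, otherwise the name is derived from the metastore table name by lower-casing it and stripping the default-database prefix, and the result is stored under hbase.mapreduce.hfileoutputformat.table.name. A simplified sketch of that fallback; the property keys mirror the patch, while the surrounding types are plain stand-ins and the "name"/"default." constant values are assumptions:

import java.util.Properties;

public class HBaseTableNameFallback {
  static final String HBASE_TABLE_NAME = "hbase.table.name";
  // Assumed values for the Hive constants referenced in the patch:
  static final String META_TABLE_NAME = "name";      // hive_metastoreConstants.META_TABLE_NAME
  static final String DEFAULT_PREFIX = "default.";   // HBaseStorageHandler.DEFAULT_PREFIX

  static String resolveHBaseTableName(Properties jobConf, Properties tableProperties) {
    String hbaseTableName = jobConf.getProperty(HBASE_TABLE_NAME);
    if (hbaseTableName == null) {
      // Fall back to the metastore table name, lower-cased, without the default-db prefix.
      hbaseTableName = tableProperties.getProperty(META_TABLE_NAME).toLowerCase();
      if (hbaseTableName.startsWith(DEFAULT_PREFIX)) {
        hbaseTableName = hbaseTableName.substring(DEFAULT_PREFIX.length());
      }
    }
    return hbaseTableName;
  }

  public static void main(String[] args) {
    Properties tableProps = new Properties();
    tableProps.setProperty(META_TABLE_NAME, "default.hbsort");
    // No explicit hbase.table.name set, so the derived name "hbsort" is used.
    System.out.println(resolveHBaseTableName(new Properties(), tableProps));
  }
}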

http://git-wip-us.apache.org/repos/asf/hive/blob/1eddbc06/hbase-handler/src/test/queries/positive/hbase_bulk.q
--
diff --git a/hbase-handler/src/test/queries/positive/hbase_bulk.q 
b/hbase-handler/src/test/queries/positive/hbase_bulk.q
index 5e0c14e..475aafc 100644
--- a/hbase-handler/src/test/queries/positive/hbase_bulk.q
+++ b/hbase-handler/src/test/queries/positive/hbase_bulk.q
@@ -9,7 +9,7 @@ create table hbsort(key string, val string, val2 string)
 stored as
 INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
 OUTPUTFORMAT 'org.apache.hadoop.hive.hbase.HiveHFileOutputFormat'
-TBLPROPERTIES ('hfile.family.path' = 
'/tmp/hbsort/cf','hbase.mapreduce.hfileoutputformat.table.name'='hbsort');
+TBLPROPERTIES ('hfile.family.path' = '/tmp/hbsort/cf');
 
 -- this is a dummy table used for controlling how the input file
 -- for TotalOrderPartitioner is created

http://git-wip-us.apache.org/repos/asf/hive/blob/1eddbc06/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q
--
diff --git a/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q 
b/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q
index 5ac4dc8..d02a61f 100644
--- a/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q
+++ b/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q
@@ -6,7 +6,7 @@ drop table if exists hb_target;
 create table 

hive git commit: HIVE-18598: Disallow NOT NULL constraints to be ENABLED/ENFORCED with EXTERNAL table(Vineet Garg, reviewed by Ashutosh Chauhan)

2018-02-12 Thread vgarg
Repository: hive
Updated Branches:
  refs/heads/master 887233d28 -> fa14a4365


HIVE-18598: Disallow NOT NULL constraints to be ENABLED/ENFORCED with EXTERNAL table (Vineet Garg, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/fa14a436
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/fa14a436
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/fa14a436

Branch: refs/heads/master
Commit: fa14a436555d132fdab42e284538c19444f83b8c
Parents: 887233d
Author: Vineet Garg 
Authored: Mon Feb 12 10:54:37 2018 -0800
Committer: Vineet Garg 
Committed: Mon Feb 12 10:54:37 2018 -0800

--
 .../hive/ql/parse/BaseSemanticAnalyzer.java | 11 ++
 .../hive/ql/parse/DDLSemanticAnalyzer.java  | 40 +++-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java  |  7 
 .../alter_external_with_constraint.q|  3 ++
 .../alter_tableprops_external_with_constraint.q |  3 ++
 .../create_external_with_constraint.q   |  1 +
 .../alter_external_with_constraint.q.out|  9 +
 ...er_tableprops_external_with_constraint.q.out |  9 +
 .../create_external_with_constraint.q.out   |  1 +
 9 files changed, 83 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/fa14a436/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
index 8a1bfd2..d18dba5 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
@@ -906,6 +906,17 @@ public abstract class BaseSemanticAnalyzer {
 }
   }
 
+  protected boolean hasEnabledOrValidatedConstraints(List<SQLNotNullConstraint> notNullConstraints){
+if(notNullConstraints != null) {
+  for (SQLNotNullConstraint nnC : notNullConstraints) {
+if (nnC.isEnable_cstr() || nnC.isValidate_cstr()) {
+  return true;
+}
+  }
+}
+return false;
+  }
+
   private static void checkColumnName(String columnName) throws 
SemanticException {
 if (VirtualColumn.VIRTUAL_COLUMN_NAMES.contains(columnName.toUpperCase())) 
{
   throw new 
SemanticException(ErrorMsg.INVALID_COLUMN_NAME.getMsg(columnName));

http://git-wip-us.apache.org/repos/asf/hive/blob/fa14a436/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
index b766791..834cb0c 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
@@ -96,6 +96,7 @@ import org.apache.hadoop.hive.ql.metadata.Hive;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.metadata.HiveUtils;
 import org.apache.hadoop.hive.ql.metadata.InvalidTableException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
 import org.apache.hadoop.hive.ql.metadata.Partition;
 import org.apache.hadoop.hive.ql.metadata.Table;
 import org.apache.hadoop.hive.ql.parse.authorization.AuthorizationParseUtils;
@@ -1895,6 +1896,26 @@ public class DDLSemanticAnalyzer extends 
BaseSemanticAnalyzer {
 }
   }
 
+  private boolean hasConstraintsEnabled(final String tblName) throws 
SemanticException{
+
+NotNullConstraint nnc = null;
+try {
+  // retrieve enabled NOT NULL constraint from metastore
+  nnc = Hive.get().getEnabledNotNullConstraints(
+  db.getDatabaseCurrent().getName(), tblName);
+} catch (Exception e) {
+  if (e instanceof SemanticException) {
+throw (SemanticException) e;
+  } else {
+throw (new RuntimeException(e));
+  }
+}
+if(nnc != null  && !nnc.getNotNullConstraints().isEmpty()) {
+  return true;
+}
+return false;
+  }
+
   private void analyzeAlterTableProps(String[] qualified, HashMap partSpec,
   ASTNode ast, boolean expectView, boolean isUnset) throws 
SemanticException {
 
@@ -1919,7 +1940,17 @@ public class DDLSemanticAnalyzer extends 
BaseSemanticAnalyzer {
   throw new SemanticException("AlterTable " + entry.getKey() + " 
failed with value "
   + entry.getValue());
 }
-  } else {
+  }
+  // if table is being modified to be external we need to make sure 
existing table
+  // doesn't have enabled constraint since constraints are disallowed with 

[08/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets2.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets2.q.out 
b/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets2.q.out
index 1877bba..d8da29a 100644
--- 
a/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets2.q.out
+++ 
b/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets2.q.out
@@ -133,18 +133,18 @@ STAGE PLANS:
 aggregators: VectorUDAFCountMerge(col 2:bigint) -> bigint
 className: VectorGroupByOperator
 groupByMode: PARTIALS
-keyExpressions: col 0:string, col 1:string, 
ConstantVectorExpression(val 0) -> 3:int
+keyExpressions: col 0:string, col 1:string, 
ConstantVectorExpression(val 0) -> 3:bigint
 native: false
 vectorProcessingMode: STREAMING
 projectedOutputColumnNums: [0]
-keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 
(type: int)
+keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 
(type: bigint)
 mode: partials
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 24 Data size: 8832 Basic stats: COMPLETE 
Column stats: NONE
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: bigint)
   Reduce Sink Vectorization:
   className: VectorReduceSinkMultiKeyOperator
   keyColumnNums: [0, 1, 2]
@@ -165,7 +165,7 @@ STAGE PLANS:
 vectorized: true
 rowBatchContext:
 dataColumnCount: 4
-dataColumns: KEY._col0:string, KEY._col1:string, 
KEY._col2:int, VALUE._col0:bigint
+dataColumns: KEY._col0:string, KEY._col1:string, 
KEY._col2:bigint, VALUE._col0:bigint
 partitionColumnCount: 0
 scratchColumnTypeNames: []
 Reduce Operator Tree:
@@ -175,11 +175,11 @@ STAGE PLANS:
 aggregators: VectorUDAFCountMerge(col 3:bigint) -> bigint
 className: VectorGroupByOperator
 groupByMode: FINAL
-keyExpressions: col 0:string, col 1:string, col 2:int
+keyExpressions: col 0:string, col 1:string, col 2:bigint
 native: false
 vectorProcessingMode: STREAMING
 projectedOutputColumnNums: [0]
-keys: KEY._col0 (type: string), KEY._col1 (type: string), 
KEY._col2 (type: int)
+keys: KEY._col0 (type: string), KEY._col1 (type: string), 
KEY._col2 (type: bigint)
 mode: final
 outputColumnNames: _col0, _col1, _col3
 Statistics: Num rows: 12 Data size: 4416 Basic stats: COMPLETE 
Column stats: NONE
@@ -314,18 +314,18 @@ STAGE PLANS:
 aggregators: VectorUDAFCountMerge(col 2:bigint) -> bigint
 className: VectorGroupByOperator
 groupByMode: PARTIALS
-keyExpressions: col 0:string, col 1:string, 
ConstantVectorExpression(val 0) -> 3:int
+keyExpressions: col 0:string, col 1:string, 
ConstantVectorExpression(val 0) -> 3:bigint
 native: false
 vectorProcessingMode: STREAMING
 projectedOutputColumnNums: [0]
-keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 
(type: int)
+keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 
(type: bigint)
 mode: partials
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 24 Data size: 8832 Basic stats: COMPLETE 
Column stats: NONE
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: bigint)
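
The recurring int to bigint changes in these plan outputs all have the same cause: the synthetic grouping-set key (grouping__id) is a bitmask with one bit per grouping column, and HIVE-18359 widens it from a 32-bit int to a 64-bit long so more grouping columns fit. A rough sketch of why the narrower type runs out of room; the exact bit layout Hive uses is not reproduced here:

public class GroupingIdWidth {
  // Build a grouping id as a bitmask, one bit per grouping column.
  static long groupingId(boolean[] columnBits) {
    long id = 0L;
    for (int i = 0; i < columnBits.length; i++) {
      if (columnBits[i]) {
        id |= 1L << i;   // with an int accumulator this overflows once i reaches 31
      }
    }
    return id;
  }

  public static void main(String[] args) {
    boolean[] bits = new boolean[40];   // 40 grouping columns: too many for a 32-bit id
    bits[39] = true;
    System.out.println(groupingId(bits));          // 549755813888
    System.out.println((int) groupingId(bits));    // 0 after truncation to 32 bits
  }
}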
   

[02/50] [abbrv] hive git commit: HIVE-18607 : HBase HFile write does strange things (Sergey Shelukhin, reviewed by Ashutosh Chauhan)

2018-02-12 Thread gates
HIVE-18607 : HBase HFile write does strange things (Sergey Shelukhin, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/717ef18d
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/717ef18d
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/717ef18d

Branch: refs/heads/standalone-metastore
Commit: 717ef18d96bd86cbe4448350c22c3766cd90e184
Parents: 58bbfc7
Author: sergey 
Authored: Fri Feb 9 14:31:26 2018 -0800
Committer: sergey 
Committed: Fri Feb 9 14:37:03 2018 -0800

--
 .../hive/hbase/HiveHFileOutputFormat.java   |  4 --
 .../test/queries/positive/hbase_handler_bulk.q  | 21 ++
 .../results/positive/hbase_handler_bulk.q.out   | 40 
 3 files changed, 61 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/717ef18d/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
--
diff --git 
a/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
 
b/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
index 81318be..d8dad06 100644
--- 
a/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
+++ 
b/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
@@ -176,10 +176,6 @@ public class HiveHFileOutputFormat extends
 columnFamilyPath,
 regionFile.getPath().getName()));
   }
-  // Hive actually wants a file as task output (not a directory), so
-  // replace the empty directory with an empty file to keep it happy.
-  fs.delete(taskAttemptOutputdir, true);
-  fs.createNewFile(taskAttemptOutputdir);
 } catch (InterruptedException ex) {
   throw new IOException(ex);
 }

http://git-wip-us.apache.org/repos/asf/hive/blob/717ef18d/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q
--
diff --git a/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q 
b/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q
index ac2fdfa..5ac4dc8 100644
--- a/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q
+++ b/hbase-handler/src/test/queries/positive/hbase_handler_bulk.q
@@ -14,6 +14,7 @@ set mapreduce.input.fileinputformat.split.maxsize=200;
 set mapreduce.input.fileinputformat.split.minsize=200;
 set mapred.reduce.tasks=2;
 
+
 -- this should produce three files in /tmp/hb_target/cf
 insert overwrite table hb_target select distinct key, value from src cluster 
by key;
 
@@ -24,3 +25,23 @@ insert overwrite table hb_target select distinct key, value 
from src cluster by
 
 drop table hb_target;
 dfs -rmr /tmp/hb_target/cf;
+
+
+create table hb_target(key int, val string)
+stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
+with serdeproperties ('hbase.columns.mapping' = ':key,cf:val')
+tblproperties ('hbase.mapreduce.hfileoutputformat.table.name' = 
'positive_hbase_handler_bulk');
+
+-- do it twice - regression test for HIVE-18607
+
+insert overwrite table hb_target select distinct key, value from src cluster 
by key;
+
+dfs -rmr /tmp/hb_target/cf;
+
+insert overwrite table hb_target select distinct key, value from src cluster 
by key;
+
+drop table hb_target;
+dfs -rmr /tmp/hb_target/cf;
+
+
+

http://git-wip-us.apache.org/repos/asf/hive/blob/717ef18d/hbase-handler/src/test/results/positive/hbase_handler_bulk.q.out
--
diff --git a/hbase-handler/src/test/results/positive/hbase_handler_bulk.q.out 
b/hbase-handler/src/test/results/positive/hbase_handler_bulk.q.out
index 10e1c0a..cd8930f 100644
--- a/hbase-handler/src/test/results/positive/hbase_handler_bulk.q.out
+++ b/hbase-handler/src/test/results/positive/hbase_handler_bulk.q.out
@@ -33,3 +33,43 @@ POSTHOOK: type: DROPTABLE
 POSTHOOK: Input: default@hb_target
 POSTHOOK: Output: default@hb_target
  A masked pattern was here 
+PREHOOK: query: create table hb_target(key int, val string)
+stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
+with serdeproperties ('hbase.columns.mapping' = ':key,cf:val')
+tblproperties ('hbase.mapreduce.hfileoutputformat.table.name' = 
'positive_hbase_handler_bulk')
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@hb_target
+POSTHOOK: query: create table hb_target(key int, val string)
+stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
+with serdeproperties ('hbase.columns.mapping' = ':key,cf:val')
+tblproperties ('hbase.mapreduce.hfileoutputformat.table.name' = 
'positive_hbase_handler_bulk')

[07/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_grouping.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_grouping.q.out
 
b/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_grouping.q.out
index b81a0d3..8dd5cf0 100644
--- 
a/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_grouping.q.out
+++ 
b/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_grouping.q.out
@@ -74,18 +74,18 @@ STAGE PLANS:
   Group By Vectorization:
   className: VectorGroupByOperator
   groupByMode: HASH
-  keyExpressions: col 0:int, col 1:int, 
ConstantVectorExpression(val 0) -> 3:int
+  keyExpressions: col 0:int, col 1:int, 
ConstantVectorExpression(val 0) -> 3:bigint
   native: false
   vectorProcessingMode: HASH
   projectedOutputColumnNums: []
-  keys: _col0 (type: int), _col1 (type: int), 0 (type: int)
+  keys: _col0 (type: int), _col1 (type: int), 0 (type: 
bigint)
   mode: hash
   outputColumnNames: _col0, _col1, _col2
   Statistics: Num rows: 18 Data size: 144 Basic stats: 
COMPLETE Column stats: NONE
   Reduce Output Operator
-key expressions: _col0 (type: int), _col1 (type: int), 
_col2 (type: int)
+key expressions: _col0 (type: int), _col1 (type: int), 
_col2 (type: bigint)
 sort order: +++
-Map-reduce partition columns: _col0 (type: int), _col1 
(type: int), _col2 (type: int)
+Map-reduce partition columns: _col0 (type: int), _col1 
(type: int), _col2 (type: bigint)
 Reduce Sink Vectorization:
 className: VectorReduceSinkMultiKeyOperator
 keyColumnNums: [0, 1, 2]
@@ -122,7 +122,7 @@ STAGE PLANS:
 vectorized: true
 rowBatchContext:
 dataColumnCount: 3
-dataColumns: KEY._col0:int, KEY._col1:int, KEY._col2:int
+dataColumns: KEY._col0:int, KEY._col1:int, KEY._col2:bigint
 partitionColumnCount: 0
 scratchColumnTypeNames: []
 Reduce Operator Tree:
@@ -130,22 +130,22 @@ STAGE PLANS:
 Group By Vectorization:
 className: VectorGroupByOperator
 groupByMode: MERGEPARTIAL
-keyExpressions: col 0:int, col 1:int, col 2:int
+keyExpressions: col 0:int, col 1:int, col 2:bigint
 native: false
 vectorProcessingMode: MERGE_PARTIAL
 projectedOutputColumnNums: []
-keys: KEY._col0 (type: int), KEY._col1 (type: int), KEY._col2 
(type: int)
+keys: KEY._col0 (type: int), KEY._col1 (type: int), KEY._col2 
(type: bigint)
 mode: mergepartial
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 9 Data size: 72 Basic stats: COMPLETE 
Column stats: NONE
 Select Operator
-  expressions: _col0 (type: int), _col1 (type: int), _col2 
(type: int), grouping(_col2, 1) (type: int), grouping(_col2, 0) (type: int)
+  expressions: _col0 (type: int), _col1 (type: int), _col2 
(type: bigint), grouping(_col2, 1) (type: bigint), grouping(_col2, 0) (type: 
bigint)
   outputColumnNames: _col0, _col1, _col2, _col3, _col4
   Select Vectorization:
   className: VectorSelectOperator
   native: true
   projectedOutputColumnNums: [0, 1, 2, 3, 4]
-  selectExpressions: VectorUDFAdaptor(grouping(_col2, 1)) 
-> 3:int, VectorUDFAdaptor(grouping(_col2, 0)) -> 4:int
+  selectExpressions: VectorUDFAdaptor(grouping(_col2, 1)) 
-> 3:bigint, VectorUDFAdaptor(grouping(_col2, 0)) -> 4:bigint
   Statistics: Num rows: 9 Data size: 72 Basic stats: COMPLETE 
Column stats: NONE
   File Output Operator
 compressed: false
@@ -235,18 +235,18 @@ STAGE PLANS:
   Group By Vectorization:
   className: VectorGroupByOperator
   groupByMode: HASH
-  keyExpressions: col 0:int, col 1:int, 
ConstantVectorExpression(val 0) -> 3:int
+  keyExpressions: col 0:int, col 1:int, 
ConstantVectorExpression(val 0) -> 3:bigint
   

[26/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/pom.xml
--
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index 58ed741..df769f5 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -38,6 +38,8 @@
 1.8
 
false
 ${settings.localRepository}
+2.3
+1.6.0
 
 
 ${project.basedir}/src/test/resources
@@ -45,6 +47,8 @@
 
${project.build.directory}/warehouse
 file://
 1
+true
+
set-this-to-colon-separated-full-path-list-of-jars-to-run-integration-tests
 
 
 1.0b3
@@ -75,6 +79,7 @@
 2.8.2
 1.10.19
 2.5.0
+1.3.0
 3.0.0-SNAPSHOT
 
 
@@ -277,10 +282,21 @@
   1.4.0
   test
 
+
+  sqlline
+  sqlline
+  ${sqlline.version}
+
 
 
 
 
+  com.microsoft.sqlserver
+  mssql-jdbc
+  6.2.1.jre8
+  test
+
+
   junit
   junit
   ${junit.version}
@@ -292,6 +308,20 @@
   ${mockito-all.version}
   test
 
+
+
+  org.mariadb.jdbc
+  mariadb-java-client
+  2.2.0
+  test
+
+
+  org.postgresql
+  postgresql
+  9.3-1102-jdbc41
+  test
+
   
 
   
@@ -427,6 +457,11 @@
   maven-checkstyle-plugin
   ${maven.checkstyle.plugin.version}
 
+
+  org.codehaus.mojo
+  exec-maven-plugin
+  ${maven.exec.plugin.version}
+
   
 
 
@@ -467,6 +502,21 @@
   run
 
   
+  
+setup-metastore-scripts
+process-test-resources
+
+  run
+
+
+  
+
+
+  
+
+  
+
+  
 
   
   
@@ -498,11 +548,62 @@
   
 
   
-  
   
   
   
 org.apache.maven.plugins
+maven-assembly-plugin
+${maven.assembly.plugin.version}
+
+  
+assemble
+package
+
+  single
+
+
+  apache-hive-metastore-${project.version}
+  
+src/assembly/bin.xml
+src/assembly/src.xml
+  
+  gnu
+
+  
+
+  
+  
+org.apache.maven.plugins
+maven-failsafe-plugin
+2.20.1
+
+  
+
+  integration-test
+  verify
+
+  
+
+
+  true
+  false
+  -Xmx2048m
+  false
+  
+true
+${test.tmp.dir}
+${test.tmp.dir}
+true
+  
+  
+
${log4j.conf.dir}
+
${itest.jdbc.jars}
+  
+  ${skipITests} 
+
+  
+  
+org.apache.maven.plugins
 maven-surefire-plugin
 2.16
 
@@ -583,6 +684,27 @@
 
   
   
+org.codehaus.mojo
+exec-maven-plugin
+
+  
+prepare-package
+
+  exec
+
+  
+
+
+  java
+  
+-classpath
+
+
org.apache.hadoop.hive.metastore.conf.ConfTemplatePrinter
+
${project.build.directory}/generated-sources/conf/metastore-site.xml.template
+  
+
+  
+  
 org.datanucleus
 datanucleus-maven-plugin
 4.0.5

http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/assembly/bin.xml
--
diff --git a/standalone-metastore/src/assembly/bin.xml 
b/standalone-metastore/src/assembly/bin.xml
new file mode 100644
index 000..81912d7
--- /dev/null
+++ b/standalone-metastore/src/assembly/bin.xml
@@ -0,0 +1,136 @@
+
+
+http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2;
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+  
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2
 http://maven.apache.org/xsd/assembly-1.1.2.xsd;>
+
+  bin
+
+  
+dir
+tar.gz
+  
+
+  apache-hive-metastore-${project.version}-bin
+
+  
+
+  lib
+  false
+  true
+  true
+  true
+  
+org.apache.hadoop:*
+org.slf4j:*
+log4j:*
+  
+
+  
+
+  
+
+  ${project.basedir}
+  
+target/**
+.classpath
+.project
+.settings/**
+lib/**
+  
+
+  
+README.txt
+LICENSE
+NOTICE
+  
+  /
+
+
+
+  ${project.basedir}/binary-package-licenses
+  
+/*
+  
+  
+/README
+  
+  binary-package-licenses
+  

[10/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/llap/groupby_rollup_empty.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/groupby_rollup_empty.q.out 
b/ql/src/test/results/clientpositive/llap/groupby_rollup_empty.q.out
index f2cda04..24be36e 100644
--- a/ql/src/test/results/clientpositive/llap/groupby_rollup_empty.q.out
+++ b/ql/src/test/results/clientpositive/llap/groupby_rollup_empty.q.out
@@ -175,14 +175,14 @@ STAGE PLANS:
   Statistics: Num rows: 1 Data size: 12 Basic stats: 
COMPLETE Column stats: NONE
   Group By Operator
 aggregations: sum(_col2)
-keys: _col0 (type: int), _col1 (type: int), 0 (type: 
int)
+keys: _col0 (type: int), _col1 (type: int), 0 (type: 
bigint)
 mode: hash
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 3 Data size: 36 Basic stats: 
COMPLETE Column stats: NONE
 Reduce Output Operator
-  key expressions: _col0 (type: int), _col1 (type: 
int), _col2 (type: int)
+  key expressions: _col0 (type: int), _col1 (type: 
int), _col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: int), 
_col1 (type: int), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: int), 
_col1 (type: int), _col2 (type: bigint)
   Statistics: Num rows: 3 Data size: 36 Basic stats: 
COMPLETE Column stats: NONE
   value expressions: _col3 (type: bigint)
 Execution mode: vectorized, llap
@@ -192,12 +192,12 @@ STAGE PLANS:
 Reduce Operator Tree:
   Group By Operator
 aggregations: sum(VALUE._col0)
-keys: KEY._col0 (type: int), KEY._col1 (type: int), KEY._col2 
(type: int)
+keys: KEY._col0 (type: int), KEY._col1 (type: int), KEY._col2 
(type: bigint)
 mode: mergepartial
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 1 Data size: 12 Basic stats: COMPLETE 
Column stats: NONE
 Select Operator
-  expressions: _col3 (type: bigint), grouping(_col2, 0) (type: 
int), 'NULL,1' (type: string)
+  expressions: _col3 (type: bigint), grouping(_col2, 0) (type: 
bigint), 'NULL,1' (type: string)
   outputColumnNames: _col0, _col1, _col2
   Statistics: Num rows: 1 Data size: 12 Basic stats: COMPLETE 
Column stats: NONE
   File Output Operator

http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/llap/llap_acid.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/llap_acid.q.out 
b/ql/src/test/results/clientpositive/llap/llap_acid.q.out
index 38889b9..4ed45e7 100644
--- a/ql/src/test/results/clientpositive/llap/llap_acid.q.out
+++ b/ql/src/test/results/clientpositive/llap/llap_acid.q.out
@@ -174,23 +174,25 @@ POSTHOOK: Input: default@orc_llap@csmallint=1
 POSTHOOK: Input: default@orc_llap@csmallint=2
 POSTHOOK: Input: default@orc_llap@csmallint=3
  A masked pattern was here 
--285355633 1   -1241163445
--109813638 1   -58941842
-164554497  1   1161977292
-199879534  1   123351087
+-838810013 1   1864027286
+-595277064 1   -1645852809
+-334595454 1   -1645852809
+185212032  1   -1645852809
+186967185  1   -1645852809
+241008004  1   -1645852809
 246423894  1   -1645852809
-354670578  1   562841852
-455419170  1   1108177470
-665801232  1   480783141
+518213127  1   -1645852809
+584923170  1   -1645852809
 708885482  1   -1645852809
--285355633 2   -1241163445
--109813638 2   -58941842
-164554497  2   1161977292
-199879534  2   123351087
+-838810013 2   1864027286
+-595277064 2   -1645852809
+-334595454 2   -1645852809
+185212032  2   -1645852809
+186967185  2   -1645852809
+241008004  2   -1645852809
 246423894  2   -1645852809
-354670578  2   562841852
-455419170  2   1108177470
-665801232  2   480783141
+518213127  2   -1645852809
+584923170  2   -1645852809
 708885482  2   -1645852809
 -923308739 3   -1887561756
 -3728  3   -1887561756
@@ -424,24 +426,26 @@ POSTHOOK: Input: default@orc_llap@csmallint=1
 POSTHOOK: Input: default@orc_llap@csmallint=2
 POSTHOOK: Input: default@orc_llap@csmallint=3
  A masked pattern was here 

[47/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
index af0fd6b..14718b5 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
@@ -1240,14 +1240,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::read(::apache::thrift::protoc
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size1049;
-::apache::thrift::protocol::TType _etype1052;
-xfer += iprot->readListBegin(_etype1052, _size1049);
-this->success.resize(_size1049);
-uint32_t _i1053;
-for (_i1053 = 0; _i1053 < _size1049; ++_i1053)
+uint32_t _size1076;
+::apache::thrift::protocol::TType _etype1079;
+xfer += iprot->readListBegin(_etype1079, _size1076);
+this->success.resize(_size1076);
+uint32_t _i1080;
+for (_i1080 = 0; _i1080 < _size1076; ++_i1080)
 {
-  xfer += iprot->readString(this->success[_i1053]);
+  xfer += iprot->readString(this->success[_i1080]);
 }
 xfer += iprot->readListEnd();
   }
@@ -1286,10 +1286,10 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::write(::apache::thrift::proto
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
   xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, 
static_cast(this->success.size()));
-  std::vector ::const_iterator _iter1054;
-  for (_iter1054 = this->success.begin(); _iter1054 != 
this->success.end(); ++_iter1054)
+  std::vector ::const_iterator _iter1081;
+  for (_iter1081 = this->success.begin(); _iter1081 != 
this->success.end(); ++_iter1081)
   {
-xfer += oprot->writeString((*_iter1054));
+xfer += oprot->writeString((*_iter1081));
   }
   xfer += oprot->writeListEnd();
 }
@@ -1334,14 +1334,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_presult::read(::apache::thrift::proto
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 (*(this->success)).clear();
-uint32_t _size1055;
-::apache::thrift::protocol::TType _etype1058;
-xfer += iprot->readListBegin(_etype1058, _size1055);
-(*(this->success)).resize(_size1055);
-uint32_t _i1059;
-for (_i1059 = 0; _i1059 < _size1055; ++_i1059)
+uint32_t _size1082;
+::apache::thrift::protocol::TType _etype1085;
+xfer += iprot->readListBegin(_etype1085, _size1082);
+(*(this->success)).resize(_size1082);
+uint32_t _i1086;
+for (_i1086 = 0; _i1086 < _size1082; ++_i1086)
 {
-  xfer += iprot->readString((*(this->success))[_i1059]);
+  xfer += iprot->readString((*(this->success))[_i1086]);
 }
 xfer += iprot->readListEnd();
   }
@@ -1458,14 +1458,14 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::read(::apache::thrift::pr
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size1060;
-::apache::thrift::protocol::TType _etype1063;
-xfer += iprot->readListBegin(_etype1063, _size1060);
-this->success.resize(_size1060);
-uint32_t _i1064;
-for (_i1064 = 0; _i1064 < _size1060; ++_i1064)
+uint32_t _size1087;
+::apache::thrift::protocol::TType _etype1090;
+xfer += iprot->readListBegin(_etype1090, _size1087);
+this->success.resize(_size1087);
+uint32_t _i1091;
+for (_i1091 = 0; _i1091 < _size1087; ++_i1091)
 {
-  xfer += iprot->readString(this->success[_i1064]);
+  xfer += iprot->readString(this->success[_i1091]);
 }
 xfer += iprot->readListEnd();
   }
@@ -1504,10 +1504,10 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::write(::apache::thrift::p
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
   xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, 
static_cast(this->success.size()));
-  std::vector ::const_iterator _iter1065;
-  for (_iter1065 = this->success.begin(); _iter1065 != 
this->success.end(); ++_iter1065)
+  std::vector ::const_iterator _iter1092;
+  for (_iter1092 = this->success.begin(); _iter1092 != 
this->success.end(); ++_iter1092)
   {
-xfer += 

[36/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py 
b/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
index 5598859..ea9da89 100644
--- a/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
+++ b/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
@@ -211,6 +211,100 @@ class EventRequestType:
 "DELETE": 3,
   }
 
+class SerdeType:
+  HIVE = 1
+  SCHEMA_REGISTRY = 2
+
+  _VALUES_TO_NAMES = {
+1: "HIVE",
+2: "SCHEMA_REGISTRY",
+  }
+
+  _NAMES_TO_VALUES = {
+"HIVE": 1,
+"SCHEMA_REGISTRY": 2,
+  }
+
+class SchemaType:
+  HIVE = 1
+  AVRO = 2
+
+  _VALUES_TO_NAMES = {
+1: "HIVE",
+2: "AVRO",
+  }
+
+  _NAMES_TO_VALUES = {
+"HIVE": 1,
+"AVRO": 2,
+  }
+
+class SchemaCompatibility:
+  NONE = 1
+  BACKWARD = 2
+  FORWARD = 3
+  BOTH = 4
+
+  _VALUES_TO_NAMES = {
+1: "NONE",
+2: "BACKWARD",
+3: "FORWARD",
+4: "BOTH",
+  }
+
+  _NAMES_TO_VALUES = {
+"NONE": 1,
+"BACKWARD": 2,
+"FORWARD": 3,
+"BOTH": 4,
+  }
+
+class SchemaValidation:
+  LATEST = 1
+  ALL = 2
+
+  _VALUES_TO_NAMES = {
+1: "LATEST",
+2: "ALL",
+  }
+
+  _NAMES_TO_VALUES = {
+"LATEST": 1,
+"ALL": 2,
+  }
+
+class SchemaVersionState:
+  INITIATED = 1
+  START_REVIEW = 2
+  CHANGES_REQUIRED = 3
+  REVIEWED = 4
+  ENABLED = 5
+  DISABLED = 6
+  ARCHIVED = 7
+  DELETED = 8
+
+  _VALUES_TO_NAMES = {
+1: "INITIATED",
+2: "START_REVIEW",
+3: "CHANGES_REQUIRED",
+4: "REVIEWED",
+5: "ENABLED",
+6: "DISABLED",
+7: "ARCHIVED",
+8: "DELETED",
+  }
+
+  _NAMES_TO_VALUES = {
+"INITIATED": 1,
+"START_REVIEW": 2,
+"CHANGES_REQUIRED": 3,
+"REVIEWED": 4,
+"ENABLED": 5,
+"DISABLED": 6,
+"ARCHIVED": 7,
+"DELETED": 8,
+  }
+
 class FunctionType:
   JAVA = 1
 
@@ -2897,6 +2991,10 @@ class SerDeInfo:
- name
- serializationLib
- parameters
+   - description
+   - serializerClass
+   - deserializerClass
+   - serdeType
   """
 
   thrift_spec = (
@@ -2904,12 +3002,20 @@ class SerDeInfo:
 (1, TType.STRING, 'name', None, None, ), # 1
 (2, TType.STRING, 'serializationLib', None, None, ), # 2
 (3, TType.MAP, 'parameters', (TType.STRING,None,TType.STRING,None), None, 
), # 3
+(4, TType.STRING, 'description', None, None, ), # 4
+(5, TType.STRING, 'serializerClass', None, None, ), # 5
+(6, TType.STRING, 'deserializerClass', None, None, ), # 6
+(7, TType.I32, 'serdeType', None, None, ), # 7
   )
 
-  def __init__(self, name=None, serializationLib=None, parameters=None,):
+  def __init__(self, name=None, serializationLib=None, parameters=None, 
description=None, serializerClass=None, deserializerClass=None, 
serdeType=None,):
 self.name = name
 self.serializationLib = serializationLib
 self.parameters = parameters
+self.description = description
+self.serializerClass = serializerClass
+self.deserializerClass = deserializerClass
+self.serdeType = serdeType
 
   def read(self, iprot):
 if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and 
isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is 
not None and fastbinary is not None:
@@ -2941,6 +3047,26 @@ class SerDeInfo:
   iprot.readMapEnd()
 else:
   iprot.skip(ftype)
+  elif fid == 4:
+if ftype == TType.STRING:
+  self.description = iprot.readString()
+else:
+  iprot.skip(ftype)
+  elif fid == 5:
+if ftype == TType.STRING:
+  self.serializerClass = iprot.readString()
+else:
+  iprot.skip(ftype)
+  elif fid == 6:
+if ftype == TType.STRING:
+  self.deserializerClass = iprot.readString()
+else:
+  iprot.skip(ftype)
+  elif fid == 7:
+if ftype == TType.I32:
+  self.serdeType = iprot.readI32()
+else:
+  iprot.skip(ftype)
   else:
 iprot.skip(ftype)
   iprot.readFieldEnd()
@@ -2967,6 +3093,22 @@ class SerDeInfo:
 oprot.writeString(viter100)
   oprot.writeMapEnd()
   oprot.writeFieldEnd()
+if self.description is not None:
+  oprot.writeFieldBegin('description', TType.STRING, 4)
+  oprot.writeString(self.description)
+  oprot.writeFieldEnd()
+if self.serializerClass is not None:
+  oprot.writeFieldBegin('serializerClass', TType.STRING, 5)
+  oprot.writeString(self.serializerClass)
+  oprot.writeFieldEnd()
+if self.deserializerClass is not None:
+  oprot.writeFieldBegin('deserializerClass', TType.STRING, 6)
+  oprot.writeString(self.deserializerClass)
+  oprot.writeFieldEnd()
+if self.serdeType is not None:
+  oprot.writeFieldBegin('serdeType', TType.I32, 7)
+

[33/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreSchemaMethods.java
--
diff --git 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreSchemaMethods.java
 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreSchemaMethods.java
new file mode 100644
index 000..0ceb84a
--- /dev/null
+++ 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreSchemaMethods.java
@@ -0,0 +1,887 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.api.AlreadyExistsException;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.FindSchemasByColsResp;
+import org.apache.hadoop.hive.metastore.api.FindSchemasByColsRespEntry;
+import org.apache.hadoop.hive.metastore.api.FindSchemasByColsRqst;
+import org.apache.hadoop.hive.metastore.api.ISchema;
+import org.apache.hadoop.hive.metastore.api.InvalidOperationException;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
+import org.apache.hadoop.hive.metastore.api.SchemaCompatibility;
+import org.apache.hadoop.hive.metastore.api.SchemaType;
+import org.apache.hadoop.hive.metastore.api.SchemaValidation;
+import org.apache.hadoop.hive.metastore.api.SchemaVersion;
+import org.apache.hadoop.hive.metastore.api.SchemaVersionState;
+import org.apache.hadoop.hive.metastore.api.SerDeInfo;
+import org.apache.hadoop.hive.metastore.api.SerdeType;
+import org.apache.hadoop.hive.metastore.client.builder.ISchemaBuilder;
+import org.apache.hadoop.hive.metastore.client.builder.SchemaVersionBuilder;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;
+import org.apache.hadoop.hive.metastore.events.AddSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.AlterISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.AlterSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.CreateISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.DropISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.DropSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.PreEventContext;
+import org.apache.hadoop.hive.metastore.messaging.EventMessage;
+import org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge;
+import org.apache.thrift.TException;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+// This does the testing using a remote metastore, as that finds more issues 
in thrift
+public class TestHiveMetaStoreSchemaMethods {
+  private static Map events;
+  private static Map transactionalEvents;
+  private static Map preEvents;
+
+  private static IMetaStoreClient client;
+
+
+  @BeforeClass
+  public static void startMetastore() throws Exception {
+Configuration conf = MetastoreConf.newMetastoreConf();
+int port = MetaStoreTestUtils.findFreePort();
+MetastoreConf.setVar(conf, ConfVars.THRIFT_URIS, "thrift://localhost:" + 
port);
+MetastoreConf.setClass(conf, ConfVars.EVENT_LISTENERS, 
SchemaEventListener.class,
+MetaStoreEventListener.class);
+MetastoreConf.setClass(conf, ConfVars.TRANSACTIONAL_EVENT_LISTENERS, 
TransactionalSchemaEventListener.class,
+MetaStoreEventListener.class);
+MetastoreConf.setClass(conf, ConfVars.PRE_EVENT_LISTENERS, 
SchemaPreEventListener.class,
+MetaStorePreEventListener.class);
+MetaStoreTestUtils.setConfForStandloneMode(conf);
+MetaStoreTestUtils.startMetaStore(port, 
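
The @BeforeClass method above starts an embedded Thrift metastore on a free port and registers the event, transactional-event and pre-event listeners whose invocations the test counts. A minimal sketch of the step that follows, assuming the standalone HiveMetaStoreClient keeps its Configuration-based constructor (exceptions are already covered by the method's throws Exception clause):

    // Illustration only, not quoted from the patch:
    client = new HiveMetaStoreClient(conf);   // Thrift connection to localhost:port
    // ... the tests then drive the new schema-registry calls through this client,
    // and a matching @AfterClass method would call client.close().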

[16/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran 
reviewed by Jesus Camacho Rodriguez)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/ddd4c9ae
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/ddd4c9ae
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/ddd4c9ae

Branch: refs/heads/standalone-metastore
Commit: ddd4c9aea6166129be289757e1721d0cfccfef66
Parents: 89e75c7
Author: Prasanth Jayachandran 
Authored: Sat Feb 10 11:22:05 2018 -0600
Committer: Prasanth Jayachandran 
Committed: Sat Feb 10 11:22:05 2018 -0600

--
 .../org/apache/hadoop/hive/ql/ErrorMsg.java | 2 +
 .../hadoop/hive/ql/exec/GroupByOperator.java|20 +-
 .../ql/exec/vector/VectorGroupByOperator.java   | 8 +-
 .../hadoop/hive/ql/metadata/VirtualColumn.java  | 2 +-
 .../calcite/reloperators/HiveGroupingID.java| 2 +-
 .../rules/HiveExpandDistinctAggregatesRule.java | 2 +-
 .../calcite/translator/HiveGBOpConvUtil.java| 8 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java|20 +-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java  |   110 +-
 .../apache/hadoop/hive/ql/plan/GroupByDesc.java |12 +-
 .../hive/ql/udf/generic/GenericUDFGrouping.java |39 +-
 .../test/queries/clientnegative/groupby_cube3.q |90 +
 .../clientnegative/groupby_grouping_sets8.q |98 +
 .../queries/clientnegative/groupby_rollup3.q|90 +
 ql/src/test/queries/clientpositive/cte_1.q  | 2 +-
 .../clientpositive/groupingset_high_columns.q   |   259 +
 .../results/clientnegative/groupby_cube3.q.out  |18 +
 .../clientnegative/groupby_grouping_sets8.q.out |18 +
 .../clientnegative/groupby_rollup3.q.out|18 +
 .../clientpositive/annotate_stats_groupby.q.out |   192 +-
 .../annotate_stats_groupby2.q.out   |40 +-
 .../cbo_rp_annotate_stats_groupby.q.out |64 +-
 .../results/clientpositive/groupby_cube1.q.out  |74 +-
 .../clientpositive/groupby_cube_multi_gby.q.out |16 +-
 .../clientpositive/groupby_grouping_id3.q.out   |20 +-
 .../clientpositive/groupby_grouping_sets1.q.out |40 +-
 .../clientpositive/groupby_grouping_sets2.q.out |32 +-
 .../clientpositive/groupby_grouping_sets3.q.out |24 +-
 .../clientpositive/groupby_grouping_sets4.q.out |48 +-
 .../clientpositive/groupby_grouping_sets5.q.out |24 +-
 .../clientpositive/groupby_grouping_sets6.q.out |16 +-
 .../groupby_grouping_sets_grouping.q.out|   128 +-
 .../groupby_grouping_sets_limit.q.out   |32 +-
 .../groupby_grouping_window.q.out   | 8 +-
 .../clientpositive/groupby_rollup1.q.out|56 +-
 .../clientpositive/groupby_rollup_empty.q.out   |10 +-
 .../groupingset_high_columns.q.out  |  1169 +
 .../infer_bucket_sort_grouping_operators.q.out  |24 +-
 .../clientpositive/limit_pushdown2.q.out|16 +-
 .../results/clientpositive/llap/cte_1.q.out | 36670 -
 .../llap/groupby_rollup_empty.q.out |10 +-
 .../results/clientpositive/llap/llap_acid.q.out |60 +-
 .../clientpositive/llap/llap_acid_fast.q.out|60 +-
 .../llap/multi_count_distinct_null.q.out|58 +-
 .../llap/vector_groupby_cube1.q.out |74 +-
 .../llap/vector_groupby_grouping_id1.q.out  |   100 +-
 .../llap/vector_groupby_grouping_id2.q.out  |   306 +-
 .../llap/vector_groupby_grouping_id3.q.out  |42 +-
 .../llap/vector_groupby_grouping_sets1.q.out|70 +-
 .../llap/vector_groupby_grouping_sets2.q.out|62 +-
 .../llap/vector_groupby_grouping_sets3.q.out|38 +-
 .../vector_groupby_grouping_sets3_dec.q.out |42 +-
 .../llap/vector_groupby_grouping_sets4.q.out|72 +-
 .../llap/vector_groupby_grouping_sets5.q.out|42 +-
 .../llap/vector_groupby_grouping_sets6.q.out|28 +-
 .../vector_groupby_grouping_sets_grouping.q.out |   230 +-
 .../vector_groupby_grouping_sets_limit.q.out|56 +-
 .../llap/vector_groupby_grouping_window.q.out   |28 +-
 .../llap/vector_groupby_rollup1.q.out   |96 +-
 .../llap/vector_grouping_sets.q.out |44 +-
 .../clientpositive/perf/spark/query18.q.out | 8 +-
 .../clientpositive/perf/spark/query22.q.out | 8 +-
 .../clientpositive/perf/spark/query27.q.out |14 +-
 .../clientpositive/perf/spark/query36.q.out |26 +-
 .../clientpositive/perf/spark/query5.q.out  |20 +-
 .../clientpositive/perf/spark/query67.q.out | 8 +-
 .../clientpositive/perf/spark/query70.q.out |26 +-
 .../clientpositive/perf/spark/query77.q.out |20 +-
 .../clientpositive/perf/spark/query80.q.out |20 +-
 .../clientpositive/perf/spark/query86.q.out |26 +-
 .../clientpositive/spark/groupby_cube1.q.out
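
Behind the one-line summary: Hive encodes each grouping set as a bitmask over the GROUP BY keys (surfaced to queries as grouping__id), so the width of that mask bounds how many grouping columns a query may use. Widening the mask from int to long, and the key expressions from int to bigint as the q.out diffs below show, raises the cap from 32-bit to 64-bit territory; the new groupingset_high_columns test exercises 32 and 33 keys. A tiny illustration of the bit arithmetic (the exact bit convention is Hive-internal; this only shows why the mask width is the limit):

// Illustration only: one bit of the grouping-set id per GROUP BY key.
long groupingSetId = 0L;
int[] activeKeys = {0, 5, 33};        // key position 33 does not fit in a 32-bit mask
for (int pos : activeKeys) {
  groupingSetId |= 1L << pos;         // mark this key as participating in the grouping set
}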

[09/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_id2.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_id2.q.out 
b/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_id2.q.out
index e6075c7..74f6289 100644
--- a/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_id2.q.out
+++ b/ql/src/test/results/clientpositive/llap/vector_groupby_grouping_id2.q.out
@@ -73,16 +73,16 @@ STAGE PLANS:
   aggregators: VectorUDAFCountStar(*) -> bigint
   className: VectorGroupByOperator
   groupByMode: HASH
-  keyExpressions: col 0:int, col 1:int, 
ConstantVectorExpression(val 0) -> 3:int
+  keyExpressions: col 0:int, col 1:int, 
ConstantVectorExpression(val 0) -> 3:bigint
   native: false
   vectorProcessingMode: HASH
   projectedOutputColumnNums: [0]
-  keys: _col0 (type: int), _col1 (type: int), 0 (type: int)
+  keys: _col0 (type: int), _col1 (type: int), 0 (type: 
bigint)
   mode: hash
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 18 Data size: 144 Basic stats: 
COMPLETE Column stats: NONE
   Reduce Output Operator
-key expressions: _col0 (type: int), _col1 (type: int), 
_col2 (type: int)
+key expressions: _col0 (type: int), _col1 (type: int), 
_col2 (type: bigint)
 sort order: +++
 Map-reduce partition columns: rand() (type: double)
 Reduce Sink Vectorization:
@@ -123,7 +123,7 @@ STAGE PLANS:
 vectorized: true
 rowBatchContext:
 dataColumnCount: 4
-dataColumns: KEY._col0:int, KEY._col1:int, KEY._col2:int, 
VALUE._col0:bigint
+dataColumns: KEY._col0:int, KEY._col1:int, 
KEY._col2:bigint, VALUE._col0:bigint
 partitionColumnCount: 0
 scratchColumnTypeNames: []
 Reduce Operator Tree:
@@ -133,16 +133,16 @@ STAGE PLANS:
 aggregators: VectorUDAFCountMerge(col 3:bigint) -> bigint
 className: VectorGroupByOperator
 groupByMode: PARTIALS
-keyExpressions: col 0:int, col 1:int, col 2:int
+keyExpressions: col 0:int, col 1:int, col 2:bigint
 native: false
 vectorProcessingMode: STREAMING
 projectedOutputColumnNums: [0]
-keys: KEY._col0 (type: int), KEY._col1 (type: int), KEY._col2 
(type: int)
+keys: KEY._col0 (type: int), KEY._col1 (type: int), KEY._col2 
(type: bigint)
 mode: partials
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 18 Data size: 144 Basic stats: COMPLETE 
Column stats: NONE
 Reduce Output Operator
-  key expressions: _col0 (type: int), _col1 (type: int), _col2 
(type: int)
+  key expressions: _col0 (type: int), _col1 (type: int), _col2 
(type: bigint)
   sort order: +++
   Map-reduce partition columns: _col0 (type: int), _col1 
(type: int)
   Reduce Sink Vectorization:
@@ -166,7 +166,7 @@ STAGE PLANS:
 vectorized: true
 rowBatchContext:
 dataColumnCount: 4
-dataColumns: KEY._col0:int, KEY._col1:int, KEY._col2:int, 
VALUE._col0:bigint
+dataColumns: KEY._col0:int, KEY._col1:int, 
KEY._col2:bigint, VALUE._col0:bigint
 partitionColumnCount: 0
 scratchColumnTypeNames: []
 Reduce Operator Tree:
@@ -176,16 +176,16 @@ STAGE PLANS:
 aggregators: VectorUDAFCountMerge(col 3:bigint) -> bigint
 className: VectorGroupByOperator
 groupByMode: FINAL
-keyExpressions: col 0:int, col 1:int, col 2:int
+keyExpressions: col 0:int, col 1:int, col 2:bigint
 native: false
 vectorProcessingMode: STREAMING
 projectedOutputColumnNums: [0]
-keys: KEY._col0 (type: int), KEY._col1 (type: int), KEY._col2 
(type: int)
+keys: KEY._col0 (type: int), KEY._col1 (type: int), KEY._col2 
(type: bigint)
 mode: final
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 9 Data size: 72 Basic stats: COMPLETE 
Column 

[21/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/sql/mysql/upgrade-2.1.0-to-2.2.0.mysql.sql
--
diff --git 
a/standalone-metastore/src/main/sql/mysql/upgrade-2.1.0-to-2.2.0.mysql.sql 
b/standalone-metastore/src/main/sql/mysql/upgrade-2.1.0-to-2.2.0.mysql.sql
new file mode 100644
index 000..b114587
--- /dev/null
+++ b/standalone-metastore/src/main/sql/mysql/upgrade-2.1.0-to-2.2.0.mysql.sql
@@ -0,0 +1,43 @@
+SELECT 'Upgrading MetaStore schema from 2.1.0 to 2.2.0' AS ' ';
+
+--SOURCE 037-HIVE-14496.mysql.sql;
+-- Step 1: Add the column allowing null
+ALTER TABLE `TBLS` ADD `IS_REWRITE_ENABLED` bit(1);
+
+ -- Step 2: Replace the null with default value (false)
+UPDATE `TBLS` SET `IS_REWRITE_ENABLED` = false;
+
+-- Step 3: Alter the column to disallow null values
+ALTER TABLE `TBLS` MODIFY COLUMN `IS_REWRITE_ENABLED` bit(1) NOT NULL DEFAULT 
0;
+
+--SOURCE 038-HIVE-10562.mysql.sql;
+-- Step 1: Add the column for format
+ALTER TABLE `NOTIFICATION_LOG` ADD `MESSAGE_FORMAT` varchar(16);
+-- if MESSAGE_FORMAT is null, then it is the legacy hcat JSONMessageFactory 
that created this message
+
+-- Step 2 : Change the type of the MESSAGE field from mediumtext to longtext
+ALTER TABLE `NOTIFICATION_LOG` MODIFY `MESSAGE` longtext;
+
+--SOURCE 039-HIVE-12274.mysql.sql;
+ALTER TABLE COLUMNS_V2 MODIFY TYPE_NAME MEDIUMTEXT;
+ALTER TABLE TABLE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
+ALTER TABLE SERDE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
+ALTER TABLE SD_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
+
+ALTER TABLE TBLS MODIFY TBL_NAME varchar(256) CHARACTER SET latin1 COLLATE 
latin1_bin DEFAULT NULL;
+ALTER TABLE NOTIFICATION_LOG MODIFY TBL_NAME varchar(256) CHARACTER SET latin1 
COLLATE latin1_bin;
+ALTER TABLE PARTITION_EVENTS MODIFY TBL_NAME varchar(256) CHARACTER SET latin1 
COLLATE latin1_bin DEFAULT NULL;
+ALTER TABLE TAB_COL_STATS MODIFY TABLE_NAME varchar(256) CHARACTER SET latin1 
COLLATE latin1_bin NOT NULL;
+ALTER TABLE PART_COL_STATS MODIFY TABLE_NAME varchar(256) CHARACTER SET latin1 
COLLATE latin1_bin NOT NULL;
+ALTER TABLE COMPLETED_TXN_COMPONENTS MODIFY CTC_TABLE varchar(256) CHARACTER 
SET latin1 COLLATE latin1_bin;
+
+ALTER TABLE COLUMNS_V2 MODIFY COLUMN_NAME varchar(767) CHARACTER SET latin1 
COLLATE latin1_bin NOT NULL;
+ALTER TABLE PART_COL_PRIVS MODIFY COLUMN_NAME varchar(767) CHARACTER SET 
latin1 COLLATE latin1_bin DEFAULT NULL;
+ALTER TABLE TBL_COL_PRIVS MODIFY COLUMN_NAME varchar(767) CHARACTER SET latin1 
COLLATE latin1_bin DEFAULT NULL;
+ALTER TABLE SORT_COLS MODIFY COLUMN_NAME varchar(767) CHARACTER SET latin1 
COLLATE latin1_bin DEFAULT NULL;
+ALTER TABLE TAB_COL_STATS MODIFY COLUMN_NAME varchar(767) CHARACTER SET latin1 
COLLATE latin1_bin NOT NULL;
+ALTER TABLE PART_COL_STATS MODIFY COLUMN_NAME varchar(767) CHARACTER SET 
latin1 COLLATE latin1_bin NOT NULL;
+
+UPDATE VERSION SET SCHEMA_VERSION='2.2.0', VERSION_COMMENT='Hive release 
version 2.2.0' where VER_ID=1;
+SELECT 'Finished upgrading MetaStore schema from 2.1.0 to 2.2.0' AS ' ';
+

http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/sql/mysql/upgrade-2.2.0-to-2.3.0.mysql.sql
--
diff --git 
a/standalone-metastore/src/main/sql/mysql/upgrade-2.2.0-to-2.3.0.mysql.sql 
b/standalone-metastore/src/main/sql/mysql/upgrade-2.2.0-to-2.3.0.mysql.sql
new file mode 100644
index 000..aa5110f
--- /dev/null
+++ b/standalone-metastore/src/main/sql/mysql/upgrade-2.2.0-to-2.3.0.mysql.sql
@@ -0,0 +1,8 @@
+SELECT 'Upgrading MetaStore schema from 2.2.0 to 2.3.0' AS ' ';
+
+--SOURCE 040-HIVE-16399.mysql.sql;
+CREATE INDEX TC_TXNID_INDEX ON TXN_COMPONENTS (TC_TXNID);
+
+UPDATE VERSION SET SCHEMA_VERSION='2.3.0', VERSION_COMMENT='Hive release 
version 2.3.0' where VER_ID=1;
+SELECT 'Finished upgrading MetaStore schema from 2.2.0 to 2.3.0' AS ' ';
+

http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql
--
diff --git 
a/standalone-metastore/src/main/sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql 
b/standalone-metastore/src/main/sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql
new file mode 100644
index 000..0a170f6
--- /dev/null
+++ b/standalone-metastore/src/main/sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql
@@ -0,0 +1,135 @@
+SELECT 'Upgrading MetaStore schema from 2.3.0 to 3.0.0' AS ' ';
+
+--SOURCE 041-HIVE-16556.mysql.sql;
+--
+-- Table structure for table METASTORE_DB_PROPERTIES
+--
+CREATE TABLE IF NOT EXISTS `METASTORE_DB_PROPERTIES` (
+  `PROPERTY_KEY` varchar(255) NOT NULL,
+  `PROPERTY_VALUE` varchar(1000) NOT NULL,
+  `DESCRIPTION` varchar(1000),
+ PRIMARY KEY(`PROPERTY_KEY`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+
+--SOURCE 042-HIVE-16575.mysql.sql;
+CREATE INDEX `CONSTRAINTS_CONSTRAINT_TYPE_INDEX` ON 

[49/50] [abbrv] hive git commit: HIVE-18588 Add categories to unit tests to divide them into unit and checkin tests.

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/c4d22858/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestFunctions.java
--
diff --git 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestFunctions.java
 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestFunctions.java
index f3b7ce5..1974399 100644
--- 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestFunctions.java
+++ 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestFunctions.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.hive.metastore.client;
 
 import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.annotation.MetastoreCheckinTest;
 import org.apache.hadoop.hive.metastore.api.AlreadyExistsException;
 import org.apache.hadoop.hive.metastore.api.Function;
 import org.apache.hadoop.hive.metastore.api.FunctionType;
@@ -39,6 +40,7 @@ import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 
@@ -50,6 +52,7 @@ import java.util.stream.Collectors;
  * Test class for IMetaStoreClient API. Testing the Function related functions.
  */
 @RunWith(Parameterized.class)
+@Category(MetastoreCheckinTest.class)
 public class TestFunctions {
   // Needed until there is no junit release with @BeforeParam, @AfterParam 
(junit 4.13)
   // 
https://github.com/junit-team/junit4/commit/1bf8438b65858565dbb64736bfe13aae9cfc1b5a

http://git-wip-us.apache.org/repos/asf/hive/blob/c4d22858/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetListIndexes.java
--
diff --git 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetListIndexes.java
 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetListIndexes.java
index ab3c00d..3b865b6 100644
--- 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetListIndexes.java
+++ 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetListIndexes.java
@@ -22,6 +22,7 @@ import java.util.Set;
 import java.util.stream.Collectors;
 
 import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.annotation.MetastoreCheckinTest;
 import org.apache.hadoop.hive.metastore.api.Database;
 import org.apache.hadoop.hive.metastore.api.Index;
 import org.apache.hadoop.hive.metastore.api.MetaException;
@@ -37,6 +38,7 @@ import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 
@@ -46,6 +48,7 @@ import com.google.common.collect.Lists;
  * Tests for getting and listing indexes.
  */
 @RunWith(Parameterized.class)
+@Category(MetastoreCheckinTest.class)
 public class TestGetListIndexes {
   // Needed until there is no junit release with @BeforeParam, @AfterParam 
(junit 4.13)
   // 
https://github.com/junit-team/junit4/commit/1bf8438b65858565dbb64736bfe13aae9cfc1b5a

http://git-wip-us.apache.org/repos/asf/hive/blob/c4d22858/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetPartitions.java
--
diff --git 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetPartitions.java
 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetPartitions.java
index 76a824a..2c7f3fb 100644
--- 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetPartitions.java
+++ 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestGetPartitions.java
@@ -23,6 +23,7 @@ import java.util.Set;
 import java.util.stream.Collectors;
 
 import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.annotation.MetastoreCheckinTest;
 import org.apache.hadoop.hive.metastore.api.Database;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
@@ -41,6 +42,7 @@ import org.junit.After;
 import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 
@@ -53,6 +55,7 @@ import static org.junit.Assert.fail;
  * API tests for HMS client's getPartitions methods.
  */
 @RunWith(Parameterized.class)
+@Category(MetastoreCheckinTest.class)
 public class TestGetPartitions {
 
   // Needed until 

[50/50] [abbrv] hive git commit: HIVE-18588 Add categories to unit tests to divide them into unit and checkin tests.

2018-02-12 Thread gates
HIVE-18588 Add categories to unit tests to divide them into unit and checkin 
tests.


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/c4d22858
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/c4d22858
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/c4d22858

Branch: refs/heads/standalone-metastore
Commit: c4d22858c997375ed0f578f9012b2747ff7d1169
Parents: a9e1aca
Author: Alan Gates 
Authored: Thu Feb 1 09:07:14 2018 -0800
Committer: Alan Gates 
Committed: Mon Feb 12 10:40:22 2018 -0800

--
 standalone-metastore/DEV-README | 23 ++
 standalone-metastore/pom.xml| 47 +++-
 .../hadoop/hive/common/TestStatsSetupConst.java |  4 ++
 .../ndv/fm/TestFMSketchSerialization.java   |  3 ++
 .../hive/common/ndv/hll/TestHLLNoBias.java  |  3 ++
 .../common/ndv/hll/TestHLLSerialization.java|  3 ++
 .../hive/common/ndv/hll/TestHyperLogLog.java|  3 ++
 .../common/ndv/hll/TestHyperLogLogDense.java|  3 ++
 .../common/ndv/hll/TestHyperLogLogSparse.java   |  3 ++
 .../common/ndv/hll/TestSparseEncodeHash.java|  3 ++
 .../hadoop/hive/metastore/TestAdminUser.java|  3 ++
 .../hive/metastore/TestAggregateStatsCache.java |  3 ++
 .../hadoop/hive/metastore/TestDeadline.java |  3 ++
 .../metastore/TestEmbeddedHiveMetaStore.java|  3 ++
 .../hadoop/hive/metastore/TestFilterHooks.java  |  3 ++
 .../hive/metastore/TestHiveAlterHandler.java|  3 ++
 .../metastore/TestHiveMetaStoreGetMetaConf.java |  3 ++
 .../TestHiveMetaStorePartitionSpecs.java|  3 ++
 .../metastore/TestHiveMetaStoreTimeout.java |  3 ++
 .../hive/metastore/TestHiveMetaStoreTxns.java   |  3 ++
 ...TestHiveMetaStoreWithEnvironmentContext.java |  3 ++
 .../hive/metastore/TestHiveMetastoreCli.java|  3 ++
 .../hive/metastore/TestLockRequestBuilder.java  |  3 ++
 .../hive/metastore/TestMarkPartition.java   |  3 ++
 .../hive/metastore/TestMarkPartitionRemote.java |  3 ++
 .../TestMetaStoreConnectionUrlHook.java |  3 ++
 .../TestMetaStoreEndFunctionListener.java   |  3 ++
 .../metastore/TestMetaStoreEventListener.java   |  3 ++
 .../TestMetaStoreEventListenerOnlyOnCommit.java |  3 ++
 .../TestMetaStoreEventListenerWithOldConf.java  |  3 ++
 .../metastore/TestMetaStoreInitListener.java|  3 ++
 .../metastore/TestMetaStoreListenersError.java  |  3 ++
 .../metastore/TestMetaStoreSchemaFactory.java   |  3 ++
 .../hive/metastore/TestMetaStoreSchemaInfo.java |  3 ++
 .../hadoop/hive/metastore/TestObjectStore.java  |  3 ++
 .../metastore/TestObjectStoreInitRetry.java |  3 ++
 .../hadoop/hive/metastore/TestOldSchema.java|  3 ++
 .../TestPartitionNameWhitelistValidation.java   |  3 ++
 .../hive/metastore/TestRawStoreProxy.java   |  3 ++
 .../hive/metastore/TestRemoteHiveMetaStore.java |  3 ++
 .../TestRemoteHiveMetaStoreIpAddress.java   |  3 ++
 .../TestRemoteUGIHiveMetaStoreIpAddress.java|  3 ++
 .../TestRetriesInRetryingHMSHandler.java|  3 ++
 .../hive/metastore/TestRetryingHMSHandler.java  |  3 ++
 .../metastore/TestSetUGIOnBothClientServer.java |  3 ++
 .../hive/metastore/TestSetUGIOnOnlyClient.java  |  3 ++
 .../hive/metastore/TestSetUGIOnOnlyServer.java  |  3 ++
 .../annotation/MetastoreCheckinTest.java| 25 +++
 .../metastore/annotation/MetastoreTest.java | 24 ++
 .../metastore/annotation/MetastoreUnitTest.java | 25 +++
 .../hive/metastore/cache/TestCachedStore.java   |  4 +-
 .../client/TestAddAlterDropIndexes.java |  3 ++
 .../metastore/client/TestAddPartitions.java |  3 ++
 .../metastore/client/TestAlterPartitions.java   |  3 ++
 .../metastore/client/TestAppendPartitions.java  |  3 ++
 .../hive/metastore/client/TestDatabases.java|  3 ++
 .../metastore/client/TestDropPartitions.java|  3 ++
 .../hive/metastore/client/TestFunctions.java|  3 ++
 .../metastore/client/TestGetListIndexes.java|  3 ++
 .../metastore/client/TestGetPartitions.java |  3 ++
 .../metastore/client/TestListPartitions.java|  3 ++
 .../TestTablesCreateDropAlterTruncate.java  |  3 ++
 .../metastore/client/TestTablesGetExists.java   |  3 ++
 .../hive/metastore/client/TestTablesList.java   |  3 ++
 .../hive/metastore/conf/TestMetastoreConf.java  |  3 ++
 .../TestDataSourceProviderFactory.java  |  3 ++
 .../json/TestJSONMessageDeserializer.java   |  3 ++
 .../hive/metastore/metrics/TestMetrics.java |  3 ++
 .../metastore/txn/TestTxnHandlerNegative.java   |  3 ++
 .../hadoop/hive/metastore/txn/TestTxnUtils.java |  3 ++
 .../hive/metastore/utils/TestHdfsUtils.java |  3 ++
 .../metastore/utils/TestMetaStoreUtils.java |  3 ++
 72 files changed, 345 insertions(+), 2 deletions(-)
--
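
With the categories in place, the fast unit tests and the longer check-in suite can be selected separately; the pom.xml and DEV-README changes listed above presumably wire the groups into the build, but are not reproduced in this digest. The per-file hunks quoted earlier tag existing metastore client tests with MetastoreCheckinTest; the unit-test flavor is symmetric. A small illustrative example, not taken from the patch:

import org.apache.hadoop.hive.metastore.annotation.MetastoreUnitTest;
import org.junit.Assert;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical test class showing the intended use of the new category marker.
@Category(MetastoreUnitTest.class)
public class ExampleFastMetastoreTest {
  @Test
  public void runsWithoutAMetastore() {
    Assert.assertEquals(4, 2 + 2);    // trivially fast: no metastore instance needed
  }
}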



[40/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
index d5e3527..4659b79 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
+++ 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
@@ -404,6 +404,34 @@ import org.slf4j.LoggerFactory;
 
 public WMCreateOrDropTriggerToPoolMappingResponse 
create_or_drop_wm_trigger_to_pool_mapping(WMCreateOrDropTriggerToPoolMappingRequest
 request) throws AlreadyExistsException, NoSuchObjectException, 
InvalidObjectException, MetaException, org.apache.thrift.TException;
 
+public void create_ischema(ISchema schema) throws AlreadyExistsException, 
NoSuchObjectException, MetaException, org.apache.thrift.TException;
+
+public void alter_ischema(String schemaName, ISchema newSchema) throws 
NoSuchObjectException, MetaException, org.apache.thrift.TException;
+
+public ISchema get_ischema(String schemaName) throws 
NoSuchObjectException, MetaException, org.apache.thrift.TException;
+
+public void drop_ischema(String schemaName) throws NoSuchObjectException, 
InvalidOperationException, MetaException, org.apache.thrift.TException;
+
+public void add_schema_version(SchemaVersion schemaVersion) throws 
AlreadyExistsException, NoSuchObjectException, MetaException, 
org.apache.thrift.TException;
+
+public SchemaVersion get_schema_version(String schemaName, int version) 
throws NoSuchObjectException, MetaException, org.apache.thrift.TException;
+
+public SchemaVersion get_schema_latest_version(String schemaName) throws 
NoSuchObjectException, MetaException, org.apache.thrift.TException;
+
+public List<SchemaVersion> get_schema_all_versions(String schemaName) 
throws NoSuchObjectException, MetaException, org.apache.thrift.TException;
+
+public void drop_schema_version(String schemaName, int version) throws 
NoSuchObjectException, MetaException, org.apache.thrift.TException;
+
+public FindSchemasByColsResp get_schemas_by_cols(FindSchemasByColsRqst 
rqst) throws MetaException, org.apache.thrift.TException;
+
+public void map_schema_version_to_serde(String schemaName, int version, 
String serdeName) throws NoSuchObjectException, MetaException, 
org.apache.thrift.TException;
+
+public void set_schema_version_state(String schemaName, int version, 
SchemaVersionState state) throws NoSuchObjectException, 
InvalidOperationException, MetaException, org.apache.thrift.TException;
+
+public void add_serde(SerDeInfo serde) throws AlreadyExistsException, 
MetaException, org.apache.thrift.TException;
+
+public SerDeInfo get_serde(String serdeName) throws NoSuchObjectException, 
MetaException, org.apache.thrift.TException;
+
   }
 
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public interface 
AsyncIface extends com.facebook.fb303.FacebookService .AsyncIface {
@@ -770,6 +798,34 @@ import org.slf4j.LoggerFactory;
 
 public void 
create_or_drop_wm_trigger_to_pool_mapping(WMCreateOrDropTriggerToPoolMappingRequest
 request, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
 
+public void create_ischema(ISchema schema, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void alter_ischema(String schemaName, ISchema newSchema, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void get_ischema(String schemaName, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void drop_ischema(String schemaName, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void add_schema_version(SchemaVersion schemaVersion, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void get_schema_version(String schemaName, int version, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void get_schema_latest_version(String schemaName, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void get_schema_all_versions(String schemaName, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void drop_schema_version(String schemaName, int 
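
Taken together, the synchronous Iface additions above define the new schema-registry surface: schema objects (create/alter/get/drop_ischema), schema versions (add, get, drop, state changes), serde registration (add_serde, get_serde) and the mapping between versions and serdes. A hedged sketch of a caller driving those exact signatures follows; how the Iface stub and the ISchema/SchemaVersion/SerDeInfo arguments are obtained is assumed (for example a ThriftHiveMetastore.Client over an open transport and the new builder classes), and getName() is the standard Thrift-generated getter rather than something quoted from this patch:

import java.util.List;

import org.apache.hadoop.hive.metastore.api.ISchema;
import org.apache.hadoop.hive.metastore.api.SchemaVersion;
import org.apache.hadoop.hive.metastore.api.SchemaVersionState;
import org.apache.hadoop.hive.metastore.api.SerDeInfo;
import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
import org.apache.thrift.TException;

public class SchemaRegistryCallsSketch {
  // Walks the new RPCs using the signatures declared in the Iface above.
  // All declared metastore exceptions extend TException, so one throws clause suffices.
  static void exercise(ThriftHiveMetastore.Iface ms, ISchema schema,
                       SchemaVersion version1, SerDeInfo serde) throws TException {
    ms.create_ischema(schema);                                 // register the schema object
    ms.add_schema_version(version1);                           // add version 1 of that schema
    ms.add_serde(serde);                                       // register a serde by name
    ms.map_schema_version_to_serde(schema.getName(), 1, serde.getName());
    ms.set_schema_version_state(schema.getName(), 1, SchemaVersionState.ENABLED);

    SchemaVersion latest = ms.get_schema_latest_version(schema.getName());
    List<SchemaVersion> all = ms.get_schema_all_versions(schema.getName());
    SerDeInfo roundTripped = ms.get_serde(serde.getName());    // read back what was registered

    ms.drop_schema_version(schema.getName(), 1);               // clean up
    ms.drop_ischema(schema.getName());
  }
}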

[43/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
--
diff --git a/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h 
b/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
index 4c09bc8..6346ede 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
@@ -132,6 +132,59 @@ struct EventRequestType {
 
 extern const std::map<int, const char*> _EventRequestType_VALUES_TO_NAMES;
 
+struct SerdeType {
+  enum type {
+HIVE = 1,
+SCHEMA_REGISTRY = 2
+  };
+};
+
+extern const std::map<int, const char*> _SerdeType_VALUES_TO_NAMES;
+
+struct SchemaType {
+  enum type {
+HIVE = 1,
+AVRO = 2
+  };
+};
+
+extern const std::map<int, const char*> _SchemaType_VALUES_TO_NAMES;
+
+struct SchemaCompatibility {
+  enum type {
+NONE = 1,
+BACKWARD = 2,
+FORWARD = 3,
+BOTH = 4
+  };
+};
+
+extern const std::map<int, const char*> _SchemaCompatibility_VALUES_TO_NAMES;
+
+struct SchemaValidation {
+  enum type {
+LATEST = 1,
+ALL = 2
+  };
+};
+
+extern const std::map<int, const char*> _SchemaValidation_VALUES_TO_NAMES;
+
+struct SchemaVersionState {
+  enum type {
+INITIATED = 1,
+START_REVIEW = 2,
+CHANGES_REQUIRED = 3,
+REVIEWED = 4,
+ENABLED = 5,
+DISABLED = 6,
+ARCHIVED = 7,
+DELETED = 8
+  };
+};
+
+extern const std::map<int, const char*> _SchemaVersionState_VALUES_TO_NAMES;
+
 struct FunctionType {
   enum type {
 JAVA = 1
@@ -548,6 +601,16 @@ class WMCreateOrDropTriggerToPoolMappingRequest;
 
 class WMCreateOrDropTriggerToPoolMappingResponse;
 
+class ISchema;
+
+class SchemaVersion;
+
+class FindSchemasByColsRqst;
+
+class FindSchemasByColsRespEntry;
+
+class FindSchemasByColsResp;
+
 class MetaException;
 
 class UnknownTableException;
@@ -2088,10 +2151,14 @@ inline std::ostream& operator<<(std::ostream& out, 
const Database& obj)
 }
 
 typedef struct _SerDeInfo__isset {
-  _SerDeInfo__isset() : name(false), serializationLib(false), 
parameters(false) {}
+  _SerDeInfo__isset() : name(false), serializationLib(false), 
parameters(false), description(false), serializerClass(false), 
deserializerClass(false), serdeType(false) {}
   bool name :1;
   bool serializationLib :1;
   bool parameters :1;
+  bool description :1;
+  bool serializerClass :1;
+  bool deserializerClass :1;
+  bool serdeType :1;
 } _SerDeInfo__isset;
 
 class SerDeInfo {
@@ -2099,13 +2166,17 @@ class SerDeInfo {
 
   SerDeInfo(const SerDeInfo&);
   SerDeInfo& operator=(const SerDeInfo&);
-  SerDeInfo() : name(), serializationLib() {
+  SerDeInfo() : name(), serializationLib(), description(), serializerClass(), 
deserializerClass(), serdeType((SerdeType::type)0) {
   }
 
   virtual ~SerDeInfo() throw();
   std::string name;
   std::string serializationLib;
  std::map<std::string, std::string>  parameters;
+  std::string description;
+  std::string serializerClass;
+  std::string deserializerClass;
+  SerdeType::type serdeType;
 
   _SerDeInfo__isset __isset;
 
@@ -2115,6 +2186,14 @@ class SerDeInfo {
 
  void __set_parameters(const std::map<std::string, std::string> & val);
 
+  void __set_description(const std::string& val);
+
+  void __set_serializerClass(const std::string& val);
+
+  void __set_deserializerClass(const std::string& val);
+
+  void __set_serdeType(const SerdeType::type val);
+
   bool operator == (const SerDeInfo & rhs) const
   {
 if (!(name == rhs.name))
@@ -2123,6 +2202,22 @@ class SerDeInfo {
   return false;
 if (!(parameters == rhs.parameters))
   return false;
+if (__isset.description != rhs.__isset.description)
+  return false;
+else if (__isset.description && !(description == rhs.description))
+  return false;
+if (__isset.serializerClass != rhs.__isset.serializerClass)
+  return false;
+else if (__isset.serializerClass && !(serializerClass == 
rhs.serializerClass))
+  return false;
+if (__isset.deserializerClass != rhs.__isset.deserializerClass)
+  return false;
+else if (__isset.deserializerClass && !(deserializerClass == 
rhs.deserializerClass))
+  return false;
+if (__isset.serdeType != rhs.__isset.serdeType)
+  return false;
+else if (__isset.serdeType && !(serdeType == rhs.serdeType))
+  return false;
 return true;
   }
  bool operator != (const SerDeInfo &rhs) const {
@@ -10898,6 +10993,372 @@ inline std::ostream& operator<<(std::ostream& out, 
const WMCreateOrDropTriggerTo
   return out;
 }
 
+typedef struct _ISchema__isset {
+  _ISchema__isset() : schemaType(false), name(false), dbName(false), 
compatibility(false), validationLevel(false), canEvolve(false), 
schemaGroup(false), description(false) {}
+  bool schemaType :1;
+  bool name :1;
+  bool dbName :1;
+  bool compatibility :1;
+  bool 

[34/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreAlterISchemaEvent.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreAlterISchemaEvent.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreAlterISchemaEvent.java
new file mode 100644
index 000..3df3780
--- /dev/null
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreAlterISchemaEvent.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore.events;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.hive.metastore.IHMSHandler;
+import org.apache.hadoop.hive.metastore.api.ISchema;
+
+@InterfaceAudience.Public
+@InterfaceStability.Stable
+public class PreAlterISchemaEvent extends PreEventContext {
+
+  private final ISchema oldSchema, newSchema;
+
+  public PreAlterISchemaEvent(IHMSHandler handler, ISchema oldSchema, ISchema 
newSchema) {
+super(PreEventType.ALTER_ISCHEMA, handler);
+this.oldSchema = oldSchema;
+this.newSchema = newSchema;
+  }
+
+  public ISchema getOldSchema() {
+return oldSchema;
+  }
+
+  public ISchema getNewSchema() {
+return newSchema;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreAlterSchemaVersionEvent.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreAlterSchemaVersionEvent.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreAlterSchemaVersionEvent.java
new file mode 100644
index 000..63ddb3b
--- /dev/null
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreAlterSchemaVersionEvent.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore.events;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.hive.metastore.IHMSHandler;
+import org.apache.hadoop.hive.metastore.api.SchemaVersion;
+
+@InterfaceAudience.Public
+@InterfaceStability.Stable
+public class PreAlterSchemaVersionEvent extends PreEventContext {
+
+  private final SchemaVersion oldSchemaVersion, newSchemaVersion;
+
+  public PreAlterSchemaVersionEvent(IHMSHandler handler, SchemaVersion 
oldSchemaVersion,
+SchemaVersion newSchemaVersion) {
+super(PreEventType.ALTER_SCHEMA_VERSION, handler);
+this.oldSchemaVersion = oldSchemaVersion;
+this.newSchemaVersion = newSchemaVersion;
+  }
+
+  public SchemaVersion getOldSchemaVersion() {
+return oldSchemaVersion;
+  }
+
+  public SchemaVersion getNewSchemaVersion() {
+return newSchemaVersion;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/events/PreCreateISchemaEvent.java
--
diff --git 
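
The two pre-events above carry the old and new objects for an in-flight alter, mirroring the existing table and partition pre-events; the matching PreEventType values (ALTER_ISCHEMA, ALTER_SCHEMA_VERSION) are added to PreEventContext by this patch. A hedged sketch of a pre-event listener consuming them follows; the MetaStorePreEventListener base class (its Configuration constructor and onEvent signature) and ISchema.getName() are assumed from the existing listener API rather than quoted here:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.metastore.MetaStorePreEventListener;
import org.apache.hadoop.hive.metastore.api.InvalidOperationException;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
import org.apache.hadoop.hive.metastore.events.PreAlterISchemaEvent;
import org.apache.hadoop.hive.metastore.events.PreEventContext;

// Hypothetical listener: rejects renaming an existing schema object.
public class SchemaGuardPreListener extends MetaStorePreEventListener {
  public SchemaGuardPreListener(Configuration config) {
    super(config);
  }

  @Override
  public void onEvent(PreEventContext context)
      throws MetaException, NoSuchObjectException, InvalidOperationException {
    if (context.getEventType() == PreEventContext.PreEventType.ALTER_ISCHEMA) {
      PreAlterISchemaEvent event = (PreAlterISchemaEvent) context;
      if (!event.getOldSchema().getName().equals(event.getNewSchema().getName())) {
        throw new InvalidOperationException("renaming an ISchema is not allowed here");
      }
    }
  }
}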

[13/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/groupingset_high_columns.q.out
--
diff --git a/ql/src/test/results/clientpositive/groupingset_high_columns.q.out 
b/ql/src/test/results/clientpositive/groupingset_high_columns.q.out
new file mode 100644
index 000..3456719
--- /dev/null
+++ b/ql/src/test/results/clientpositive/groupingset_high_columns.q.out
@@ -0,0 +1,1169 @@
+PREHOOK: query: create table facts (val string)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@facts
+POSTHOOK: query: create table facts (val string)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@facts
+PREHOOK: query: insert into facts values 
('abcdefghijklmnopqrstuvwxyz0123456789')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@facts
+POSTHOOK: query: insert into facts values 
('abcdefghijklmnopqrstuvwxyz0123456789')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@facts
+POSTHOOK: Lineage: facts.val SCRIPT []
+PREHOOK: query: drop table groupingsets32
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table groupingsets32
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: drop table groupingsets33
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table groupingsets33
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: drop table groupingsets32a
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table groupingsets32a
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: drop table groupingsets33a
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table groupingsets33a
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: create table groupingsets32 as 
+select 
+c00,c01,c02,c03,c04,c05,c06,c07,c08,c09, 
+c10,c11,c12,c13,c14,c15,c16,c17,c18,c19, 
+c20,c21,c22,c23,c24,c25,c26,c27,c28,c29, 
+c30,c31 
+,count(*) as n from ( 
+select 
+substring(val,01,1) as c00, substring(val,02,1) as c01, substring(val,03,1) as 
c02,substring(val,04,1) as c03,substring(val,05,1) as c04,substring(val,06,1) 
as c05,substring(val,07,1) as c06, substring(val,08,1) as 
c07,substring(val,09,1) as c08,substring(val,10,1) as c09, 
+substring(val,11,1) as c10, substring(val,12,1) as c11, substring(val,13,1) as 
c12,substring(val,14,1) as c13,substring(val,15,1) as c14,substring(val,16,1) 
as c15,substring(val,17,1) as c16, substring(val,18,1) as 
c17,substring(val,19,1) as c18,substring(val,20,1) as c19, 
+substring(val,21,1) as c20, substring(val,22,1) as c21, substring(val,23,1) as 
c22,substring(val,24,1) as c23,substring(val,25,1) as c24,substring(val,26,1) 
as c25,substring(val,27,1) as c26, substring(val,28,1) as 
c27,substring(val,29,1) as c28,substring(val,30,1) as c29, 
+substring(val,31,1) as c30,substring(val,32,1) as c31 
+from facts ) x 
+group by 
+c00,c01,c02,c03,c04,c05,c06,c07,c08,c09, 
+c10,c11,c12,c13,c14,c15,c16,c17,c18,c19, 
+c20,c21,c22,c23,c24,c25,c26,c27,c28,c29, 
+c30,c31 
+grouping sets ( 
+c00,c01,c02,c03,c04,c05,c06,c07,c08,c09, 
+c10,c11,c12,c13,c14,c15,c16,c17,c18,c19, 
+c20,c21,c22,c23,c24,c25,c26,c27,c28,c29, 
+c30,c31 
+)
+PREHOOK: type: CREATETABLE_AS_SELECT
+PREHOOK: Input: default@facts
+PREHOOK: Output: database:default
+PREHOOK: Output: default@groupingsets32
+POSTHOOK: query: create table groupingsets32 as 
+select 
+c00,c01,c02,c03,c04,c05,c06,c07,c08,c09, 
+c10,c11,c12,c13,c14,c15,c16,c17,c18,c19, 
+c20,c21,c22,c23,c24,c25,c26,c27,c28,c29, 
+c30,c31 
+,count(*) as n from ( 
+select 
+substring(val,01,1) as c00, substring(val,02,1) as c01, substring(val,03,1) as 
c02,substring(val,04,1) as c03,substring(val,05,1) as c04,substring(val,06,1) 
as c05,substring(val,07,1) as c06, substring(val,08,1) as 
c07,substring(val,09,1) as c08,substring(val,10,1) as c09, 
+substring(val,11,1) as c10, substring(val,12,1) as c11, substring(val,13,1) as 
c12,substring(val,14,1) as c13,substring(val,15,1) as c14,substring(val,16,1) 
as c15,substring(val,17,1) as c16, substring(val,18,1) as 
c17,substring(val,19,1) as c18,substring(val,20,1) as c19, 
+substring(val,21,1) as c20, substring(val,22,1) as c21, substring(val,23,1) as 
c22,substring(val,24,1) as c23,substring(val,25,1) as c24,substring(val,26,1) 
as c25,substring(val,27,1) as c26, substring(val,28,1) as 
c27,substring(val,29,1) as c28,substring(val,30,1) as c29, 
+substring(val,31,1) as c30,substring(val,32,1) as c31 
+from facts ) x 
+group by 
+c00,c01,c02,c03,c04,c05,c06,c07,c08,c09, 
+c10,c11,c12,c13,c14,c15,c16,c17,c18,c19, 
+c20,c21,c22,c23,c24,c25,c26,c27,c28,c29, 
+c30,c31 
+grouping sets ( 
+c00,c01,c02,c03,c04,c05,c06,c07,c08,c09, 
+c10,c11,c12,c13,c14,c15,c16,c17,c18,c19, 
+c20,c21,c22,c23,c24,c25,c26,c27,c28,c29, 
+c30,c31 
+)
+POSTHOOK: type: CREATETABLE_AS_SELECT
+POSTHOOK: Input: default@facts
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@groupingsets32
+POSTHOOK: Lineage: groupingsets32.c00 

[31/50] [abbrv] hive git commit: HIVE-18668: Really shade guava in ql (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2018-02-12 Thread gates
HIVE-18668: Really shade guava in ql (Zoltan Haindrich reviewed by Ashutosh 
Chauhan)

Signed-off-by: Zoltan Haindrich 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/91889089
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/91889089
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/91889089

Branch: refs/heads/standalone-metastore
Commit: 91889089c77c231aeead606ae89f580a80b7ada8
Parents: 2338846
Author: Zoltan Haindrich 
Authored: Mon Feb 12 10:30:57 2018 +0100
Committer: Zoltan Haindrich 
Committed: Mon Feb 12 10:30:57 2018 +0100

--
 ql/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/91889089/ql/pom.xml
--
diff --git a/ql/pom.xml b/ql/pom.xml
index 187b701..2d1034c 100644
--- a/ql/pom.xml
+++ b/ql/pom.xml
@@ -907,7 +907,7 @@
   io.airlift:aircompressor
   org.codehaus.jackson:jackson-core-asl
   org.codehaus.jackson:jackson-mapper-asl
-  com.google.guava:guava
+  com.google.common:guava-common
   net.sf.opencsv:opencsv
   org.apache.hive:hive-spark-client
   org.apache.hive:hive-storage-api



[23/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/sql/mssql/hive-schema-1.2.0.mssql.sql
--
diff --git 
a/standalone-metastore/src/main/sql/mssql/hive-schema-1.2.0.mssql.sql 
b/standalone-metastore/src/main/sql/mssql/hive-schema-1.2.0.mssql.sql
new file mode 100644
index 000..0bbd647
--- /dev/null
+++ b/standalone-metastore/src/main/sql/mssql/hive-schema-1.2.0.mssql.sql
@@ -0,0 +1,947 @@
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements.  See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+--
+-- DataNucleus SchemaTool (ran at 08/04/2014 15:10:15)
+--
+-- Complete schema required for the following classes:-
+-- org.apache.hadoop.hive.metastore.model.MColumnDescriptor
+-- org.apache.hadoop.hive.metastore.model.MDBPrivilege
+-- org.apache.hadoop.hive.metastore.model.MDatabase
+-- org.apache.hadoop.hive.metastore.model.MDelegationToken
+-- org.apache.hadoop.hive.metastore.model.MFieldSchema
+-- org.apache.hadoop.hive.metastore.model.MFunction
+-- org.apache.hadoop.hive.metastore.model.MGlobalPrivilege
+-- org.apache.hadoop.hive.metastore.model.MIndex
+-- org.apache.hadoop.hive.metastore.model.MMasterKey
+-- org.apache.hadoop.hive.metastore.model.MOrder
+-- org.apache.hadoop.hive.metastore.model.MPartition
+-- org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege
+-- org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics
+-- org.apache.hadoop.hive.metastore.model.MPartitionEvent
+-- org.apache.hadoop.hive.metastore.model.MPartitionPrivilege
+-- org.apache.hadoop.hive.metastore.model.MResourceUri
+-- org.apache.hadoop.hive.metastore.model.MRole
+-- org.apache.hadoop.hive.metastore.model.MRoleMap
+-- org.apache.hadoop.hive.metastore.model.MSerDeInfo
+-- org.apache.hadoop.hive.metastore.model.MStorageDescriptor
+-- org.apache.hadoop.hive.metastore.model.MStringList
+-- org.apache.hadoop.hive.metastore.model.MTable
+-- org.apache.hadoop.hive.metastore.model.MTableColumnPrivilege
+-- org.apache.hadoop.hive.metastore.model.MTableColumnStatistics
+-- org.apache.hadoop.hive.metastore.model.MTablePrivilege
+-- org.apache.hadoop.hive.metastore.model.MType
+-- org.apache.hadoop.hive.metastore.model.MVersionTable
+--
+-- Table MASTER_KEYS for classes 
[org.apache.hadoop.hive.metastore.model.MMasterKey]
+CREATE TABLE MASTER_KEYS
+(
+KEY_ID int NOT NULL,
+MASTER_KEY nvarchar(767) NULL
+);
+
+ALTER TABLE MASTER_KEYS ADD CONSTRAINT MASTER_KEYS_PK PRIMARY KEY (KEY_ID);
+
+-- Table IDXS for classes [org.apache.hadoop.hive.metastore.model.MIndex]
+CREATE TABLE IDXS
+(
+INDEX_ID bigint NOT NULL,
+CREATE_TIME int NOT NULL,
+DEFERRED_REBUILD bit NOT NULL,
+INDEX_HANDLER_CLASS nvarchar(4000) NULL,
+INDEX_NAME nvarchar(128) NULL,
+INDEX_TBL_ID bigint NULL,
+LAST_ACCESS_TIME int NOT NULL,
+ORIG_TBL_ID bigint NULL,
+SD_ID bigint NULL
+);
+
+ALTER TABLE IDXS ADD CONSTRAINT IDXS_PK PRIMARY KEY (INDEX_ID);
+
+-- Table PART_COL_STATS for classes 
[org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics]
+CREATE TABLE PART_COL_STATS
+(
+CS_ID bigint NOT NULL,
+AVG_COL_LEN float NULL,
+"COLUMN_NAME" nvarchar(128) NOT NULL,
+COLUMN_TYPE nvarchar(128) NOT NULL,
+DB_NAME nvarchar(128) NOT NULL,
+BIG_DECIMAL_HIGH_VALUE nvarchar(255) NULL,
+BIG_DECIMAL_LOW_VALUE nvarchar(255) NULL,
+DOUBLE_HIGH_VALUE float NULL,
+DOUBLE_LOW_VALUE float NULL,
+LAST_ANALYZED bigint NOT NULL,
+LONG_HIGH_VALUE bigint NULL,
+LONG_LOW_VALUE bigint NULL,
+MAX_COL_LEN bigint NULL,
+NUM_DISTINCTS bigint NULL,
+NUM_FALSES bigint NULL,
+NUM_NULLS bigint NOT NULL,
+NUM_TRUES bigint NULL,
+PART_ID bigint NULL,
+PARTITION_NAME nvarchar(767) NOT NULL,
+"TABLE_NAME" nvarchar(128) NOT NULL
+);
+
+ALTER TABLE PART_COL_STATS ADD CONSTRAINT PART_COL_STATS_PK PRIMARY KEY 
(CS_ID);
+
+CREATE INDEX PCS_STATS_IDX ON PART_COL_STATS 
(DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME);
+
+-- Table 

[48/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
HIVE-17990 Add Thrift and DB storage for Schema Registry objects


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/a9e1acaf
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/a9e1acaf
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/a9e1acaf

Branch: refs/heads/standalone-metastore
Commit: a9e1acaf37f52a8c69ab296332abf80e701568b4
Parents: 887233d
Author: Alan Gates 
Authored: Thu Oct 19 16:49:38 2017 -0700
Committer: Alan Gates 
Committed: Mon Feb 12 09:45:03 2018 -0800

--
 .../listener/DummyRawStoreFailEvent.java|73 +
 standalone-metastore/pom.xml| 3 +-
 .../gen/thrift/gen-cpp/ThriftHiveMetastore.cpp  | 26536 ++--
 .../gen/thrift/gen-cpp/ThriftHiveMetastore.h|  2427 +-
 .../ThriftHiveMetastore_server.skeleton.cpp |70 +
 .../gen/thrift/gen-cpp/hive_metastore_types.cpp |  5204 +--
 .../gen/thrift/gen-cpp/hive_metastore_types.h   |   465 +-
 .../metastore/api/FindSchemasByColsResp.java|   449 +
 .../api/FindSchemasByColsRespEntry.java |   497 +
 .../metastore/api/FindSchemasByColsRqst.java|   605 +
 .../hadoop/hive/metastore/api/ISchema.java  |  1162 +
 .../hive/metastore/api/SchemaCompatibility.java |51 +
 .../hadoop/hive/metastore/api/SchemaType.java   |45 +
 .../hive/metastore/api/SchemaValidation.java|45 +
 .../hive/metastore/api/SchemaVersion.java   |  1407 +
 .../hive/metastore/api/SchemaVersionState.java  |63 +
 .../hadoop/hive/metastore/api/SerDeInfo.java|   443 +-
 .../hadoop/hive/metastore/api/SerdeType.java|45 +
 .../hive/metastore/api/ThriftHiveMetastore.java | 19342 ++--
 .../gen-php/metastore/ThriftHiveMetastore.php   | 28016 ++---
 .../src/gen/thrift/gen-php/metastore/Types.php  |  1026 +
 .../hive_metastore/ThriftHiveMetastore-remote   |98 +
 .../hive_metastore/ThriftHiveMetastore.py   |  5322 +++-
 .../gen/thrift/gen-py/hive_metastore/ttypes.py  |   739 +-
 .../gen/thrift/gen-rb/hive_metastore_types.rb   |   186 +-
 .../gen/thrift/gen-rb/thrift_hive_metastore.rb  |   932 +
 .../hadoop/hive/metastore/HiveMetaStore.java|   476 +-
 .../hive/metastore/HiveMetaStoreClient.java |73 +-
 .../hadoop/hive/metastore/IMetaStoreClient.java |   158 +
 .../hive/metastore/MetaStoreEventListener.java  |26 +
 .../metastore/MetaStoreListenerNotifier.java|42 +
 .../hadoop/hive/metastore/ObjectStore.java  |   408 +-
 .../apache/hadoop/hive/metastore/RawStore.java  |   135 +
 .../hive/metastore/cache/CachedStore.java   |75 +
 .../client/builder/DatabaseBuilder.java | 2 +-
 .../client/builder/ISchemaBuilder.java  |93 +
 .../client/builder/SchemaVersionBuilder.java|90 +
 .../client/builder/SerdeAndColsBuilder.java |   124 +
 .../builder/StorageDescriptorBuilder.java   |57 +-
 .../metastore/events/AddSchemaVersionEvent.java |40 +
 .../metastore/events/AlterISchemaEvent.java |45 +
 .../events/AlterSchemaVersionEvent.java |46 +
 .../metastore/events/CreateISchemaEvent.java|39 +
 .../hive/metastore/events/DropISchemaEvent.java |39 +
 .../events/DropSchemaVersionEvent.java  |40 +
 .../events/PreAddSchemaVersionEvent.java|39 +
 .../metastore/events/PreAlterISchemaEvent.java  |44 +
 .../events/PreAlterSchemaVersionEvent.java  |45 +
 .../metastore/events/PreCreateISchemaEvent.java |39 +
 .../metastore/events/PreDropISchemaEvent.java   |39 +
 .../events/PreDropSchemaVersionEvent.java   |39 +
 .../hive/metastore/events/PreEventContext.java  |10 +-
 .../metastore/events/PreReadISchemaEvent.java   |39 +
 .../events/PreReadhSchemaVersionEvent.java  |36 +
 .../hive/metastore/messaging/EventMessage.java  | 8 +-
 .../metastore/messaging/MessageFactory.java | 7 +
 .../hadoop/hive/metastore/model/MISchema.java   |   107 +
 .../hive/metastore/model/MSchemaVersion.java|   127 +
 .../hadoop/hive/metastore/model/MSerDeInfo.java |48 +-
 .../main/resources/datanucleus-log4j.properties |17 +
 .../src/main/resources/package.jdo  |77 +
 .../main/sql/derby/hive-schema-3.0.0.derby.sql  |30 +-
 .../sql/derby/upgrade-2.3.0-to-3.0.0.derby.sql  |34 +
 .../main/sql/mssql/hive-schema-3.0.0.mssql.sql  |33 +-
 .../sql/mssql/upgrade-2.3.0-to-3.0.0.mssql.sql  |33 +
 .../main/sql/mysql/hive-schema-3.0.0.mysql.sql  |38 +
 .../sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql  |38 +
 .../sql/oracle/hive-schema-3.0.0.oracle.sql |33 +-
 .../oracle/upgrade-2.3.0-to-3.0.0.oracle.sql|34 +
 .../sql/postgres/hive-schema-3.0.0.postgres.sql |34 +-
 .../upgrade-2.3.0-to-3.0.0.postgres.sql |34 +
 .../src/main/thrift/hive_metastore.thrift   |   112 +-
 
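The file summary above is dominated by the new Schema Registry objects (ISchema, SchemaVersion, the SerDeInfo extensions) and the client plumbing around them (IMetaStoreClient, HiveMetaStoreClient, ObjectStore/RawStore, CachedStore, plus the schema scripts). A rough sketch of how a caller might exercise that surface is below; the Java method names on IMetaStoreClient are assumptions inferred from the Thrift operations added in this patch set (create_ischema, add_schema_version, get_schema_latest_version), and the getters/setters are assumed from the usual Thrift javabean conventions, not copied from the actual interface.

// Hedged sketch only: the IMetaStoreClient method names and the bean accessors
// below are assumed, not verified against the patched sources.
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.ISchema;
import org.apache.hadoop.hive.metastore.api.SchemaVersion;

public class SchemaRegistryClientSketch {
  // Registers a schema and looks up its latest version through the metastore client.
  static void registerAndFetch(IMetaStoreClient client, ISchema schema,
                               SchemaVersion firstVersion) throws Exception {
    client.createISchema(schema);             // assumed wrapper over create_ischema
    client.addSchemaVersion(firstVersion);    // assumed wrapper over add_schema_version
    SchemaVersion latest =
        client.getSchemaLatestVersion(schema.getName()); // assumed wrapper over get_schema_latest_version
    System.out.println("latest version = " + latest.getVersion());
  }
}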

[41/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SchemaVersion.java
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SchemaVersion.java
 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SchemaVersion.java
new file mode 100644
index 000..db964b0
--- /dev/null
+++ 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SchemaVersion.java
@@ -0,0 +1,1407 @@
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.hadoop.hive.metastore.api;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+@org.apache.hadoop.classification.InterfaceAudience.Public @org.apache.hadoop.classification.InterfaceStability.Stable public class SchemaVersion implements org.apache.thrift.TBase<SchemaVersion, SchemaVersion._Fields>, java.io.Serializable, Cloneable, Comparable<SchemaVersion> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("SchemaVersion");
+
+  private static final org.apache.thrift.protocol.TField 
SCHEMA_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("schemaName", 
org.apache.thrift.protocol.TType.STRING, (short)1);
+  private static final org.apache.thrift.protocol.TField VERSION_FIELD_DESC = 
new org.apache.thrift.protocol.TField("version", 
org.apache.thrift.protocol.TType.I32, (short)2);
+  private static final org.apache.thrift.protocol.TField CREATED_AT_FIELD_DESC 
= new org.apache.thrift.protocol.TField("createdAt", 
org.apache.thrift.protocol.TType.I64, (short)3);
+  private static final org.apache.thrift.protocol.TField COLS_FIELD_DESC = new 
org.apache.thrift.protocol.TField("cols", 
org.apache.thrift.protocol.TType.LIST, (short)4);
+  private static final org.apache.thrift.protocol.TField STATE_FIELD_DESC = 
new org.apache.thrift.protocol.TField("state", 
org.apache.thrift.protocol.TType.I32, (short)5);
+  private static final org.apache.thrift.protocol.TField 
DESCRIPTION_FIELD_DESC = new org.apache.thrift.protocol.TField("description", 
org.apache.thrift.protocol.TType.STRING, (short)6);
+  private static final org.apache.thrift.protocol.TField 
SCHEMA_TEXT_FIELD_DESC = new org.apache.thrift.protocol.TField("schemaText", 
org.apache.thrift.protocol.TType.STRING, (short)7);
+  private static final org.apache.thrift.protocol.TField 
FINGERPRINT_FIELD_DESC = new org.apache.thrift.protocol.TField("fingerprint", 
org.apache.thrift.protocol.TType.STRING, (short)8);
+  private static final org.apache.thrift.protocol.TField NAME_FIELD_DESC = new 
org.apache.thrift.protocol.TField("name", 
org.apache.thrift.protocol.TType.STRING, (short)9);
+  private static final org.apache.thrift.protocol.TField SER_DE_FIELD_DESC = 
new org.apache.thrift.protocol.TField("serDe", 
org.apache.thrift.protocol.TType.STRUCT, (short)10);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+schemes.put(StandardScheme.class, new 
SchemaVersionStandardSchemeFactory());
+schemes.put(TupleScheme.class, new SchemaVersionTupleSchemeFactory());
+  }
+
+  private String schemaName; // required
+  private int version; // required
+  private long createdAt; // required
+  private List<FieldSchema> cols; // required
+  private SchemaVersionState state; // optional
+  private String description; // optional
+  private String schemaText; // optional
+  private String fingerprint; // optional
+  private String name; // optional
+  private SerDeInfo serDe; // optional
+
+  /** The set of fields this struct contains, along with convenience methods 
for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+

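The generated bean above mirrors the SchemaVersion struct: schemaName, version, createdAt and the column list are required, while state, description, schemaText, fingerprint, name and serDe are optional. A minimal construction sketch follows; the setter names assume the usual Thrift 0.9.3 javabean conventions rather than being copied from the generated file, and the epoch-seconds unit for createdAt is likewise an assumption.

// Hedged sketch: setter names and the createdAt unit are assumed, not verified.
import java.util.Arrays;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.metastore.api.SchemaVersion;
import org.apache.hadoop.hive.metastore.api.SchemaVersionState;

public class SchemaVersionSketch {
  static SchemaVersion newVersion() {
    SchemaVersion v = new SchemaVersion();
    v.setSchemaName("events");                            // required
    v.setVersion(1);                                      // required
    v.setCreatedAt(System.currentTimeMillis() / 1000L);   // required, epoch seconds assumed
    v.setCols(Arrays.asList(new FieldSchema("id", "bigint", null))); // required
    v.setState(SchemaVersionState.INITIATED);             // optional
    v.setDescription("first cut of the events schema");   // optional
    return v;
  }
}
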
[20/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/sql/oracle/upgrade-2.3.0-to-3.0.0.oracle.sql
--
diff --git 
a/standalone-metastore/src/main/sql/oracle/upgrade-2.3.0-to-3.0.0.oracle.sql 
b/standalone-metastore/src/main/sql/oracle/upgrade-2.3.0-to-3.0.0.oracle.sql
new file mode 100644
index 000..a923d92
--- /dev/null
+++ b/standalone-metastore/src/main/sql/oracle/upgrade-2.3.0-to-3.0.0.oracle.sql
@@ -0,0 +1,158 @@
+SELECT 'Upgrading MetaStore schema from 2.3.0 to 3.0.0' AS Status from dual;
+
+--@041-HIVE-16556.oracle.sql;
+CREATE TABLE METASTORE_DB_PROPERTIES
+(
+  PROPERTY_KEY VARCHAR(255) NOT NULL,
+  PROPERTY_VALUE VARCHAR(1000) NOT NULL,
+  DESCRIPTION VARCHAR(1000)
+);
+
+ALTER TABLE METASTORE_DB_PROPERTIES ADD CONSTRAINT PROPERTY_KEY_PK PRIMARY KEY 
(PROPERTY_KEY);
+
+--@042-HIVE-16575.oracle.sql;
+CREATE INDEX CONSTRAINTS_CT_INDEX ON KEY_CONSTRAINTS(CONSTRAINT_TYPE);
+
+--@043-HIVE-16922.oracle.sql;
+UPDATE SERDE_PARAMS
+SET PARAM_KEY='collection.delim'
+WHERE PARAM_KEY='colelction.delim';
+
+--@044-HIVE-16997.oracle.sql;
+ALTER TABLE PART_COL_STATS ADD BIT_VECTOR BLOB NULL;
+ALTER TABLE TAB_COL_STATS ADD BIT_VECTOR BLOB NULL;
+
+--@045-HIVE-16886.oracle.sql;
+INSERT INTO NOTIFICATION_SEQUENCE (NNI_ID, NEXT_EVENT_ID) SELECT 1,1 FROM DUAL 
WHERE NOT EXISTS ( SELECT NEXT_EVENT_ID FROM NOTIFICATION_SEQUENCE);
+
+--@046-HIVE-17566.oracle.sql;
+CREATE TABLE WM_RESOURCEPLAN
+(
+RP_ID NUMBER NOT NULL,
+"NAME" VARCHAR2(128) NOT NULL,
+QUERY_PARALLELISM NUMBER(10),
+STATUS VARCHAR2(20) NOT NULL,
+DEFAULT_POOL_ID NUMBER
+);
+
+ALTER TABLE WM_RESOURCEPLAN ADD CONSTRAINT WM_RESOURCEPLAN_PK PRIMARY KEY 
(RP_ID);
+
+CREATE UNIQUE INDEX UNIQUE_WM_RESOURCEPLAN ON WM_RESOURCEPLAN ("NAME");
+
+
+CREATE TABLE WM_POOL
+(
+POOL_ID NUMBER NOT NULL,
+RP_ID NUMBER NOT NULL,
+PATH VARCHAR2(1024) NOT NULL,
+ALLOC_FRACTION NUMBER,
+QUERY_PARALLELISM NUMBER(10),
+SCHEDULING_POLICY VARCHAR2(1024)
+);
+
+ALTER TABLE WM_POOL ADD CONSTRAINT WM_POOL_PK PRIMARY KEY (POOL_ID);
+
+CREATE UNIQUE INDEX UNIQUE_WM_POOL ON WM_POOL (RP_ID, PATH);
+ALTER TABLE WM_POOL ADD CONSTRAINT WM_POOL_FK1 FOREIGN KEY (RP_ID) REFERENCES 
WM_RESOURCEPLAN (RP_ID);
+
+
+CREATE TABLE WM_TRIGGER
+(
+TRIGGER_ID NUMBER NOT NULL,
+RP_ID NUMBER NOT NULL,
+"NAME" VARCHAR2(128) NOT NULL,
+TRIGGER_EXPRESSION VARCHAR2(1024),
+ACTION_EXPRESSION VARCHAR2(1024),
+IS_IN_UNMANAGED NUMBER(1) DEFAULT 0 NOT NULL CHECK (IS_IN_UNMANAGED IN 
(1,0))
+);
+
+ALTER TABLE WM_TRIGGER ADD CONSTRAINT WM_TRIGGER_PK PRIMARY KEY (TRIGGER_ID);
+
+CREATE UNIQUE INDEX UNIQUE_WM_TRIGGER ON WM_TRIGGER (RP_ID, "NAME");
+
+ALTER TABLE WM_TRIGGER ADD CONSTRAINT WM_TRIGGER_FK1 FOREIGN KEY (RP_ID) 
REFERENCES WM_RESOURCEPLAN (RP_ID);
+
+
+CREATE TABLE WM_POOL_TO_TRIGGER
+(
+POOL_ID NUMBER NOT NULL,
+TRIGGER_ID NUMBER NOT NULL
+);
+
+ALTER TABLE WM_POOL_TO_TRIGGER ADD CONSTRAINT WM_POOL_TO_TRIGGER_PK PRIMARY 
KEY (POOL_ID, TRIGGER_ID);
+
+ALTER TABLE WM_POOL_TO_TRIGGER ADD CONSTRAINT WM_POOL_TO_TRIGGER_FK1 FOREIGN 
KEY (POOL_ID) REFERENCES WM_POOL (POOL_ID);
+
+ALTER TABLE WM_POOL_TO_TRIGGER ADD CONSTRAINT WM_POOL_TO_TRIGGER_FK2 FOREIGN 
KEY (TRIGGER_ID) REFERENCES WM_TRIGGER (TRIGGER_ID);
+
+
+CREATE TABLE WM_MAPPING
+(
+MAPPING_ID NUMBER NOT NULL,
+RP_ID NUMBER NOT NULL,
+ENTITY_TYPE VARCHAR2(128) NOT NULL,
+ENTITY_NAME VARCHAR2(128) NOT NULL,
+POOL_ID NUMBER NOT NULL,
+ORDERING NUMBER(10)
+);
+
+ALTER TABLE WM_MAPPING ADD CONSTRAINT WM_MAPPING_PK PRIMARY KEY (MAPPING_ID);
+
+CREATE UNIQUE INDEX UNIQUE_WM_MAPPING ON WM_MAPPING (RP_ID, ENTITY_TYPE, 
ENTITY_NAME);
+
+ALTER TABLE WM_MAPPING ADD CONSTRAINT WM_MAPPING_FK1 FOREIGN KEY (RP_ID) 
REFERENCES WM_RESOURCEPLAN (RP_ID);
+
+ALTER TABLE WM_MAPPING ADD CONSTRAINT WM_MAPPING_FK2 FOREIGN KEY (POOL_ID) 
REFERENCES WM_POOL (POOL_ID);
+
+UPDATE VERSION SET SCHEMA_VERSION='3.0.0', VERSION_COMMENT='Hive release 
version 3.0.0' where VER_ID=1;
+SELECT 'Finished upgrading MetaStore schema from 2.3.0 to 3.0.0' AS Status 
from dual;
+
+-- 048-HIVE-14498
+CREATE TABLE MV_CREATION_METADATA
+(
+MV_CREATION_METADATA_ID NUMBER NOT NULL,
+DB_NAME VARCHAR2(128) NOT NULL,
+TBL_NAME VARCHAR2(256) NOT NULL,
+TXN_LIST CLOB NULL
+);
+
+ALTER TABLE MV_CREATION_METADATA ADD CONSTRAINT MV_CREATION_METADATA_PK 
PRIMARY KEY (MV_CREATION_METADATA_ID);
+
+CREATE UNIQUE INDEX UNIQUE_TABLE ON MV_CREATION_METADATA ("DB_NAME", 
"TBL_NAME");
+
+CREATE TABLE MV_TABLES_USED
+(
+MV_CREATION_METADATA_ID NUMBER NOT NULL,
+TBL_ID NUMBER NOT NULL
+);
+
+ALTER TABLE MV_TABLES_USED ADD CONSTRAINT MV_TABLES_USED_FK1 FOREIGN KEY 
(MV_CREATION_METADATA_ID) REFERENCES MV_CREATION_METADATA 
(MV_CREATION_METADATA_ID);
+
+ALTER TABLE MV_TABLES_USED ADD CONSTRAINT MV_TABLES_USED_FK2 FOREIGN KEY 
(TBL_ID) REFERENCES TBLS (TBL_ID);
+
+ALTER TABLE 

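The upgrade script above is mostly new workload-management and materialized-view bookkeeping tables; the part schematool cares about is the UPDATE of the VERSION row. A small JDBC sketch for confirming the recorded schema version after running the script is shown below; the JDBC URL and credentials are placeholders, and only the VERSION table and its SCHEMA_VERSION, VERSION_COMMENT and VER_ID columns come from the script itself.

// Hedged sketch: connection details are placeholders; the query targets the
// VERSION row updated by the upgrade script above.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckMetastoreSchemaVersion {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:oracle:thin:@//metastore-db:1521/XE", "hive", "hive"); // placeholder URL/user/password
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT SCHEMA_VERSION, VERSION_COMMENT FROM VERSION WHERE VER_ID = 1")) {
      if (rs.next()) {
        System.out.println(rs.getString(1) + " - " + rs.getString(2));
      }
    }
  }
}
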
[32/50] [abbrv] hive git commit: HIVE-18646: Update errata.txt for HIVE-18617 (Daniel Voros via Zoltan Haindrich)

2018-02-12 Thread gates
HIVE-18646: Update errata.txt for HIVE-18617 (Daniel Voros via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/887233d2
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/887233d2
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/887233d2

Branch: refs/heads/standalone-metastore
Commit: 887233d28bbc64da0214d5c27653c9ca378766ef
Parents: 9188908
Author: Daniel Voros 
Authored: Mon Feb 12 10:59:30 2018 +0100
Committer: Zoltan Haindrich 
Committed: Mon Feb 12 10:59:30 2018 +0100

--
 errata.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/887233d2/errata.txt
--
diff --git a/errata.txt b/errata.txt
index 87e41b8..d1d95ef 100644
--- a/errata.txt
+++ b/errata.txt
@@ -93,3 +93,4 @@ d16d4f1bcc43d6ebcab0eaf5bc635fb88b60be5f master HIVE-9423  https://issues.apache.org/jira/browse/HIVE-9423
 5facfbb863366d7a661c21c57011b8dbe43f52e0 master HIVE-16307 https://issues.apache.org/jira/browse/HIVE-16307
 1c3039333ba71665e8b954fbee88188757bb4050 master HIVE-16743 https://issues.apache.org/jira/browse/HIVE-16743
 e7081035bb9768bc014f0aba11417418ececbaf0 master HIVE-17109 https://issues.apache.org/jira/browse/HIVE-17109
+f33db1f68c68b552b9888988f818c03879749461 master HIVE-18617 https://issues.apache.org/jira/browse/HIVE-18617



[22/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/sql/mysql/hive-schema-1.2.0.mysql.sql
--
diff --git 
a/standalone-metastore/src/main/sql/mysql/hive-schema-1.2.0.mysql.sql 
b/standalone-metastore/src/main/sql/mysql/hive-schema-1.2.0.mysql.sql
new file mode 100644
index 000..adf0de7
--- /dev/null
+++ b/standalone-metastore/src/main/sql/mysql/hive-schema-1.2.0.mysql.sql
@@ -0,0 +1,910 @@
+-- MySQL dump 10.13  Distrib 5.5.25, for osx10.6 (i386)
+--
+-- Host: localhost    Database: test
+-- --
+-- Server version  5.5.25
+
+/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
+/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
+/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
+/*!40101 SET NAMES utf8 */;
+/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
+/*!40103 SET TIME_ZONE='+00:00' */;
+/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
+/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, 
FOREIGN_KEY_CHECKS=0 */;
+/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
+/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
+
+--
+-- Table structure for table `BUCKETING_COLS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `BUCKETING_COLS` (
+  `SD_ID` bigint(20) NOT NULL,
+  `BUCKET_COL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin 
DEFAULT NULL,
+  `INTEGER_IDX` int(11) NOT NULL,
+  PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
+  KEY `BUCKETING_COLS_N49` (`SD_ID`),
+  CONSTRAINT `BUCKETING_COLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` 
(`SD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `CDS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `CDS` (
+  `CD_ID` bigint(20) NOT NULL,
+  PRIMARY KEY (`CD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `COLUMNS_V2`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `COLUMNS_V2` (
+  `CD_ID` bigint(20) NOT NULL,
+  `COMMENT` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
+  `TYPE_NAME` varchar(4000) DEFAULT NULL,
+  `INTEGER_IDX` int(11) NOT NULL,
+  PRIMARY KEY (`CD_ID`,`COLUMN_NAME`),
+  KEY `COLUMNS_V2_N49` (`CD_ID`),
+  CONSTRAINT `COLUMNS_V2_FK1` FOREIGN KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DATABASE_PARAMS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DATABASE_PARAMS` (
+  `DB_ID` bigint(20) NOT NULL,
+  `PARAM_KEY` varchar(180) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
+  `PARAM_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
+  PRIMARY KEY (`DB_ID`,`PARAM_KEY`),
+  KEY `DATABASE_PARAMS_N49` (`DB_ID`),
+  CONSTRAINT `DATABASE_PARAMS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` 
(`DB_ID`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DBS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DBS` (
+  `DB_ID` bigint(20) NOT NULL,
+  `DESC` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `DB_LOCATION_URI` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin NOT 
NULL,
+  `NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `OWNER_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
+  `OWNER_TYPE` varchar(10) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
+  PRIMARY KEY (`DB_ID`),
+  UNIQUE KEY `UNIQUE_DATABASE` (`NAME`)
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+/*!40101 SET character_set_client = @saved_cs_client */;
+
+--
+-- Table structure for table `DB_PRIVS`
+--
+
+/*!40101 SET @saved_cs_client = @@character_set_client */;
+/*!40101 SET character_set_client = utf8 */;
+CREATE TABLE IF NOT EXISTS `DB_PRIVS` (
+  `DB_GRANT_ID` bigint(20) NOT NULL,
+  `CREATE_TIME` int(11) NOT NULL,
+  `DB_ID` bigint(20) DEFAULT NULL,
+  `GRANT_OPTION` smallint(6) NOT NULL,
+  `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
+  `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin 

[04/50] [abbrv] hive git commit: HIVE-17837: Explicitly check if the HoS Remote Driver has been lost in the RemoteSparkJobMonitor (Sahil Takiar, reviewed by Rui Li)

2018-02-12 Thread gates
HIVE-17837: Explicitly check if the HoS Remote Driver has been lost in the 
RemoteSparkJobMonitor (Sahil Takiar, reviewed by Rui Li)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/89e75c78
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/89e75c78
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/89e75c78

Branch: refs/heads/standalone-metastore
Commit: 89e75c78524327ef0c6111b4d90504f3bda781d4
Parents: e33edd9
Author: Sahil Takiar 
Authored: Fri Feb 9 15:03:15 2018 -0800
Committer: Sahil Takiar 
Committed: Fri Feb 9 15:03:15 2018 -0800

--
 .../hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/89e75c78/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
index 4c4ce55..22f7024 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
@@ -66,6 +66,7 @@ public class RemoteSparkJobMonitor extends SparkJobMonitor {
 while (true) {
   try {
 JobHandle.State state = sparkJobStatus.getRemoteJobState();
+Preconditions.checkState(sparkJobStatus.isRemoteActive(), "Connection 
to remote Spark driver was lost");
 
 switch (state) {
 case SENT:
@@ -133,10 +134,6 @@ public class RemoteSparkJobMonitor extends SparkJobMonitor 
{
 
 printStatus(progressMap, lastProgressMap);
 lastProgressMap = progressMap;
-  } else if (sparkJobState == null) {
-// in case the remote context crashes between JobStarted and 
JobSubmitted
-Preconditions.checkState(sparkJobStatus.isRemoteActive(),
-"Remote context becomes inactive.");
   }
   break;
 case SUCCEEDED:

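The change above boils down to one pattern: verify that the remote driver connection is still alive at the top of every polling iteration, instead of only in the branch where the reported job state is null. A stripped-down sketch of that monitor loop is below; apart from isRemoteActive() and getRemoteJobState(), which appear in the diff, the class and method names are invented for illustration.

// Hedged sketch of the polling pattern; SparkJobStatusView and RemoteJobMonitorSketch
// are illustrative stand-ins, not the real Hive classes.
import com.google.common.base.Preconditions;

interface SparkJobStatusView {
  boolean isRemoteActive();
  String getRemoteJobState() throws Exception;
}

class RemoteJobMonitorSketch {
  void monitor(SparkJobStatusView status) throws Exception {
    while (true) {
      // Fail fast if the remote driver died, regardless of which state we last saw.
      Preconditions.checkState(status.isRemoteActive(),
          "Connection to remote Spark driver was lost");
      String state = status.getRemoteJobState();
      if ("SUCCEEDED".equals(state) || "FAILED".equals(state)) {
        return;
      }
      Thread.sleep(1000); // poll interval, arbitrary for the sketch
    }
  }
}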


[11/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/llap/cte_1.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/cte_1.q.out 
b/ql/src/test/results/clientpositive/llap/cte_1.q.out
index ddef9db..d7bc062 100644
--- a/ql/src/test/results/clientpositive/llap/cte_1.q.out
+++ b/ql/src/test/results/clientpositive/llap/cte_1.q.out
@@ -671,12236 +671,12236 @@ POSTHOOK: type: QUERY
 POSTHOOK: Input: default@alltypesorc
  A masked pattern was here 
 NULL   NULL2735.0
-NULL   2x14G717LqcPA7Ic5   NULL
-NULL   64Vxl8QSNULL
-NULL   Ul085f84S33Xd32uNULL
-NULL   b062i16kuwQerAvO5D2cBp3 NULL
-NULL   efnt3   NULL
-NULL   nlVvHbKNkU5I04XtkP6 NULL
+NULL   3Ke6A1U847tV73  NULL
+NULL   45ja5suONULL
+NULL   4fNIOF6ul   NULL
+NULL   62vmI4  NULL
+NULL   84O1C65C5k88bI7i4   NULL
+NULL   AmPHc4NUg3HwJ   NULL
+NULL   LR2AKy0dPt8vFdIV5760jriwNULL
+NULL   Oye1OEeNNULL
+NULL   THog3nx6pd1Bb   NULL
+NULL   Xw6nBW1A205Rv7rENULL
+NULL   Yssb82rdfylDv4K NULL
+NULL   a7GT5lui7rc NULL
+NULL   fVgv88OvQR1BB7toX   NULL
+NULL   gC1t8pc NULL
 NULL   p61uO61KDWhQ8b648ac2xyFONULL
-NULL   r4jOncC4N6ov2LdxmkWAfJ7JNULL
-NULL   wa73jb5WDRp2le0wf   NULL
 NULL   y605nF0K3mMoM75jNULL
+-1073279343NULLNULL
 -1073279343oj1YrV5Wa   NULL
 -1073051226NULL-7382.0
--1071480828NULLNULL
--1071363017NULLNULL
--1070551679iUR3Q   -947.0
+-1072081801dPkN74F78373.0
+-1072076362NULL-5470.0
+-1071363017Anj0oF  NULL
+-10708830710ruyd6Y50JpdGRf6HqD -741.0
+-1070551679NULL-947.0
 -1069109166vW36C22KS75R8390.0
--1069097390B553840U1H2b1M06l6N81   NULL
+-1069103950NULLNULL
 -1068336533PUn1YVC NULL
--1068206466NULLNULL
--1068206466F3u1yJaQywofxCCM4v4jScY NULL
--1067874703us1gH35lcpNDNULL
+-1067874703NULLNULL
 -1067683781IbgbUvP5NULL
--1065775394NULLNULL
--1065117869jWVP6gOkq12mdh  2538.0
--10649493028u8tR858jC01y8Ft66nYRnb66454.0
--1064623720NULLNULL
--1061057428NULL-1085.0
--1060990068NULLNULL
+-1067386090HBtg2r6pR16VC73 -3977.0
+-1066922682NULL-9987.0
+-1063745167NULLNULL
+-1063745167L47nqo  NULL
+-10631645411NydRD5y5o3 NULL
+-1062973443144eST755Fvf6nLi74SK10541.0
+-106161498961Oa7M7Pl17d7auyXra6-4234.0
+-1061509617NULLNULL
+-1060624784NULLNULL
 -1060624784Das7E73 NULL
--1059047258e2B6K7FJH77Y4i7h6B43U   12452.0
--1055669248U7r33N1GT   2570.0
+-1058897881NULLNULL
+-1056684111NULL13991.0
+-10566841117K7y062ndg5aRSBsx   13991.0
+-1055945837Qc722Gg4280 13690.0
+-1055669248NULL2570.0
 -1055185482l20vn2Awc   NULL
--1055076545NULLNULL
--10550765455l4yXhHX0Y1jgmw4NULL
--10550407731t2c87D721uxcFhn2   NULL
 -1054958082im6VJRHh5EGfS7FVhw  NULL
 -1054849160NULLNULL
--1053385587NULL14504.0
--10512235977i7FJDchQc1 NULL
--1050165799hA4lNb  8634.0
--1049984461qUY8Rl34NWRgNULL
--1048097158NULLNULL
+-1053238077NULL-3704.0
+-1052322972C60KTh  -7433.0
+-1051223597NULLNULL
+-1050388484B26L6Qp134xe0wy0Si  NULL
+-1048934049NULL-524.0
+-1046913669NULLNULL
 -1046766350s4LPR6Bg0j25SWD8NULL
 -1046399794NULL4130.0
--1045867222gdoaNjlr4H8gbNV -8034.0
--1045737053NULLNULL
--104519636335lk428d1BN8Qp1M27  -5039.0
--1045181724NULL-5706.0
+-1045737053FGQf6n21ES  NULL
 -1044828205Ej05nrdc8CVXYu1Axy6WNULL
+-1044748460NULLNULL
 -1044357977NULLNULL
--1044357977nqThW83 NULL
--1044093617NULL-3422.0
--10440936170Dlv8g24a1Q43   -3422.0
--1043132597yVj2368XQ64rY25N8jCGSeW 12302.0
+-1043573508NULL16216.0
 -1043082182NULL9180.0
--1042805968NULL5133.0
--1042805968QUnIT4yAVU  5133.0
--1042396242NULL9583.0
--1041734429NULL-836.0
--1041391389NULL-12970.0
--1041391389IL6Ct0hm2   -12970.0
--104135370725Qky6lf2pt5FP47MqmbNULL
--1039762548ki4pfORasIn14cM2G   -3802.0
--1039715238NULLNULL
--1039495786NULLNULL
--1037297218NULL10880.0
--1037297218lXhthv3GoliXESKJV70310880.0
--1037188286NULL5144.0
--10360253708dDe31b5NULL
--1035148422NULL7228.0
--10351484223GU0iMHI286JAUnA0f  7228.0
--1033919841NULLNULL
--1031797254sKEJ8vy8kHWK7D  -326.0
--1031594611NULLNULL
--1031594611dFE1VTv3P5WDi20YecUuv7  NULL
--103099342676VqjvX6hmnmvmDWOa8wi8  NULL
--10306342972060qh1mQdiLrqGg0Jc5K   15011.0
+-1042712895iD2KrmBUbvNjuhHR2r  9296.0
+-10423962423E1ynn7EtEFXaiQ772b86gVL9583.0

[05/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/spark/groupby_cube1.q.out
--
diff --git a/ql/src/test/results/clientpositive/spark/groupby_cube1.q.out 
b/ql/src/test/results/clientpositive/spark/groupby_cube1.q.out
index fa1480e..71ccea5 100644
--- a/ql/src/test/results/clientpositive/spark/groupby_cube1.q.out
+++ b/ql/src/test/results/clientpositive/spark/groupby_cube1.q.out
@@ -42,21 +42,21 @@ STAGE PLANS:
 Statistics: Num rows: 1 Data size: 300 Basic stats: 
COMPLETE Column stats: NONE
 Group By Operator
   aggregations: count()
-  keys: key (type: string), val (type: string), 0 (type: 
int)
+  keys: key (type: string), val (type: string), 0 (type: 
bigint)
   mode: hash
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 4 Data size: 1200 Basic stats: 
COMPLETE Column stats: NONE
   Reduce Output Operator
-key expressions: _col0 (type: string), _col1 (type: 
string), _col2 (type: int)
+key expressions: _col0 (type: string), _col1 (type: 
string), _col2 (type: bigint)
 sort order: +++
-Map-reduce partition columns: _col0 (type: string), 
_col1 (type: string), _col2 (type: int)
+Map-reduce partition columns: _col0 (type: string), 
_col1 (type: string), _col2 (type: bigint)
 Statistics: Num rows: 4 Data size: 1200 Basic stats: 
COMPLETE Column stats: NONE
 value expressions: _col3 (type: bigint)
 Reducer 2 
 Reduce Operator Tree:
   Group By Operator
 aggregations: count(VALUE._col0)
-keys: KEY._col0 (type: string), KEY._col1 (type: string), 
KEY._col2 (type: int)
+keys: KEY._col0 (type: string), KEY._col1 (type: string), 
KEY._col2 (type: bigint)
 mode: mergepartial
 outputColumnNames: _col0, _col1, _col3
 Statistics: Num rows: 2 Data size: 600 Basic stats: COMPLETE 
Column stats: NONE
@@ -107,21 +107,21 @@ STAGE PLANS:
 Statistics: Num rows: 1 Data size: 300 Basic stats: 
COMPLETE Column stats: NONE
 Group By Operator
   aggregations: count()
-  keys: key (type: string), val (type: string), 0 (type: 
int)
+  keys: key (type: string), val (type: string), 0 (type: 
bigint)
   mode: hash
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 4 Data size: 1200 Basic stats: 
COMPLETE Column stats: NONE
   Reduce Output Operator
-key expressions: _col0 (type: string), _col1 (type: 
string), _col2 (type: int)
+key expressions: _col0 (type: string), _col1 (type: 
string), _col2 (type: bigint)
 sort order: +++
-Map-reduce partition columns: _col0 (type: string), 
_col1 (type: string), _col2 (type: int)
+Map-reduce partition columns: _col0 (type: string), 
_col1 (type: string), _col2 (type: bigint)
 Statistics: Num rows: 4 Data size: 1200 Basic stats: 
COMPLETE Column stats: NONE
 value expressions: _col3 (type: bigint)
 Reducer 2 
 Reduce Operator Tree:
   Group By Operator
 aggregations: count(VALUE._col0)
-keys: KEY._col0 (type: string), KEY._col1 (type: string), 
KEY._col2 (type: int)
+keys: KEY._col0 (type: string), KEY._col1 (type: string), 
KEY._col2 (type: bigint)
 mode: mergepartial
 outputColumnNames: _col0, _col1, _col3
 Statistics: Num rows: 2 Data size: 600 Basic stats: COMPLETE 
Column stats: NONE
@@ -198,26 +198,26 @@ STAGE PLANS:
 Statistics: Num rows: 1 Data size: 300 Basic stats: 
COMPLETE Column stats: NONE
 Group By Operator
   aggregations: count()
-  keys: _col0 (type: string), _col1 (type: string), 0 
(type: int)
+  keys: _col0 (type: string), _col1 (type: string), 0 
(type: bigint)
   mode: hash
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 4 Data size: 1200 Basic stats: 
COMPLETE Column stats: NONE
   Reduce Output Operator
-key expressions: _col0 (type: string), _col1 (type: 
string), _col2 (type: int)
+key expressions: _col0 (type: string), _col1 

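The only functional change visible in these plan diffs is the type of the grouping-set key literal, which moves from int to bigint. The reason is that the grouping ID is a bitmask with one bit per grouping column, so a 32-bit value caps how many columns can take part in a grouping set; widening to 64 bits raises that limit. A small sketch of the bitmask arithmetic, written independently of Hive's actual GroupByOperator implementation, is below.

// Hedged sketch of grouping-ID bitmask arithmetic; illustrative only, not Hive's code.
public class GroupingIdSketch {
  // Builds a grouping ID where bit i is set when grouping column i is aggregated away.
  static long groupingId(boolean[] aggregatedAway) {
    if (aggregatedAway.length > 64) {
      throw new IllegalArgumentException("more than 64 grouping columns");
    }
    long id = 0L;
    for (int i = 0; i < aggregatedAway.length; i++) {
      if (aggregatedAway[i]) {
        id |= 1L << i;   // with an int mask this would overflow past 32 columns
      }
    }
    return id;
  }

  public static void main(String[] args) {
    // CUBE over (key, val): the full-rollup row aggregates both columns away.
    System.out.println(groupingId(new boolean[] {true, true})); // prints 3
  }
}
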
[42/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ISchema.java
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ISchema.java
 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ISchema.java
new file mode 100644
index 000..92d8b52
--- /dev/null
+++ 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ISchema.java
@@ -0,0 +1,1162 @@
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.hadoop.hive.metastore.api;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+@org.apache.hadoop.classification.InterfaceAudience.Public @org.apache.hadoop.classification.InterfaceStability.Stable public class ISchema implements org.apache.thrift.TBase<ISchema, ISchema._Fields>, java.io.Serializable, Cloneable, Comparable<ISchema> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("ISchema");
+
+  private static final org.apache.thrift.protocol.TField 
SCHEMA_TYPE_FIELD_DESC = new org.apache.thrift.protocol.TField("schemaType", 
org.apache.thrift.protocol.TType.I32, (short)1);
+  private static final org.apache.thrift.protocol.TField NAME_FIELD_DESC = new 
org.apache.thrift.protocol.TField("name", 
org.apache.thrift.protocol.TType.STRING, (short)2);
+  private static final org.apache.thrift.protocol.TField DB_NAME_FIELD_DESC = 
new org.apache.thrift.protocol.TField("dbName", 
org.apache.thrift.protocol.TType.STRING, (short)3);
+  private static final org.apache.thrift.protocol.TField 
COMPATIBILITY_FIELD_DESC = new 
org.apache.thrift.protocol.TField("compatibility", 
org.apache.thrift.protocol.TType.I32, (short)4);
+  private static final org.apache.thrift.protocol.TField 
VALIDATION_LEVEL_FIELD_DESC = new 
org.apache.thrift.protocol.TField("validationLevel", 
org.apache.thrift.protocol.TType.I32, (short)5);
+  private static final org.apache.thrift.protocol.TField CAN_EVOLVE_FIELD_DESC 
= new org.apache.thrift.protocol.TField("canEvolve", 
org.apache.thrift.protocol.TType.BOOL, (short)6);
+  private static final org.apache.thrift.protocol.TField 
SCHEMA_GROUP_FIELD_DESC = new org.apache.thrift.protocol.TField("schemaGroup", 
org.apache.thrift.protocol.TType.STRING, (short)7);
+  private static final org.apache.thrift.protocol.TField 
DESCRIPTION_FIELD_DESC = new org.apache.thrift.protocol.TField("description", 
org.apache.thrift.protocol.TType.STRING, (short)8);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+schemes.put(StandardScheme.class, new ISchemaStandardSchemeFactory());
+schemes.put(TupleScheme.class, new ISchemaTupleSchemeFactory());
+  }
+
+  private SchemaType schemaType; // required
+  private String name; // required
+  private String dbName; // required
+  private SchemaCompatibility compatibility; // required
+  private SchemaValidation validationLevel; // required
+  private boolean canEvolve; // required
+  private String schemaGroup; // optional
+  private String description; // optional
+
+  /** The set of fields this struct contains, along with convenience methods 
for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+/**
+ * 
+ * @see SchemaType
+ */
+SCHEMA_TYPE((short)1, "schemaType"),
+NAME((short)2, "name"),
+DB_NAME((short)3, "dbName"),
+/**
+ * 
+ * @see SchemaCompatibility
+ */
+COMPATIBILITY((short)4, "compatibility"),
+/**
+ * 
+ * @see SchemaValidation
+ */
+VALIDATION_LEVEL((short)5, "validationLevel"),
+CAN_EVOLVE((short)6, "canEvolve"),
+SCHEMA_GROUP((short)7, "schemaGroup"),

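ISchema carries the schema-level metadata: a SchemaType, name and database, plus the compatibility and validation policies and the canEvolve flag; schemaGroup and description are optional. A construction sketch in the same spirit as the SchemaVersion one earlier follows, again assuming the standard Thrift javabean setter names rather than quoting the generated file.

// Hedged sketch: setter names assume Thrift javabean conventions, not verified.
import org.apache.hadoop.hive.metastore.api.ISchema;
import org.apache.hadoop.hive.metastore.api.SchemaCompatibility;
import org.apache.hadoop.hive.metastore.api.SchemaType;
import org.apache.hadoop.hive.metastore.api.SchemaValidation;

public class ISchemaSketch {
  static ISchema newAvroSchema() {
    ISchema s = new ISchema();
    s.setSchemaType(SchemaType.AVRO);                 // required
    s.setName("events");                              // required
    s.setDbName("default");                           // required
    s.setCompatibility(SchemaCompatibility.BACKWARD); // required
    s.setValidationLevel(SchemaValidation.LATEST);    // required
    s.setCanEvolve(true);                             // required
    s.setDescription("event payloads, Avro-encoded"); // optional
    return s;
  }
}
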
[25/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/tools/MetastoreSchemaTool.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/tools/MetastoreSchemaTool.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/tools/MetastoreSchemaTool.java
new file mode 100644
index 000..06ba671
--- /dev/null
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/tools/MetastoreSchemaTool.java
@@ -0,0 +1,1308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore.tools;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.OptionGroup;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.commons.io.output.NullOutputStream;
+import org.apache.commons.lang.ArrayUtils;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.metastore.HiveMetaException;
+import org.apache.hadoop.hive.metastore.IMetaStoreSchemaInfo;
+import org.apache.hadoop.hive.metastore.MetaStoreSchemaInfoFactory;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;
+import 
org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.MetaStoreConnectionInfo;
+import 
org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.NestedScriptParser;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.ImmutableMap;
+import sqlline.SqlLine;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.PrintStream;
+import java.net.URI;
+import java.sql.Connection;
+import java.sql.DatabaseMetaData;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class MetastoreSchemaTool {
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetastoreSchemaTool.class);
+  private static final String PASSWD_MASK = "[passwd stripped]";
+
+  @VisibleForTesting
+  public static String homeDir;
+
+  private String userName = null;
+  private String passWord = null;
+  private boolean dryRun = false;
+  private boolean verbose = false;
+  private String dbOpts = null;
+  private String url = null;
+  private String driver = null;
+  private URI[] validationServers = null; // The list of servers the 
database/partition/table can locate on
+  private String hiveUser; // Hive username, for use when creating the user, 
not for connecting
+  private String hivePasswd; // Hive password, for use when creating the user, 
not for connecting
+  private String hiveDb; // Hive database, for use when creating the user, not 
for connecting
+  private final Configuration conf;
+  private final String dbType;
+  private final IMetaStoreSchemaInfo metaStoreSchemaInfo;
+  private boolean needsQuotedIdentifier;
+
+  private static String findHomeDir() {
+// If METASTORE_HOME is set, use it, else use HIVE_HOME for backwards 
compatibility.
+homeDir = homeDir == null ? System.getenv("METASTORE_HOME") : homeDir;
+return homeDir == null ? System.getenv("HIVE_HOME") : homeDir;
+  }
+
+  private MetastoreSchemaTool(String dbType) throws 

[44/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp 
b/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
index aadf8f1..ed0f068 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
@@ -149,6 +149,72 @@ const char* _kEventRequestTypeNames[] = {
 };
 const std::map 
_EventRequestType_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(3, 
_kEventRequestTypeValues, _kEventRequestTypeNames), 
::apache::thrift::TEnumIterator(-1, NULL, NULL));
 
+int _kSerdeTypeValues[] = {
+  SerdeType::HIVE,
+  SerdeType::SCHEMA_REGISTRY
+};
+const char* _kSerdeTypeNames[] = {
+  "HIVE",
+  "SCHEMA_REGISTRY"
+};
+const std::map 
_SerdeType_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(2, 
_kSerdeTypeValues, _kSerdeTypeNames), ::apache::thrift::TEnumIterator(-1, NULL, 
NULL));
+
+int _kSchemaTypeValues[] = {
+  SchemaType::HIVE,
+  SchemaType::AVRO
+};
+const char* _kSchemaTypeNames[] = {
+  "HIVE",
+  "AVRO"
+};
+const std::map 
_SchemaType_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(2, 
_kSchemaTypeValues, _kSchemaTypeNames), ::apache::thrift::TEnumIterator(-1, 
NULL, NULL));
+
+int _kSchemaCompatibilityValues[] = {
+  SchemaCompatibility::NONE,
+  SchemaCompatibility::BACKWARD,
+  SchemaCompatibility::FORWARD,
+  SchemaCompatibility::BOTH
+};
+const char* _kSchemaCompatibilityNames[] = {
+  "NONE",
+  "BACKWARD",
+  "FORWARD",
+  "BOTH"
+};
+const std::map 
_SchemaCompatibility_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(4, 
_kSchemaCompatibilityValues, _kSchemaCompatibilityNames), 
::apache::thrift::TEnumIterator(-1, NULL, NULL));
+
+int _kSchemaValidationValues[] = {
+  SchemaValidation::LATEST,
+  SchemaValidation::ALL
+};
+const char* _kSchemaValidationNames[] = {
+  "LATEST",
+  "ALL"
+};
+const std::map 
_SchemaValidation_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(2, 
_kSchemaValidationValues, _kSchemaValidationNames), 
::apache::thrift::TEnumIterator(-1, NULL, NULL));
+
+int _kSchemaVersionStateValues[] = {
+  SchemaVersionState::INITIATED,
+  SchemaVersionState::START_REVIEW,
+  SchemaVersionState::CHANGES_REQUIRED,
+  SchemaVersionState::REVIEWED,
+  SchemaVersionState::ENABLED,
+  SchemaVersionState::DISABLED,
+  SchemaVersionState::ARCHIVED,
+  SchemaVersionState::DELETED
+};
+const char* _kSchemaVersionStateNames[] = {
+  "INITIATED",
+  "START_REVIEW",
+  "CHANGES_REQUIRED",
+  "REVIEWED",
+  "ENABLED",
+  "DISABLED",
+  "ARCHIVED",
+  "DELETED"
+};
+const std::map 
_SchemaVersionState_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(8, 
_kSchemaVersionStateValues, _kSchemaVersionStateNames), 
::apache::thrift::TEnumIterator(-1, NULL, NULL));
+
 int _kFunctionTypeValues[] = {
   FunctionType::JAVA
 };
@@ -4009,6 +4075,26 @@ void SerDeInfo::__set_parameters(const 
std::map & val)
   this->parameters = val;
 }
 
+void SerDeInfo::__set_description(const std::string& val) {
+  this->description = val;
+__isset.description = true;
+}
+
+void SerDeInfo::__set_serializerClass(const std::string& val) {
+  this->serializerClass = val;
+__isset.serializerClass = true;
+}
+
+void SerDeInfo::__set_deserializerClass(const std::string& val) {
+  this->deserializerClass = val;
+__isset.deserializerClass = true;
+}
+
+void SerDeInfo::__set_serdeType(const SerdeType::type val) {
+  this->serdeType = val;
+__isset.serdeType = true;
+}
+
 uint32_t SerDeInfo::read(::apache::thrift::protocol::TProtocol* iprot) {
 
   apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
@@ -4069,6 +4155,40 @@ uint32_t 
SerDeInfo::read(::apache::thrift::protocol::TProtocol* iprot) {
   xfer += iprot->skip(ftype);
 }
 break;
+  case 4:
+if (ftype == ::apache::thrift::protocol::T_STRING) {
+  xfer += iprot->readString(this->description);
+  this->__isset.description = true;
+} else {
+  xfer += iprot->skip(ftype);
+}
+break;
+  case 5:
+if (ftype == ::apache::thrift::protocol::T_STRING) {
+  xfer += iprot->readString(this->serializerClass);
+  this->__isset.serializerClass = true;
+} else {
+  xfer += iprot->skip(ftype);
+}
+break;
+  case 6:
+if (ftype == ::apache::thrift::protocol::T_STRING) {
+  xfer += iprot->readString(this->deserializerClass);
+  this->__isset.deserializerClass = true;
+} else {
+  xfer += iprot->skip(ftype);
+}
+break;
+  case 7:
+if (ftype == ::apache::thrift::protocol::T_I32) {
+  int32_t ecast144;
+ 

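Beyond the new enums, the C++ (and, later in this digest, PHP) generated code shows SerDeInfo growing four optional fields: description, serializerClass, deserializerClass and serdeType, the last taking SerdeType.HIVE or SerdeType.SCHEMA_REGISTRY. A Java-side sketch of populating them is below, with the usual caveat that the setter names are assumed from Thrift javabean conventions and the serializer/deserializer class names are placeholders.

// Hedged sketch: field names come from the generated code above; setters are assumed.
import org.apache.hadoop.hive.metastore.api.SerDeInfo;
import org.apache.hadoop.hive.metastore.api.SerdeType;

public class SerDeInfoSketch {
  static SerDeInfo registrySerde() {
    SerDeInfo serde = new SerDeInfo();
    serde.setName("avro_registry_serde");
    serde.setSerializationLib("org.apache.hadoop.hive.serde2.avro.AvroSerDe");
    serde.setDescription("serde whose schema lives in the registry");   // new optional field
    serde.setSerializerClass("org.example.AvroSerializer");             // placeholder class name
    serde.setDeserializerClass("org.example.AvroDeserializer");         // placeholder class name
    serde.setSerdeType(SerdeType.SCHEMA_REGISTRY);                      // new optional field
    return serde;
  }
}
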
[37/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
 
b/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
index 2e19105..3963f25 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
+++ 
b/standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
@@ -1427,6 +1427,111 @@ class Iface(fb303.FacebookService.Iface):
 """
 pass
 
+  def create_ischema(self, schema):
+"""
+Parameters:
+ - schema
+"""
+pass
+
+  def alter_ischema(self, schemaName, newSchema):
+"""
+Parameters:
+ - schemaName
+ - newSchema
+"""
+pass
+
+  def get_ischema(self, schemaName):
+"""
+Parameters:
+ - schemaName
+"""
+pass
+
+  def drop_ischema(self, schemaName):
+"""
+Parameters:
+ - schemaName
+"""
+pass
+
+  def add_schema_version(self, schemaVersion):
+"""
+Parameters:
+ - schemaVersion
+"""
+pass
+
+  def get_schema_version(self, schemaName, version):
+"""
+Parameters:
+ - schemaName
+ - version
+"""
+pass
+
+  def get_schema_latest_version(self, schemaName):
+"""
+Parameters:
+ - schemaName
+"""
+pass
+
+  def get_schema_all_versions(self, schemaName):
+"""
+Parameters:
+ - schemaName
+"""
+pass
+
+  def drop_schema_version(self, schemaName, version):
+"""
+Parameters:
+ - schemaName
+ - version
+"""
+pass
+
+  def get_schemas_by_cols(self, rqst):
+"""
+Parameters:
+ - rqst
+"""
+pass
+
+  def map_schema_version_to_serde(self, schemaName, version, serdeName):
+"""
+Parameters:
+ - schemaName
+ - version
+ - serdeName
+"""
+pass
+
+  def set_schema_version_state(self, schemaName, version, state):
+"""
+Parameters:
+ - schemaName
+ - version
+ - state
+"""
+pass
+
+  def add_serde(self, serde):
+"""
+Parameters:
+ - serde
+"""
+pass
+
+  def get_serde(self, serdeName):
+"""
+Parameters:
+ - serdeName
+"""
+pass
+
 
 class Client(fb303.FacebookService.Client, Iface):
   """
@@ -7958,6 +8063,500 @@ class Client(fb303.FacebookService.Client, Iface):
   raise result.o4
 raise TApplicationException(TApplicationException.MISSING_RESULT, 
"create_or_drop_wm_trigger_to_pool_mapping failed: unknown result")
 
+  def create_ischema(self, schema):
+"""
+Parameters:
+ - schema
+"""
+self.send_create_ischema(schema)
+self.recv_create_ischema()
+
+  def send_create_ischema(self, schema):
+self._oprot.writeMessageBegin('create_ischema', TMessageType.CALL, 
self._seqid)
+args = create_ischema_args()
+args.schema = schema
+args.write(self._oprot)
+self._oprot.writeMessageEnd()
+self._oprot.trans.flush()
+
+  def recv_create_ischema(self):
+iprot = self._iprot
+(fname, mtype, rseqid) = iprot.readMessageBegin()
+if mtype == TMessageType.EXCEPTION:
+  x = TApplicationException()
+  x.read(iprot)
+  iprot.readMessageEnd()
+  raise x
+result = create_ischema_result()
+result.read(iprot)
+iprot.readMessageEnd()
+if result.o1 is not None:
+  raise result.o1
+if result.o2 is not None:
+  raise result.o2
+if result.o3 is not None:
+  raise result.o3
+return
+
+  def alter_ischema(self, schemaName, newSchema):
+"""
+Parameters:
+ - schemaName
+ - newSchema
+"""
+self.send_alter_ischema(schemaName, newSchema)
+self.recv_alter_ischema()
+
+  def send_alter_ischema(self, schemaName, newSchema):
+self._oprot.writeMessageBegin('alter_ischema', TMessageType.CALL, 
self._seqid)
+args = alter_ischema_args()
+args.schemaName = schemaName
+args.newSchema = newSchema
+args.write(self._oprot)
+self._oprot.writeMessageEnd()
+self._oprot.trans.flush()
+
+  def recv_alter_ischema(self):
+iprot = self._iprot
+(fname, mtype, rseqid) = iprot.readMessageBegin()
+if mtype == TMessageType.EXCEPTION:
+  x = TApplicationException()
+  x.read(iprot)
+  iprot.readMessageEnd()
+  raise x
+result = alter_ischema_result()
+result.read(iprot)
+iprot.readMessageEnd()
+if result.o1 is not None:
+  raise result.o1
+if result.o2 is not None:
+  raise result.o2
+return
+
+  def get_ischema(self, schemaName):
+"""
+Parameters:
+ - schemaName
+"""
+self.send_get_ischema(schemaName)
+return self.recv_get_ischema()
+
+  def send_get_ischema(self, schemaName):
+self._oprot.writeMessageBegin('get_ischema', TMessageType.CALL, 
self._seqid)
+args = get_ischema_args()
+

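The Python client above just mirrors the Thrift service: each new call is a thin send/recv pair over the existing fb303-based connection. Because the Java client in ThriftHiveMetastore.java is generated from the same IDL, the raw call names there should be identical; the sketch below assumes exactly that and nothing more about the generated Java client. The host, port and the bean setters are placeholders/assumptions.

// Hedged sketch: uses the Thrift operation names visible in the Python Iface above
// (create_ischema, get_schema_latest_version); host/port and setters are assumptions.
import org.apache.hadoop.hive.metastore.api.ISchema;
import org.apache.hadoop.hive.metastore.api.SchemaCompatibility;
import org.apache.hadoop.hive.metastore.api.SchemaType;
import org.apache.hadoop.hive.metastore.api.SchemaValidation;
import org.apache.hadoop.hive.metastore.api.SchemaVersion;
import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class RawThriftSchemaCalls {
  public static void main(String[] args) throws Exception {
    TTransport transport = new TSocket("metastore-host", 9083); // placeholder host/port
    transport.open();
    ThriftHiveMetastore.Client client =
        new ThriftHiveMetastore.Client(new TBinaryProtocol(transport));
    ISchema schema = new ISchema();
    schema.setName("events");
    schema.setDbName("default");
    schema.setSchemaType(SchemaType.AVRO);
    schema.setCompatibility(SchemaCompatibility.BACKWARD);
    schema.setValidationLevel(SchemaValidation.LATEST);
    schema.setCanEvolve(true);
    client.create_ischema(schema);
    SchemaVersion latest = client.get_schema_latest_version("events");
    System.out.println("latest = " + latest.getVersion());
    transport.close();
  }
}
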
[28/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/metastore/scripts/upgrade/mysql/hive-schema-3.0.0.mysql.sql
--
diff --git a/metastore/scripts/upgrade/mysql/hive-schema-3.0.0.mysql.sql 
b/metastore/scripts/upgrade/mysql/hive-schema-3.0.0.mysql.sql
deleted file mode 100644
index eb5da4a..000
--- a/metastore/scripts/upgrade/mysql/hive-schema-3.0.0.mysql.sql
+++ /dev/null
@@ -1,965 +0,0 @@
--- MySQL dump 10.13  Distrib 5.5.25, for osx10.6 (i386)
---
--- Host: localhost    Database: test
--- --
--- Server version  5.5.25
-
-/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
-/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
-/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
-/*!40101 SET NAMES utf8 */;
-/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
-/*!40103 SET TIME_ZONE='+00:00' */;
-/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
-/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, 
FOREIGN_KEY_CHECKS=0 */;
-/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
-/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
-
---
--- Table structure for table `BUCKETING_COLS`
---
-
-/*!40101 SET @saved_cs_client = @@character_set_client */;
-/*!40101 SET character_set_client = utf8 */;
-CREATE TABLE IF NOT EXISTS `BUCKETING_COLS` (
-  `SD_ID` bigint(20) NOT NULL,
-  `BUCKET_COL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin 
DEFAULT NULL,
-  `INTEGER_IDX` int(11) NOT NULL,
-  PRIMARY KEY (`SD_ID`,`INTEGER_IDX`),
-  KEY `BUCKETING_COLS_N49` (`SD_ID`),
-  CONSTRAINT `BUCKETING_COLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` 
(`SD_ID`)
-) ENGINE=InnoDB DEFAULT CHARSET=latin1;
-/*!40101 SET character_set_client = @saved_cs_client */;
-
---
--- Table structure for table `CDS`
---
-
-/*!40101 SET @saved_cs_client = @@character_set_client */;
-/*!40101 SET character_set_client = utf8 */;
-CREATE TABLE IF NOT EXISTS `CDS` (
-  `CD_ID` bigint(20) NOT NULL,
-  PRIMARY KEY (`CD_ID`)
-) ENGINE=InnoDB DEFAULT CHARSET=latin1;
-/*!40101 SET character_set_client = @saved_cs_client */;
-
---
--- Table structure for table `COLUMNS_V2`
---
-
-/*!40101 SET @saved_cs_client = @@character_set_client */;
-/*!40101 SET character_set_client = utf8 */;
-CREATE TABLE IF NOT EXISTS `COLUMNS_V2` (
-  `CD_ID` bigint(20) NOT NULL,
-  `COMMENT` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
-  `COLUMN_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
-  `TYPE_NAME` MEDIUMTEXT DEFAULT NULL,
-  `INTEGER_IDX` int(11) NOT NULL,
-  PRIMARY KEY (`CD_ID`,`COLUMN_NAME`),
-  KEY `COLUMNS_V2_N49` (`CD_ID`),
-  CONSTRAINT `COLUMNS_V2_FK1` FOREIGN KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`)
-) ENGINE=InnoDB DEFAULT CHARSET=latin1;
-/*!40101 SET character_set_client = @saved_cs_client */;
-
---
--- Table structure for table `DATABASE_PARAMS`
---
-
-/*!40101 SET @saved_cs_client = @@character_set_client */;
-/*!40101 SET character_set_client = utf8 */;
-CREATE TABLE IF NOT EXISTS `DATABASE_PARAMS` (
-  `DB_ID` bigint(20) NOT NULL,
-  `PARAM_KEY` varchar(180) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
-  `PARAM_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
-  PRIMARY KEY (`DB_ID`,`PARAM_KEY`),
-  KEY `DATABASE_PARAMS_N49` (`DB_ID`),
-  CONSTRAINT `DATABASE_PARAMS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` 
(`DB_ID`)
-) ENGINE=InnoDB DEFAULT CHARSET=latin1;
-/*!40101 SET character_set_client = @saved_cs_client */;
-
---
--- Table structure for table `DBS`
---
-
-/*!40101 SET @saved_cs_client = @@character_set_client */;
-/*!40101 SET character_set_client = utf8 */;
-CREATE TABLE IF NOT EXISTS `DBS` (
-  `DB_ID` bigint(20) NOT NULL,
-  `DESC` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
-  `DB_LOCATION_URI` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin NOT 
NULL,
-  `NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
-  `OWNER_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
-  `OWNER_TYPE` varchar(10) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
-  PRIMARY KEY (`DB_ID`),
-  UNIQUE KEY `UNIQUE_DATABASE` (`NAME`)
-) ENGINE=InnoDB DEFAULT CHARSET=latin1;
-/*!40101 SET character_set_client = @saved_cs_client */;
-
---
--- Table structure for table `DB_PRIVS`
---
-
-/*!40101 SET @saved_cs_client = @@character_set_client */;
-/*!40101 SET character_set_client = utf8 */;
-CREATE TABLE IF NOT EXISTS `DB_PRIVS` (
-  `DB_GRANT_ID` bigint(20) NOT NULL,
-  `CREATE_TIME` int(11) NOT NULL,
-  `DB_ID` bigint(20) DEFAULT NULL,
-  `GRANT_OPTION` smallint(6) NOT NULL,
-  `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
-  `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT 
NULL,
-  `PRINCIPAL_NAME` 

[38/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php
--
diff --git a/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php 
b/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php
index a5b578e..3c2b49d 100644
--- a/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php
+++ b/standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php
@@ -142,6 +142,67 @@ final class EventRequestType {
   );
 }
 
+final class SerdeType {
+  const HIVE = 1;
+  const SCHEMA_REGISTRY = 2;
+  static public $__names = array(
+1 => 'HIVE',
+2 => 'SCHEMA_REGISTRY',
+  );
+}
+
+final class SchemaType {
+  const HIVE = 1;
+  const AVRO = 2;
+  static public $__names = array(
+1 => 'HIVE',
+2 => 'AVRO',
+  );
+}
+
+final class SchemaCompatibility {
+  const NONE = 1;
+  const BACKWARD = 2;
+  const FORWARD = 3;
+  const BOTH = 4;
+  static public $__names = array(
+1 => 'NONE',
+2 => 'BACKWARD',
+3 => 'FORWARD',
+4 => 'BOTH',
+  );
+}
+
+final class SchemaValidation {
+  const LATEST = 1;
+  const ALL = 2;
+  static public $__names = array(
+1 => 'LATEST',
+2 => 'ALL',
+  );
+}
+
+final class SchemaVersionState {
+  const INITIATED = 1;
+  const START_REVIEW = 2;
+  const CHANGES_REQUIRED = 3;
+  const REVIEWED = 4;
+  const ENABLED = 5;
+  const DISABLED = 6;
+  const ARCHIVED = 7;
+  const DELETED = 8;
+  static public $__names = array(
+1 => 'INITIATED',
+2 => 'START_REVIEW',
+3 => 'CHANGES_REQUIRED',
+4 => 'REVIEWED',
+5 => 'ENABLED',
+6 => 'DISABLED',
+7 => 'ARCHIVED',
+8 => 'DELETED',
+  );
+}
+
 final class FunctionType {
   const JAVA = 1;
   static public $__names = array(
@@ -4030,6 +4091,22 @@ class SerDeInfo {
* @var array
*/
   public $parameters = null;
+  /**
+   * @var string
+   */
+  public $description = null;
+  /**
+   * @var string
+   */
+  public $serializerClass = null;
+  /**
+   * @var string
+   */
+  public $deserializerClass = null;
+  /**
+   * @var int
+   */
+  public $serdeType = null;
 
   public function __construct($vals=null) {
 if (!isset(self::$_TSPEC)) {
@@ -4054,6 +4131,22 @@ class SerDeInfo {
 'type' => TType::STRING,
 ),
   ),
+4 => array(
+  'var' => 'description',
+  'type' => TType::STRING,
+  ),
+5 => array(
+  'var' => 'serializerClass',
+  'type' => TType::STRING,
+  ),
+6 => array(
+  'var' => 'deserializerClass',
+  'type' => TType::STRING,
+  ),
+7 => array(
+  'var' => 'serdeType',
+  'type' => TType::I32,
+  ),
 );
 }
 if (is_array($vals)) {
@@ -4066,6 +4159,18 @@ class SerDeInfo {
   if (isset($vals['parameters'])) {
 $this->parameters = $vals['parameters'];
   }
+  if (isset($vals['description'])) {
+$this->description = $vals['description'];
+  }
+  if (isset($vals['serializerClass'])) {
+$this->serializerClass = $vals['serializerClass'];
+  }
+  if (isset($vals['deserializerClass'])) {
+$this->deserializerClass = $vals['deserializerClass'];
+  }
+  if (isset($vals['serdeType'])) {
+$this->serdeType = $vals['serdeType'];
+  }
 }
   }
 
@@ -4122,6 +4227,34 @@ class SerDeInfo {
 $xfer += $input->skip($ftype);
   }
   break;
+case 4:
+  if ($ftype == TType::STRING) {
+$xfer += $input->readString($this->description);
+  } else {
+$xfer += $input->skip($ftype);
+  }
+  break;
+case 5:
+  if ($ftype == TType::STRING) {
+$xfer += $input->readString($this->serializerClass);
+  } else {
+$xfer += $input->skip($ftype);
+  }
+  break;
+case 6:
+  if ($ftype == TType::STRING) {
+$xfer += $input->readString($this->deserializerClass);
+  } else {
+$xfer += $input->skip($ftype);
+  }
+  break;
+case 7:
+  if ($ftype == TType::I32) {
+$xfer += $input->readI32($this->serdeType);
+  } else {
+$xfer += $input->skip($ftype);
+  }
+  break;
 default:
   $xfer += $input->skip($ftype);
   break;
@@ -4163,6 +4296,26 @@ class SerDeInfo {
   }
   $xfer += $output->writeFieldEnd();
 }
+if ($this->description !== null) {
+  $xfer += $output->writeFieldBegin('description', TType::STRING, 4);
+  $xfer += $output->writeString($this->description);
+  $xfer += $output->writeFieldEnd();
+}
+if ($this->serializerClass !== null) {
+  $xfer += $output->writeFieldBegin('serializerClass', TType::STRING, 5);
+  $xfer += $output->writeString($this->serializerClass);

[18/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/tools/TestMetastoreSchemaTool.java
--
diff --git 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/tools/TestMetastoreSchemaTool.java
 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/tools/TestMetastoreSchemaTool.java
new file mode 100644
index 000..8b07e93
--- /dev/null
+++ 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/tools/TestMetastoreSchemaTool.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore.tools;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mock;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Arrays;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TestMetastoreSchemaTool {
+
+  private String scriptFile = System.getProperty("java.io.tmpdir") + 
File.separator + "someScript.sql";
+  @Mock
+  private Configuration conf;
+  private MetastoreSchemaTool.CommandBuilder builder;
+  private String pasword = "reallySimplePassword";
+
+  @Before
+  public void setup() throws IOException {
+conf = MetastoreConf.newMetastoreConf();
+File file = new File(scriptFile);
+if (!file.exists()) {
+  file.createNewFile();
+}
+builder = new MetastoreSchemaTool.CommandBuilder(conf, null, null, 
"testUser", pasword, scriptFile);
+  }
+
+  @After
+  public void globalAssert() throws IOException {
+new File(scriptFile).delete();
+  }
+
+  @Test
+  public void shouldReturnStrippedPassword() throws IOException {
+assertFalse(builder.buildToLog().contains(pasword));
+  }
+
+  @Test
+  public void shouldReturnActualPassword() throws IOException {
+String[] strings = builder.buildToRun();
+assertTrue(Arrays.asList(strings).contains(pasword));
+  }
+}
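
The two assertions above pin down the contract of the command builder: the form written to the log (buildToLog) must not contain the password, while the form actually executed (buildToRun) must keep it. A minimal sketch of that kind of masking, assuming a plain string replacement over the argument list (illustrative only, with made-up flag names; not the actual MetastoreSchemaTool.CommandBuilder implementation):

import java.util.Arrays;
import java.util.stream.Collectors;

public class PasswordMaskingSketch {
  // Replace every occurrence of the password in the loggable command line with a
  // placeholder; the runnable form would be left untouched elsewhere.
  static String maskForLog(String[] command, String password) {
    return Arrays.stream(command)
        .map(arg -> arg.replace(password, "[password stripped]"))
        .collect(Collectors.joining(" "));
  }

  public static void main(String[] args) {
    // Hypothetical command line; the flags are invented for the example.
    String[] cmd = {"schematool", "--user", "testUser", "--password", "reallySimplePassword"};
    System.out.println(maskForLog(cmd, "reallySimplePassword"));
  }
}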

http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/tools/TestSchemaToolForMetastore.java
--
diff --git 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/tools/TestSchemaToolForMetastore.java
 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/tools/TestSchemaToolForMetastore.java
new file mode 100644
index 000..c52729a
--- /dev/null
+++ 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/tools/TestSchemaToolForMetastore.java
@@ -0,0 +1,467 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore.tools;
+
+import java.io.BufferedWriter;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.PrintStream;
+import java.net.URI;
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.Random;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.HiveMetaException;
+import 

[12/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/infer_bucket_sort_grouping_operators.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/infer_bucket_sort_grouping_operators.q.out 
b/ql/src/test/results/clientpositive/infer_bucket_sort_grouping_operators.q.out
index 5f1d264..7224938 100644
--- 
a/ql/src/test/results/clientpositive/infer_bucket_sort_grouping_operators.q.out
+++ 
b/ql/src/test/results/clientpositive/infer_bucket_sort_grouping_operators.q.out
@@ -39,20 +39,20 @@ STAGE PLANS:
   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
Column stats: NONE
   Group By Operator
 aggregations: count()
-keys: key (type: string), value (type: string), 0 (type: int)
+keys: key (type: string), value (type: string), 0 (type: 
bigint)
 mode: hash
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 1500 Data size: 15936 Basic stats: 
COMPLETE Column stats: NONE
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: bigint)
   Statistics: Num rows: 1500 Data size: 15936 Basic stats: 
COMPLETE Column stats: NONE
   value expressions: _col3 (type: bigint)
   Reduce Operator Tree:
 Group By Operator
   aggregations: count(VALUE._col0)
-  keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 
(type: int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 
(type: bigint)
   mode: mergepartial
   outputColumnNames: _col0, _col1, _col3
   Statistics: Num rows: 750 Data size: 7968 Basic stats: COMPLETE 
Column stats: NONE
@@ -1518,20 +1518,20 @@ STAGE PLANS:
   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
Column stats: NONE
   Group By Operator
 aggregations: count()
-keys: key (type: string), value (type: string), 0 (type: int)
+keys: key (type: string), value (type: string), 0 (type: 
bigint)
 mode: hash
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 2000 Data size: 21248 Basic stats: 
COMPLETE Column stats: NONE
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: bigint)
   Statistics: Num rows: 2000 Data size: 21248 Basic stats: 
COMPLETE Column stats: NONE
   value expressions: _col3 (type: bigint)
   Reduce Operator Tree:
 Group By Operator
   aggregations: count(VALUE._col0)
-  keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 
(type: int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 
(type: bigint)
   mode: mergepartial
   outputColumnNames: _col0, _col1, _col3
   Statistics: Num rows: 1000 Data size: 10624 Basic stats: COMPLETE 
Column stats: NONE
@@ -1743,20 +1743,20 @@ STAGE PLANS:
   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
Column stats: NONE
   Group By Operator
 aggregations: count()
-keys: key (type: string), value (type: string), 0 (type: int)
+keys: key (type: string), value (type: string), 0 (type: 
bigint)
 mode: hash
 outputColumnNames: _col0, _col1, _col2, _col3
 Statistics: Num rows: 1000 Data size: 10624 Basic stats: 
COMPLETE Column stats: NONE
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 
(type: string), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: string), _col1 
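
The visible change in these plans is the synthetic grouping-set key switching from int 0 to bigint 0. Per the commit title, the grouping-set limit is extended from int to long: treating the grouping ID as a bit mask over the grouping columns, a 32-bit value runs out at 32 columns while a 64-bit value covers 64. The snippet below only illustrates that bit-mask arithmetic; the class name and bit convention are made up here, and Hive's actual grouping__id encoding is whatever its GROUP BY operators implement.

public class GroupingIdSketch {
  // One bit per grouping column: bit i set means column i is rolled up (absent)
  // in this grouping set. An int caps the mask at 32 columns, a long at 64.
  static long groupingId(boolean[] rolledUp) {
    long id = 0L;
    for (int i = 0; i < rolledUp.length; i++) {
      if (rolledUp[i]) {
        id |= 1L << i;
      }
    }
    return id;
  }

  public static void main(String[] args) {
    // Three grouping columns, the middle one rolled up in this grouping set.
    System.out.println(groupingId(new boolean[] {false, true, false})); // prints 2
  }
}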

[19/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/sql/postgres/hive-schema-3.0.0.postgres.sql
--
diff --git 
a/standalone-metastore/src/main/sql/postgres/hive-schema-3.0.0.postgres.sql 
b/standalone-metastore/src/main/sql/postgres/hive-schema-3.0.0.postgres.sql
new file mode 100644
index 000..9d63056
--- /dev/null
+++ b/standalone-metastore/src/main/sql/postgres/hive-schema-3.0.0.postgres.sql
@@ -0,0 +1,1735 @@
+--
+-- PostgreSQL database dump
+--
+
+SET statement_timeout = 0;
+SET client_encoding = 'UTF8';
+SET standard_conforming_strings = off;
+SET check_function_bodies = false;
+SET client_min_messages = warning;
+SET escape_string_warning = off;
+
+SET search_path = public, pg_catalog;
+
+SET default_tablespace = '';
+
+SET default_with_oids = false;
+
+--
+-- Name: BUCKETING_COLS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "BUCKETING_COLS" (
+"SD_ID" bigint NOT NULL,
+"BUCKET_COL_NAME" character varying(256) DEFAULT NULL::character varying,
+"INTEGER_IDX" bigint NOT NULL
+);
+
+
+--
+-- Name: CDS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "CDS" (
+"CD_ID" bigint NOT NULL
+);
+
+
+--
+-- Name: COLUMNS_V2; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "COLUMNS_V2" (
+"CD_ID" bigint NOT NULL,
+"COMMENT" character varying(4000),
+"COLUMN_NAME" character varying(767) NOT NULL,
+"TYPE_NAME" text,
+"INTEGER_IDX" integer NOT NULL
+);
+
+
+--
+-- Name: DATABASE_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "DATABASE_PARAMS" (
+"DB_ID" bigint NOT NULL,
+"PARAM_KEY" character varying(180) NOT NULL,
+"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: DBS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DBS" (
+"DB_ID" bigint NOT NULL,
+"DESC" character varying(4000) DEFAULT NULL::character varying,
+"DB_LOCATION_URI" character varying(4000) NOT NULL,
+"NAME" character varying(128) DEFAULT NULL::character varying,
+"OWNER_NAME" character varying(128) DEFAULT NULL::character varying,
+"OWNER_TYPE" character varying(10) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: DB_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "DB_PRIVS" (
+"DB_GRANT_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"DB_ID" bigint,
+"GRANT_OPTION" smallint NOT NULL,
+"GRANTOR" character varying(128) DEFAULT NULL::character varying,
+"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
+"DB_PRIV" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: GLOBAL_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "GLOBAL_PRIVS" (
+"USER_GRANT_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"GRANT_OPTION" smallint NOT NULL,
+"GRANTOR" character varying(128) DEFAULT NULL::character varying,
+"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
+"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
+"USER_PRIV" character varying(128) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: IDXS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "IDXS" (
+"INDEX_ID" bigint NOT NULL,
+"CREATE_TIME" bigint NOT NULL,
+"DEFERRED_REBUILD" boolean NOT NULL,
+"INDEX_HANDLER_CLASS" character varying(4000) DEFAULT NULL::character 
varying,
+"INDEX_NAME" character varying(128) DEFAULT NULL::character varying,
+"INDEX_TBL_ID" bigint,
+"LAST_ACCESS_TIME" bigint NOT NULL,
+"ORIG_TBL_ID" bigint,
+"SD_ID" bigint
+);
+
+
+--
+-- Name: INDEX_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "INDEX_PARAMS" (
+"INDEX_ID" bigint NOT NULL,
+"PARAM_KEY" character varying(256) NOT NULL,
+"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: NUCLEUS_TABLES; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
+--
+
+CREATE TABLE "NUCLEUS_TABLES" (
+"CLASS_NAME" character varying(128) NOT NULL,
+"TABLE_NAME" character varying(128) NOT NULL,
+"TYPE" character varying(4) NOT NULL,
+"OWNER" character varying(2) NOT NULL,
+"VERSION" character varying(20) NOT NULL,
+"INTERFACE_NAME" character varying(255) DEFAULT NULL::character varying
+);
+
+
+--
+-- Name: PARTITIONS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
+--
+
+CREATE TABLE "PARTITIONS" (
+

[29/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, 
reviewed by Thejas Nair)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/47cac2d0
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/47cac2d0
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/47cac2d0

Branch: refs/heads/standalone-metastore
Commit: 47cac2d0e1ffd9bff499c3455337466d7d3352a9
Parents: 9fdb601
Author: Alan Gates 
Authored: Sun Feb 11 19:50:15 2018 -0800
Committer: Alan Gates 
Committed: Sun Feb 11 19:50:15 2018 -0800

--
 .../org/apache/hive/beeline/HiveSchemaTool.java |5 +-
 binary-package-licenses/README  |1 +
 .../org/apache/hive/beeline/TestSchemaTool.java |   16 +-
 .../upgrade/mssql/hive-schema-3.0.0.mssql.sql   | 1135 
 .../mssql/upgrade-2.3.0-to-3.0.0.mssql.sql  |   13 -
 .../upgrade/mysql/hive-schema-3.0.0.mysql.sql   |  965 --
 .../mysql/hive-txn-schema-3.0.0.mysql.sql   |  138 --
 .../mysql/upgrade-2.3.0-to-3.0.0.mysql.sql  |   13 -
 .../upgrade/oracle/hive-schema-3.0.0.oracle.sql |  925 --
 .../oracle/hive-txn-schema-3.0.0.oracle.sql |  136 --
 .../oracle/upgrade-2.3.0-to-3.0.0.oracle.sql|   13 -
 .../postgres/hive-schema-3.0.0.postgres.sql | 1619 
 .../postgres/hive-txn-schema-3.0.0.postgres.sql |  136 --
 .../upgrade-2.3.0-to-3.0.0.postgres.sql |   14 -
 packaging/src/main/assembly/bin.xml |8 +
 packaging/src/main/assembly/src.xml |3 +
 standalone-metastore/DEV-README |   40 +
 .../binary-package-licenses/NOTICE  |4 +
 .../com.google.protobuf-LICENSE |   42 +
 .../javax.transaction.transaction-api-LICENSE   |  128 ++
 .../binary-package-licenses/javolution-LICENSE  |   25 +
 .../binary-package-licenses/jline-LICENSE   |   32 +
 .../binary-package-licenses/org.antlr-LICENSE   |   27 +
 .../binary-package-licenses/sqlline-LICENSE |   33 +
 standalone-metastore/pom.xml|  124 +-
 standalone-metastore/src/assembly/bin.xml   |  136 ++
 standalone-metastore/src/assembly/src.xml   |   53 +
 .../hive/metastore/IMetaStoreSchemaInfo.java|7 +
 .../hive/metastore/MetaStoreSchemaInfo.java |   33 +-
 .../metastore/conf/ConfTemplatePrinter.java |  150 ++
 .../hive/metastore/conf/MetastoreConf.java  |   45 +-
 .../hive/metastore/tools/HiveSchemaHelper.java  |   78 +-
 .../metastore/tools/MetastoreSchemaTool.java| 1308 +
 .../hadoop/hive/metastore/tools/SmokeTest.java  |  103 ++
 .../hadoop/hive/metastore/utils/LogUtils.java   |2 +-
 .../main/resources/metastore-log4j2.properties  |   71 +
 .../src/main/resources/metastore-site.xml   |   34 +
 standalone-metastore/src/main/scripts/base  |  231 +++
 .../src/main/scripts/ext/metastore.sh   |   41 +
 .../src/main/scripts/ext/schemaTool.sh  |   33 +
 .../src/main/scripts/ext/smokeTest.sh   |   33 +
 .../src/main/scripts/metastore-config.sh|   69 +
 .../src/main/scripts/schematool |   21 +
 .../src/main/scripts/start-metastore|   22 +
 .../main/sql/derby/hive-schema-1.2.0.derby.sql  |  405 
 .../main/sql/derby/hive-schema-3.0.0.derby.sql  |  531 ++
 .../sql/derby/upgrade-1.2.0-to-2.0.0.derby.sql  |   62 +
 .../sql/derby/upgrade-2.0.0-to-2.1.0.derby.sql  |   22 +
 .../sql/derby/upgrade-2.1.0-to-2.2.0.derby.sql  |   59 +
 .../sql/derby/upgrade-2.2.0-to-2.3.0.derby.sql  |5 +
 .../sql/derby/upgrade-2.3.0-to-3.0.0.derby.sql  |   96 +
 .../src/main/sql/derby/upgrade.order.derby  |   16 +
 .../src/main/sql/mssql/create-user.mssql.sql|5 +
 .../main/sql/mssql/hive-schema-1.2.0.mssql.sql  |  947 ++
 .../main/sql/mssql/hive-schema-3.0.0.mssql.sql  | 1135 
 .../sql/mssql/upgrade-1.2.0-to-2.0.0.mssql.sql  |   73 +
 .../sql/mssql/upgrade-2.0.0-to-2.1.0.mssql.sql  |   39 +
 .../sql/mssql/upgrade-2.1.0-to-2.2.0.mssql.sql  |   43 +
 .../sql/mssql/upgrade-2.2.0-to-2.3.0.mssql.sql  |7 +
 .../sql/mssql/upgrade-2.3.0-to-3.0.0.mssql.sql  |  150 ++
 .../src/main/sql/mssql/upgrade.order.mssql  |   10 +
 .../src/main/sql/mysql/create-user.mysql.sql|8 +
 .../main/sql/mysql/hive-schema-1.2.0.mysql.sql  |  910 +
 .../main/sql/mysql/hive-schema-3.0.0.mysql.sql  | 1082 +++
 .../sql/mysql/upgrade-1.2.0-to-2.0.0.mysql.sql  |   75 +
 .../sql/mysql/upgrade-2.0.0-to-2.1.0.mysql.sql  |   42 +
 .../sql/mysql/upgrade-2.1.0-to-2.2.0.mysql.sql  |   43 +
 .../sql/mysql/upgrade-2.2.0-to-2.3.0.mysql.sql  |8 +
 .../sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql  |  135 ++
 .../src/main/sql/mysql/upgrade.order.mysql  |   16 +
 .../src/main/sql/oracle/create-user.oracle.sql  |3 +
 .../sql/oracle/hive-schema-1.2.0.oracle.sql | 

[30/50] [abbrv] hive git commit: PPD: Handle FLOAT boxing differently for single/double precision constants (Gopal V, reviewed by Prasanth Jayachandran)

2018-02-12 Thread gates
PPD: Handle FLOAT boxing differently for single/double precision constants 
(Gopal V, reviewed by Prasanth Jayachandran)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/23388462
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/23388462
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/23388462

Branch: refs/heads/standalone-metastore
Commit: 233884620af67e6af72b60629f799a69f5823eb2
Parents: 47cac2d
Author: Gopal V 
Authored: Sun Feb 11 23:02:46 2018 -0800
Committer: Gopal V 
Committed: Sun Feb 11 23:02:46 2018 -0800

--
 .../hive/ql/io/sarg/ConvertAstToSearchArg.java  | 107 -
 .../test/queries/clientpositive/orc_ppd_basic.q |  17 +++
 .../clientpositive/llap/orc_ppd_basic.q.out | 153 +++
 3 files changed, 240 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/23388462/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/ConvertAstToSearchArg.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/ConvertAstToSearchArg.java 
b/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/ConvertAstToSearchArg.java
index 51b1ac6..27fe828 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/ConvertAstToSearchArg.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/ConvertAstToSearchArg.java
@@ -23,8 +23,6 @@ import java.sql.Timestamp;
 import java.util.List;
 import java.util.concurrent.ExecutionException;
 
-import com.google.common.cache.Cache;
-import com.google.common.cache.CacheBuilder;
 import org.apache.commons.codec.binary.Base64;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.common.type.HiveChar;
@@ -60,12 +58,35 @@ import org.slf4j.LoggerFactory;
 
 import com.esotericsoftware.kryo.Kryo;
 import com.esotericsoftware.kryo.io.Input;
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
 
 public class ConvertAstToSearchArg {
   private static final Logger LOG = 
LoggerFactory.getLogger(ConvertAstToSearchArg.class);
   private final SearchArgument.Builder builder;
   private final Configuration conf;
 
+  /*
+   * Create a new type for handling precision conversions from Decimal -> 
Double/Float
+   * 
+   * The type is only relevant to boxLiteral and all other functions treat it 
identically.
+   */
+  private static enum BoxType {
+LONG(PredicateLeaf.Type.LONG),  // all of the integer types
+FLOAT(PredicateLeaf.Type.FLOAT),   // float
+DOUBLE(PredicateLeaf.Type.FLOAT),   // double
+STRING(PredicateLeaf.Type.STRING),  // string, char, varchar
+DATE(PredicateLeaf.Type.DATE),
+DECIMAL(PredicateLeaf.Type.DECIMAL),
+TIMESTAMP(PredicateLeaf.Type.TIMESTAMP),
+BOOLEAN(PredicateLeaf.Type.BOOLEAN);
+
+public final PredicateLeaf.Type type;
+BoxType(PredicateLeaf.Type type) {
+  this.type = type;
+}
+  }
+
   /**
* Builds the expression and leaf list from the original predicate.
* @param expression the expression to translate.
@@ -89,7 +110,7 @@ public class ConvertAstToSearchArg {
* @param expr the expression to get the type of
* @return int, string, or float or null if we don't know the type
*/
-  private static PredicateLeaf.Type getType(ExprNodeDesc expr) {
+  private static BoxType getType(ExprNodeDesc expr) {
 TypeInfo type = expr.getTypeInfo();
 if (type.getCategory() == ObjectInspector.Category.PRIMITIVE) {
   switch (((PrimitiveTypeInfo) type).getPrimitiveCategory()) {
@@ -97,22 +118,23 @@ public class ConvertAstToSearchArg {
 case SHORT:
 case INT:
 case LONG:
-  return PredicateLeaf.Type.LONG;
+  return BoxType.LONG;
 case CHAR:
 case VARCHAR:
 case STRING:
-  return PredicateLeaf.Type.STRING;
+  return BoxType.STRING;
 case FLOAT:
+  return BoxType.FLOAT;
 case DOUBLE:
-  return PredicateLeaf.Type.FLOAT;
+  return BoxType.DOUBLE;
 case DATE:
-  return PredicateLeaf.Type.DATE;
+  return BoxType.DATE;
 case TIMESTAMP:
-  return PredicateLeaf.Type.TIMESTAMP;
+  return BoxType.TIMESTAMP;
 case DECIMAL:
-  return PredicateLeaf.Type.DECIMAL;
+  return BoxType.DECIMAL;
 case BOOLEAN:
-  return PredicateLeaf.Type.BOOLEAN;
+  return BoxType.BOOLEAN;
 default:
   }
 }
@@ -140,12 +162,12 @@ public class ConvertAstToSearchArg {
   }
 
   private static Object boxLiteral(ExprNodeConstantDesc constantDesc,
-   PredicateLeaf.Type type) {
+   BoxType boxType) {
 Object lit = 
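
The new BoxType enum keeps FLOAT and DOUBLE apart even though both still map to PredicateLeaf.Type.FLOAT, and the header comment notes that only boxLiteral cares about the distinction. The underlying issue is ordinary floating-point widening: a constant written in single precision does not land on the double with the same digits, so boxing it as if it were a double can make a pushed-down predicate miss. A short illustration of the precision gap (plain Java, not Hive code):

public class FloatWideningSketch {
  public static void main(String[] args) {
    float singlePrecision = 0.2f;
    double doublePrecision = 0.2d;
    // Widening the float does not reproduce the double literal with the same digits.
    System.out.println((double) singlePrecision);                      // 0.20000000298023224
    System.out.println((double) singlePrecision == doublePrecision);   // false
  }
}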

[46/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
--
diff --git a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
index bfa17eb..83108d8 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
@@ -203,6 +203,20 @@ class ThriftHiveMetastoreIf : virtual public  
::facebook::fb303::FacebookService
   virtual void create_or_update_wm_mapping(WMCreateOrUpdateMappingResponse& 
_return, const WMCreateOrUpdateMappingRequest& request) = 0;
   virtual void drop_wm_mapping(WMDropMappingResponse& _return, const 
WMDropMappingRequest& request) = 0;
   virtual void 
create_or_drop_wm_trigger_to_pool_mapping(WMCreateOrDropTriggerToPoolMappingResponse&
 _return, const WMCreateOrDropTriggerToPoolMappingRequest& request) = 0;
+  virtual void create_ischema(const ISchema& schema) = 0;
+  virtual void alter_ischema(const std::string& schemaName, const ISchema& 
newSchema) = 0;
+  virtual void get_ischema(ISchema& _return, const std::string& schemaName) = 
0;
+  virtual void drop_ischema(const std::string& schemaName) = 0;
+  virtual void add_schema_version(const SchemaVersion& schemaVersion) = 0;
+  virtual void get_schema_version(SchemaVersion& _return, const std::string& 
schemaName, const int32_t version) = 0;
+  virtual void get_schema_latest_version(SchemaVersion& _return, const 
std::string& schemaName) = 0;
+  virtual void get_schema_all_versions(std::vector<SchemaVersion> & _return, 
const std::string& schemaName) = 0;
+  virtual void drop_schema_version(const std::string& schemaName, const 
int32_t version) = 0;
+  virtual void get_schemas_by_cols(FindSchemasByColsResp& _return, const 
FindSchemasByColsRqst& rqst) = 0;
+  virtual void map_schema_version_to_serde(const std::string& schemaName, 
const int32_t version, const std::string& serdeName) = 0;
+  virtual void set_schema_version_state(const std::string& schemaName, const 
int32_t version, const SchemaVersionState::type state) = 0;
+  virtual void add_serde(const SerDeInfo& serde) = 0;
+  virtual void get_serde(SerDeInfo& _return, const std::string& serdeName) = 0;
 };
 
 class ThriftHiveMetastoreIfFactory : virtual public  
::facebook::fb303::FacebookServiceIfFactory {
@@ -803,6 +817,48 @@ class ThriftHiveMetastoreNull : virtual public 
ThriftHiveMetastoreIf , virtual p
   void 
create_or_drop_wm_trigger_to_pool_mapping(WMCreateOrDropTriggerToPoolMappingResponse&
 /* _return */, const WMCreateOrDropTriggerToPoolMappingRequest& /* request */) 
{
 return;
   }
+  void create_ischema(const ISchema& /* schema */) {
+return;
+  }
+  void alter_ischema(const std::string& /* schemaName */, const ISchema& /* 
newSchema */) {
+return;
+  }
+  void get_ischema(ISchema& /* _return */, const std::string& /* schemaName 
*/) {
+return;
+  }
+  void drop_ischema(const std::string& /* schemaName */) {
+return;
+  }
+  void add_schema_version(const SchemaVersion& /* schemaVersion */) {
+return;
+  }
+  void get_schema_version(SchemaVersion& /* _return */, const std::string& /* 
schemaName */, const int32_t /* version */) {
+return;
+  }
+  void get_schema_latest_version(SchemaVersion& /* _return */, const 
std::string& /* schemaName */) {
+return;
+  }
+  void get_schema_all_versions(std::vector<SchemaVersion> & /* _return */, 
const std::string& /* schemaName */) {
+return;
+  }
+  void drop_schema_version(const std::string& /* schemaName */, const int32_t 
/* version */) {
+return;
+  }
+  void get_schemas_by_cols(FindSchemasByColsResp& /* _return */, const 
FindSchemasByColsRqst& /* rqst */) {
+return;
+  }
+  void map_schema_version_to_serde(const std::string& /* schemaName */, const 
int32_t /* version */, const std::string& /* serdeName */) {
+return;
+  }
+  void set_schema_version_state(const std::string& /* schemaName */, const 
int32_t /* version */, const SchemaVersionState::type /* state */) {
+return;
+  }
+  void add_serde(const SerDeInfo& /* serde */) {
+return;
+  }
+  void get_serde(SerDeInfo& /* _return */, const std::string& /* serdeName */) 
{
+return;
+  }
 };
 
 typedef struct _ThriftHiveMetastore_getMetaConf_args__isset {
@@ -23160,228 +23216,1917 @@ class 
ThriftHiveMetastore_create_or_drop_wm_trigger_to_pool_mapping_presult {
 
 };
 
-class ThriftHiveMetastoreClient : virtual public ThriftHiveMetastoreIf, public 
 ::facebook::fb303::FacebookServiceClient {
+typedef struct _ThriftHiveMetastore_create_ischema_args__isset {
+  _ThriftHiveMetastore_create_ischema_args__isset() : schema(false) {}
+  bool schema :1;
+} _ThriftHiveMetastore_create_ischema_args__isset;
+
+class ThriftHiveMetastore_create_ischema_args {
  public:
-  ThriftHiveMetastoreClient(boost::shared_ptr< 

[45/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
index cf9a171..b6f5995 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
+++ 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
@@ -927,6 +927,76 @@ class ThriftHiveMetastoreHandler : virtual public 
ThriftHiveMetastoreIf {
 printf("create_or_drop_wm_trigger_to_pool_mapping\n");
   }
 
+  void create_ischema(const ISchema& schema) {
+// Your implementation goes here
+printf("create_ischema\n");
+  }
+
+  void alter_ischema(const std::string& schemaName, const ISchema& newSchema) {
+// Your implementation goes here
+printf("alter_ischema\n");
+  }
+
+  void get_ischema(ISchema& _return, const std::string& schemaName) {
+// Your implementation goes here
+printf("get_ischema\n");
+  }
+
+  void drop_ischema(const std::string& schemaName) {
+// Your implementation goes here
+printf("drop_ischema\n");
+  }
+
+  void add_schema_version(const SchemaVersion& schemaVersion) {
+// Your implementation goes here
+printf("add_schema_version\n");
+  }
+
+  void get_schema_version(SchemaVersion& _return, const std::string& 
schemaName, const int32_t version) {
+// Your implementation goes here
+printf("get_schema_version\n");
+  }
+
+  void get_schema_latest_version(SchemaVersion& _return, const std::string& 
schemaName) {
+// Your implementation goes here
+printf("get_schema_latest_version\n");
+  }
+
+  void get_schema_all_versions(std::vector<SchemaVersion> & _return, const 
std::string& schemaName) {
+// Your implementation goes here
+printf("get_schema_all_versions\n");
+  }
+
+  void drop_schema_version(const std::string& schemaName, const int32_t 
version) {
+// Your implementation goes here
+printf("drop_schema_version\n");
+  }
+
+  void get_schemas_by_cols(FindSchemasByColsResp& _return, const 
FindSchemasByColsRqst& rqst) {
+// Your implementation goes here
+printf("get_schemas_by_cols\n");
+  }
+
+  void map_schema_version_to_serde(const std::string& schemaName, const 
int32_t version, const std::string& serdeName) {
+// Your implementation goes here
+printf("map_schema_version_to_serde\n");
+  }
+
+  void set_schema_version_state(const std::string& schemaName, const int32_t 
version, const SchemaVersionState::type state) {
+// Your implementation goes here
+printf("set_schema_version_state\n");
+  }
+
+  void add_serde(const SerDeInfo& serde) {
+// Your implementation goes here
+printf("add_serde\n");
+  }
+
+  void get_serde(SerDeInfo& _return, const std::string& serdeName) {
+// Your implementation goes here
+printf("get_serde\n");
+  }
+
 };
 
 int main(int argc, char **argv) {



[03/50] [abbrv] hive git commit: HIVE-17835: HS2 Logs print unnecessary stack trace when HoS query is cancelled (Sahil Takiar, reviewed by Chao Sun)

2018-02-12 Thread gates
HIVE-17835: HS2 Logs print unnecessary stack trace when HoS query is cancelled 
(Sahil Takiar, reviewed by Chao Sun)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/e33edd96
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/e33edd96
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/e33edd96

Branch: refs/heads/standalone-metastore
Commit: e33edd9649ce05495396a2183b1be3d1a79fd0d3
Parents: 717ef18
Author: Sahil Takiar 
Authored: Fri Feb 9 14:49:38 2018 -0800
Committer: Sahil Takiar 
Committed: Fri Feb 9 14:49:38 2018 -0800

--
 .../hadoop/hive/ql/exec/spark/SparkTask.java |  4 +++-
 .../exec/spark/status/RemoteSparkJobMonitor.java | 19 +++
 2 files changed, 14 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/e33edd96/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java
index c6e17b5..62d 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java
@@ -117,6 +117,7 @@ public class SparkTask extends Task<SparkWork> {
   perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.SPARK_SUBMIT_JOB);
 
   if (driverContext.isShutdown()) {
+LOG.warn("Killing Spark job");
 killJob();
 throw new HiveException("Operation is cancelled.");
   }
@@ -337,7 +338,7 @@ public class SparkTask extends Task<SparkWork> {
   try {
 jobRef.cancelJob();
   } catch (Exception e) {
-LOG.warn("failed to kill job", e);
+LOG.warn("Failed to kill Spark job", e);
   }
 }
   }
@@ -424,6 +425,7 @@ public class SparkTask extends Task<SparkWork> {
   if ((error instanceof InterruptedException) ||
   (error instanceof HiveException &&
   error.getCause() instanceof InterruptedException)) {
+LOG.info("Killing Spark job since query was interrupted");
 killJob();
   }
   HiveException he;

http://git-wip-us.apache.org/repos/asf/hive/blob/e33edd96/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
index 6c7aca7..4c4ce55 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
@@ -184,16 +184,19 @@ public class RemoteSparkJobMonitor extends 
SparkJobMonitor {
 }
   } catch (Exception e) {
 Exception finalException = e;
-if (e instanceof InterruptedException) {
+if (e instanceof InterruptedException ||
+(e instanceof HiveException && e.getCause() instanceof 
InterruptedException)) {
   finalException = new HiveException(e, 
ErrorMsg.SPARK_JOB_INTERRUPTED);
+  LOG.warn("Interrupted while monitoring the Hive on Spark 
application, exiting");
+} else {
+  String msg = " with exception '" + Utilities.getNameMessage(e) + "'";
+  msg = "Failed to monitor Job[" + sparkJobStatus.getJobId() + "]" + 
msg;
+
+  // Has to use full name to make sure it does not conflict with
+  // org.apache.commons.lang.StringUtils
+  LOG.error(msg, e);
+  console.printError(msg, "\n" + 
org.apache.hadoop.util.StringUtils.stringifyException(e));
 }
-String msg = " with exception '" + Utilities.getNameMessage(e) + "'";
-msg = "Failed to monitor Job[" + sparkJobStatus.getJobId() + "]" + msg;
-
-// Has to use full name to make sure it does not conflict with
-// org.apache.commons.lang.StringUtils
-LOG.error(msg, e);
-console.printError(msg, "\n" + 
org.apache.hadoop.util.StringUtils.stringifyException(e));
 rc = 1;
 done = true;
 sparkJobStatus.setError(finalException);
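
Both hunks apply the same pattern: when the failure is really a cancellation, i.e. the exception is an InterruptedException or a HiveException whose immediate cause is one, log a short message and skip the stack trace; everything else is still logged as a full error. A generic sketch of that pattern (illustrative only, not the Hive classes themselves):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CancellationAwareLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(CancellationAwareLoggingSketch.class);

  // Cancellation shows up either as an InterruptedException itself or as a wrapper
  // whose immediate cause is one; only genuine failures deserve the stack trace.
  static void logFailure(Exception e) {
    boolean cancelled = e instanceof InterruptedException
        || e.getCause() instanceof InterruptedException;
    if (cancelled) {
      LOG.warn("Job was cancelled, exiting monitor");  // deliberately no stack trace
    } else {
      LOG.error("Job failed", e);                      // real failure: keep the trace
    }
  }
}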



[06/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/llap/vector_groupby_rollup1.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/vector_groupby_rollup1.q.out 
b/ql/src/test/results/clientpositive/llap/vector_groupby_rollup1.q.out
index d1263cd..717c218 100644
--- a/ql/src/test/results/clientpositive/llap/vector_groupby_rollup1.q.out
+++ b/ql/src/test/results/clientpositive/llap/vector_groupby_rollup1.q.out
@@ -70,18 +70,18 @@ STAGE PLANS:
   aggregators: VectorUDAFCountStar(*) -> bigint
   className: VectorGroupByOperator
   groupByMode: HASH
-  keyExpressions: col 0:string, col 1:string, 
ConstantVectorExpression(val 0) -> 3:int
+  keyExpressions: col 0:string, col 1:string, 
ConstantVectorExpression(val 0) -> 3:bigint
   native: false
   vectorProcessingMode: HASH
   projectedOutputColumnNums: [0]
-  keys: key (type: string), val (type: string), 0 (type: 
int)
+  keys: key (type: string), val (type: string), 0 (type: 
bigint)
   mode: hash
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 18 Data size: 6624 Basic stats: 
COMPLETE Column stats: NONE
   Reduce Output Operator
-key expressions: _col0 (type: string), _col1 (type: 
string), _col2 (type: int)
+key expressions: _col0 (type: string), _col1 (type: 
string), _col2 (type: bigint)
 sort order: +++
-Map-reduce partition columns: _col0 (type: string), 
_col1 (type: string), _col2 (type: int)
+Map-reduce partition columns: _col0 (type: string), 
_col1 (type: string), _col2 (type: bigint)
 Reduce Sink Vectorization:
 className: VectorReduceSinkMultiKeyOperator
 keyColumnNums: [0, 1, 2]
@@ -119,7 +119,7 @@ STAGE PLANS:
 vectorized: true
 rowBatchContext:
 dataColumnCount: 4
-dataColumns: KEY._col0:string, KEY._col1:string, 
KEY._col2:int, VALUE._col0:bigint
+dataColumns: KEY._col0:string, KEY._col1:string, 
KEY._col2:bigint, VALUE._col0:bigint
 partitionColumnCount: 0
 scratchColumnTypeNames: []
 Reduce Operator Tree:
@@ -129,11 +129,11 @@ STAGE PLANS:
 aggregators: VectorUDAFCountMerge(col 3:bigint) -> bigint
 className: VectorGroupByOperator
 groupByMode: MERGEPARTIAL
-keyExpressions: col 0:string, col 1:string, col 2:int
+keyExpressions: col 0:string, col 1:string, col 2:bigint
 native: false
 vectorProcessingMode: MERGE_PARTIAL
 projectedOutputColumnNums: [0]
-keys: KEY._col0 (type: string), KEY._col1 (type: string), 
KEY._col2 (type: int)
+keys: KEY._col0 (type: string), KEY._col1 (type: string), 
KEY._col2 (type: bigint)
 mode: mergepartial
 outputColumnNames: _col0, _col1, _col3
 Statistics: Num rows: 9 Data size: 3312 Basic stats: COMPLETE 
Column stats: NONE
@@ -227,18 +227,18 @@ STAGE PLANS:
   aggregators: VectorUDAFCount(col 1:string) -> bigint
   className: VectorGroupByOperator
   groupByMode: HASH
-  keyExpressions: col 0:string, 
ConstantVectorExpression(val 0) -> 3:int, col 1:string
+  keyExpressions: col 0:string, 
ConstantVectorExpression(val 0) -> 3:bigint, col 1:string
   native: false
   vectorProcessingMode: HASH
   projectedOutputColumnNums: [0]
-  keys: key (type: string), 0 (type: int), val (type: 
string)
+  keys: key (type: string), 0 (type: bigint), val (type: 
string)
   mode: hash
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 12 Data size: 4416 Basic stats: 
COMPLETE Column stats: NONE
   Reduce Output Operator
-key expressions: _col0 (type: string), _col1 (type: 
int), _col2 (type: string)
+key expressions: _col0 (type: string), _col1 (type: 
bigint), _col2 (type: string)
 sort order: +++
-Map-reduce partition columns: _col0 (type: string), 
_col1 (type: int)
+

[27/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql
--
diff --git a/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql 
b/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql
deleted file mode 100644
index af71ed3..000
--- a/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql
+++ /dev/null
@@ -1,1619 +0,0 @@
---
--- PostgreSQL database dump
---
-
-SET statement_timeout = 0;
-SET client_encoding = 'UTF8';
-SET standard_conforming_strings = off;
-SET check_function_bodies = false;
-SET client_min_messages = warning;
-SET escape_string_warning = off;
-
-SET search_path = public, pg_catalog;
-
-SET default_tablespace = '';
-
-SET default_with_oids = false;
-
---
--- Name: BUCKETING_COLS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
---
-
-CREATE TABLE "BUCKETING_COLS" (
-"SD_ID" bigint NOT NULL,
-"BUCKET_COL_NAME" character varying(256) DEFAULT NULL::character varying,
-"INTEGER_IDX" bigint NOT NULL
-);
-
-
---
--- Name: CDS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
---
-
-CREATE TABLE "CDS" (
-"CD_ID" bigint NOT NULL
-);
-
-
---
--- Name: COLUMNS_V2; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
---
-
-CREATE TABLE "COLUMNS_V2" (
-"CD_ID" bigint NOT NULL,
-"COMMENT" character varying(4000),
-"COLUMN_NAME" character varying(767) NOT NULL,
-"TYPE_NAME" text,
-"INTEGER_IDX" integer NOT NULL
-);
-
-
---
--- Name: DATABASE_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
---
-
-CREATE TABLE "DATABASE_PARAMS" (
-"DB_ID" bigint NOT NULL,
-"PARAM_KEY" character varying(180) NOT NULL,
-"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
-);
-
-
---
--- Name: DBS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
---
-
-CREATE TABLE "DBS" (
-"DB_ID" bigint NOT NULL,
-"DESC" character varying(4000) DEFAULT NULL::character varying,
-"DB_LOCATION_URI" character varying(4000) NOT NULL,
-"NAME" character varying(128) DEFAULT NULL::character varying,
-"OWNER_NAME" character varying(128) DEFAULT NULL::character varying,
-"OWNER_TYPE" character varying(10) DEFAULT NULL::character varying
-);
-
-
---
--- Name: DB_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
---
-
-CREATE TABLE "DB_PRIVS" (
-"DB_GRANT_ID" bigint NOT NULL,
-"CREATE_TIME" bigint NOT NULL,
-"DB_ID" bigint,
-"GRANT_OPTION" smallint NOT NULL,
-"GRANTOR" character varying(128) DEFAULT NULL::character varying,
-"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
-"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
-"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
-"DB_PRIV" character varying(128) DEFAULT NULL::character varying
-);
-
-
---
--- Name: GLOBAL_PRIVS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
---
-
-CREATE TABLE "GLOBAL_PRIVS" (
-"USER_GRANT_ID" bigint NOT NULL,
-"CREATE_TIME" bigint NOT NULL,
-"GRANT_OPTION" smallint NOT NULL,
-"GRANTOR" character varying(128) DEFAULT NULL::character varying,
-"GRANTOR_TYPE" character varying(128) DEFAULT NULL::character varying,
-"PRINCIPAL_NAME" character varying(128) DEFAULT NULL::character varying,
-"PRINCIPAL_TYPE" character varying(128) DEFAULT NULL::character varying,
-"USER_PRIV" character varying(128) DEFAULT NULL::character varying
-);
-
-
---
--- Name: IDXS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
---
-
-CREATE TABLE "IDXS" (
-"INDEX_ID" bigint NOT NULL,
-"CREATE_TIME" bigint NOT NULL,
-"DEFERRED_REBUILD" boolean NOT NULL,
-"INDEX_HANDLER_CLASS" character varying(4000) DEFAULT NULL::character 
varying,
-"INDEX_NAME" character varying(128) DEFAULT NULL::character varying,
-"INDEX_TBL_ID" bigint,
-"LAST_ACCESS_TIME" bigint NOT NULL,
-"ORIG_TBL_ID" bigint,
-"SD_ID" bigint
-);
-
-
---
--- Name: INDEX_PARAMS; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
---
-
-CREATE TABLE "INDEX_PARAMS" (
-"INDEX_ID" bigint NOT NULL,
-"PARAM_KEY" character varying(256) NOT NULL,
-"PARAM_VALUE" character varying(4000) DEFAULT NULL::character varying
-);
-
-
---
--- Name: NUCLEUS_TABLES; Type: TABLE; Schema: public; Owner: hiveuser; 
Tablespace:
---
-
-CREATE TABLE "NUCLEUS_TABLES" (
-"CLASS_NAME" character varying(128) NOT NULL,
-"TABLE_NAME" character varying(128) NOT NULL,
-"TYPE" character varying(4) NOT NULL,
-"OWNER" character varying(2) NOT NULL,
-"VERSION" character varying(20) NOT NULL,
-"INTERFACE_NAME" character varying(255) DEFAULT NULL::character varying
-);
-
-
---
--- Name: PARTITIONS; Type: TABLE; Schema: public; Owner: hiveuser; Tablespace:
---
-
-CREATE TABLE "PARTITIONS" (
-"PART_ID" bigint NOT NULL,
-

[35/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
index 23cef8d..4b262a6 100644
--- 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
@@ -2811,7 +2811,8 @@ public class HiveMetaStoreClient implements 
IMetaStoreClient, AutoCloseable {
   public void createOrDropTriggerToPoolMapping(String resourcePlanName, String 
triggerName,
   String poolPath, boolean shouldDrop) throws AlreadyExistsException, 
NoSuchObjectException,
   InvalidObjectException, MetaException, TException {
-WMCreateOrDropTriggerToPoolMappingRequest request = new 
WMCreateOrDropTriggerToPoolMappingRequest();
+WMCreateOrDropTriggerToPoolMappingRequest request =
+new WMCreateOrDropTriggerToPoolMappingRequest();
 request.setResourcePlanName(resourcePlanName);
 request.setTriggerName(triggerName);
 request.setPoolPath(poolPath);
@@ -2819,4 +2820,74 @@ public class HiveMetaStoreClient implements 
IMetaStoreClient, AutoCloseable {
 client.create_or_drop_wm_trigger_to_pool_mapping(request);
   }
 
+  public void createISchema(ISchema schema) throws TException {
+client.create_ischema(schema);
+  }
+
+  @Override
+  public void alterISchema(String schemaName, ISchema newSchema) throws 
TException {
+client.alter_ischema(schemaName, newSchema);
+  }
+
+  @Override
+  public ISchema getISchema(String name) throws TException {
+return client.get_ischema(name);
+  }
+
+  @Override
+  public void dropISchema(String name) throws TException {
+client.drop_ischema(name);
+  }
+
+  @Override
+  public void addSchemaVersion(SchemaVersion schemaVersion) throws TException {
+client.add_schema_version(schemaVersion);
+  }
+
+  @Override
+  public SchemaVersion getSchemaVersion(String schemaName, int version) throws 
TException {
+return client.get_schema_version(schemaName, version);
+  }
+
+  @Override
+  public SchemaVersion getSchemaLatestVersion(String schemaName) throws 
TException {
+return client.get_schema_latest_version(schemaName);
+  }
+
+  @Override
+  public List<SchemaVersion> getSchemaAllVersions(String schemaName) throws 
TException {
+return client.get_schema_all_versions(schemaName);
+  }
+
+  @Override
+  public void dropSchemaVersion(String schemaName, int version) throws 
TException {
+client.drop_schema_version(schemaName, version);
+  }
+
+  @Override
+  public FindSchemasByColsResp getSchemaByCols(FindSchemasByColsRqst rqst) 
throws TException {
+return client.get_schemas_by_cols(rqst);
+  }
+
+  @Override
+  public void mapSchemaVersionToSerde(String schemaName, int version, String 
serdeName)
+  throws TException {
+client.map_schema_version_to_serde(schemaName, version, serdeName);
+  }
+
+  @Override
+  public void setSchemaVersionState(String schemaName, int version, 
SchemaVersionState state)
+  throws TException {
+client.set_schema_version_state(schemaName, version, state);
+  }
+
+  @Override
+  public void addSerDe(SerDeInfo serDeInfo) throws TException {
+client.add_serde(serDeInfo);
+  }
+
+  @Override
+  public SerDeInfo getSerDe(String serDeName) throws TException {
+return client.get_serde(serDeName);
+  }
 }
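
Each of the new client methods above is a one-to-one wrapper around the corresponding Thrift call. A rough read-side usage sketch against the metastore client interface (illustrative only: how the client is obtained and how the ISchema/SchemaVersion beans are populated is not shown in these hunks, and the api package for SchemaVersion is assumed from the neighbouring imports):

import java.util.List;

import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.SchemaVersion;
import org.apache.thrift.TException;

public class SchemaRegistryUsageSketch {
  // Walk the read side of the new schema registry calls for one schema.
  // 'client' is assumed to be an already connected metastore client.
  static void inspectSchema(IMetaStoreClient client, String schemaName) throws TException {
    SchemaVersion latest = client.getSchemaLatestVersion(schemaName);
    List<SchemaVersion> allVersions = client.getSchemaAllVersions(schemaName);
    System.out.println(schemaName + ": " + allVersions.size()
        + " version(s), latest = " + latest);
    // Dropping a version or mapping it to a registered serde uses the same
    // (schemaName, version) coordinates seen in the Thrift signatures, e.g.:
    // client.dropSchemaVersion(schemaName, 1);
    // client.mapSchemaVersionToSerde(schemaName, 1, "someSerde");
  }
}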

http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
index 96d4590..0987ed5 100644
--- 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
@@ -46,6 +46,8 @@ import org.apache.hadoop.hive.metastore.api.DataOperationType;
 import org.apache.hadoop.hive.metastore.api.Database;
 import org.apache.hadoop.hive.metastore.api.EnvironmentContext;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.FindSchemasByColsResp;
+import org.apache.hadoop.hive.metastore.api.FindSchemasByColsRqst;
 import org.apache.hadoop.hive.metastore.api.FireEventRequest;
 import org.apache.hadoop.hive.metastore.api.FireEventResponse;
 import org.apache.hadoop.hive.metastore.api.ForeignKeysRequest;
@@ -59,6 +61,7 @@ import 
org.apache.hadoop.hive.metastore.api.GetRoleGrantsForPrincipalResponse;
 import 

[39/50] [abbrv] hive git commit: HIVE-17990 Add Thrift and DB storage for Schema Registry objects

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/a9e1acaf/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php 
b/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
index 9382c60..6f905cd 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
+++ 
b/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
@@ -1396,6 +1396,106 @@ interface ThriftHiveMetastoreIf extends 
\FacebookServiceIf {
* @throws \metastore\MetaException
*/
   public function 
create_or_drop_wm_trigger_to_pool_mapping(\metastore\WMCreateOrDropTriggerToPoolMappingRequest
 $request);
+  /**
+   * @param \metastore\ISchema $schema
+   * @throws \metastore\AlreadyExistsException
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function create_ischema(\metastore\ISchema $schema);
+  /**
+   * @param string $schemaName
+   * @param \metastore\ISchema $newSchema
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function alter_ischema($schemaName, \metastore\ISchema $newSchema);
+  /**
+   * @param string $schemaName
+   * @return \metastore\ISchema
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function get_ischema($schemaName);
+  /**
+   * @param string $schemaName
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\InvalidOperationException
+   * @throws \metastore\MetaException
+   */
+  public function drop_ischema($schemaName);
+  /**
+   * @param \metastore\SchemaVersion $schemaVersion
+   * @throws \metastore\AlreadyExistsException
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function add_schema_version(\metastore\SchemaVersion $schemaVersion);
+  /**
+   * @param string $schemaName
+   * @param int $version
+   * @return \metastore\SchemaVersion
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function get_schema_version($schemaName, $version);
+  /**
+   * @param string $schemaName
+   * @return \metastore\SchemaVersion
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function get_schema_latest_version($schemaName);
+  /**
+   * @param string $schemaName
+   * @return \metastore\SchemaVersion[]
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function get_schema_all_versions($schemaName);
+  /**
+   * @param string $schemaName
+   * @param int $version
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function drop_schema_version($schemaName, $version);
+  /**
+   * @param \metastore\FindSchemasByColsRqst $rqst
+   * @return \metastore\FindSchemasByColsResp
+   * @throws \metastore\MetaException
+   */
+  public function get_schemas_by_cols(\metastore\FindSchemasByColsRqst $rqst);
+  /**
+   * @param string $schemaName
+   * @param int $version
+   * @param string $serdeName
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function map_schema_version_to_serde($schemaName, $version, 
$serdeName);
+  /**
+   * @param string $schemaName
+   * @param int $version
+   * @param int $state
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\InvalidOperationException
+   * @throws \metastore\MetaException
+   */
+  public function set_schema_version_state($schemaName, $version, $state);
+  /**
+   * @param \metastore\SerDeInfo $serde
+   * @throws \metastore\AlreadyExistsException
+   * @throws \metastore\MetaException
+   */
+  public function add_serde(\metastore\SerDeInfo $serde);
+  /**
+   * @param string $serdeName
+   * @return \metastore\SerDeInfo
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function get_serde($serdeName);
 }
 
 class ThriftHiveMetastoreClient extends \FacebookServiceClient implements 
\metastore\ThriftHiveMetastoreIf {
@@ -11713,327 +11813,4018 @@ class ThriftHiveMetastoreClient extends 
\FacebookServiceClient implements \metastore\ThriftHiveMetastoreIf {
 throw new \Exception("create_or_drop_wm_trigger_to_pool_mapping failed: 
unknown result");
   }
 
-}
-
-// HELPER FUNCTIONS AND STRUCTURES
+  public function create_ischema(\metastore\ISchema $schema)
+  {
+$this->send_create_ischema($schema);
+$this->recv_create_ischema();
+  }
 
-class ThriftHiveMetastore_getMetaConf_args {
-  static $_TSPEC;
+  public function send_create_ischema(\metastore\ISchema $schema)
+  {
+$args = new \metastore\ThriftHiveMetastore_create_ischema_args();
+$args->schema = $schema;
+

[14/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/groupby_grouping_sets2.q.out
--
diff --git a/ql/src/test/results/clientpositive/groupby_grouping_sets2.q.out 
b/ql/src/test/results/clientpositive/groupby_grouping_sets2.q.out
index 453b9f7..43e17ec 100644
--- a/ql/src/test/results/clientpositive/groupby_grouping_sets2.q.out
+++ b/ql/src/test/results/clientpositive/groupby_grouping_sets2.q.out
@@ -52,7 +52,7 @@ STAGE PLANS:
   Reduce Operator Tree:
 Group By Operator
   aggregations: count(VALUE._col0)
-  keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 (type: 
int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 (type: 
bigint)
   mode: partials
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 4 Data size: 1440 Basic stats: COMPLETE Column 
stats: NONE
@@ -68,15 +68,15 @@ STAGE PLANS:
   Map Operator Tree:
   TableScan
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 (type: 
string), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: string), _col1 (type: 
string), _col2 (type: bigint)
   Statistics: Num rows: 4 Data size: 1440 Basic stats: COMPLETE 
Column stats: NONE
   value expressions: _col3 (type: bigint)
   Reduce Operator Tree:
 Group By Operator
   aggregations: count(VALUE._col0)
-  keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 
(type: int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 
(type: bigint)
   mode: final
   outputColumnNames: _col0, _col1, _col3
   Statistics: Num rows: 2 Data size: 720 Basic stats: COMPLETE Column 
stats: NONE
@@ -137,7 +137,7 @@ STAGE PLANS:
   Reduce Operator Tree:
 Group By Operator
   aggregations: count(VALUE._col0)
-  keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 (type: 
int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 (type: 
bigint)
   mode: partials
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 4 Data size: 1440 Basic stats: COMPLETE Column 
stats: NONE
@@ -153,15 +153,15 @@ STAGE PLANS:
   Map Operator Tree:
   TableScan
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 (type: 
string), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: string), _col1 (type: 
string), _col2 (type: bigint)
   Statistics: Num rows: 4 Data size: 1440 Basic stats: COMPLETE 
Column stats: NONE
   value expressions: _col3 (type: bigint)
   Reduce Operator Tree:
 Group By Operator
   aggregations: count(VALUE._col0)
-  keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 
(type: int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 
(type: bigint)
   mode: final
   outputColumnNames: _col0, _col1, _col3
   Statistics: Num rows: 2 Data size: 720 Basic stats: COMPLETE Column 
stats: NONE
@@ -246,7 +246,7 @@ STAGE PLANS:
   Reduce Operator Tree:
 Group By Operator
   aggregations: sum(VALUE._col0)
-  keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 (type: 
int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: string), 0 (type: 
bigint)
   mode: partials
   outputColumnNames: _col0, _col1, _col2, _col3
   Statistics: Num rows: 4 Data size: 1440 Basic stats: COMPLETE Column 
stats: NONE
@@ -262,15 +262,15 @@ STAGE PLANS:
   Map Operator Tree:
   TableScan
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: string), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 (type: 
string), _col2 (type: int)
+  Map-reduce partition columns: _col0 (type: string), _col1 (type: 
string), _col2 (type: bigint)
   Statistics: Num rows: 4 Data size: 1440 Basic stats: COMPLETE 
Column stats: NONE
   value expressions: _col3 (type: double)
   Reduce 

[15/50] [abbrv] hive git commit: HIVE-18359: Extend grouping set limits from int to long (Prasanth Jayachandran reviewed by Jesus Camacho Rodriguez)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/ddd4c9ae/ql/src/test/results/clientpositive/annotate_stats_groupby.q.out
--
diff --git a/ql/src/test/results/clientpositive/annotate_stats_groupby.q.out 
b/ql/src/test/results/clientpositive/annotate_stats_groupby.q.out
index ed3d594..25efe1e 100644
--- a/ql/src/test/results/clientpositive/annotate_stats_groupby.q.out
+++ b/ql/src/test/results/clientpositive/annotate_stats_groupby.q.out
@@ -304,25 +304,25 @@ STAGE PLANS:
   outputColumnNames: state, locid
   Statistics: Num rows: 8 Data size: 720 Basic stats: COMPLETE 
Column stats: COMPLETE
   Group By Operator
-keys: state (type: string), locid (type: int), 0 (type: int)
+keys: state (type: string), locid (type: int), 0 (type: bigint)
 mode: hash
 outputColumnNames: _col0, _col1, _col2
-Statistics: Num rows: 32 Data size: 3008 Basic stats: COMPLETE 
Column stats: COMPLETE
+Statistics: Num rows: 32 Data size: 3136 Basic stats: COMPLETE 
Column stats: COMPLETE
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: int), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: int), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 
(type: int), _col2 (type: int)
-  Statistics: Num rows: 32 Data size: 3008 Basic stats: 
COMPLETE Column stats: COMPLETE
+  Map-reduce partition columns: _col0 (type: string), _col1 
(type: int), _col2 (type: bigint)
+  Statistics: Num rows: 32 Data size: 3136 Basic stats: 
COMPLETE Column stats: COMPLETE
   Reduce Operator Tree:
 Group By Operator
-  keys: KEY._col0 (type: string), KEY._col1 (type: int), KEY._col2 
(type: int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: int), KEY._col2 
(type: bigint)
   mode: mergepartial
   outputColumnNames: _col0, _col1
-  Statistics: Num rows: 32 Data size: 3008 Basic stats: COMPLETE 
Column stats: COMPLETE
+  Statistics: Num rows: 32 Data size: 3136 Basic stats: COMPLETE 
Column stats: COMPLETE
   pruneGroupingSetId: true
   File Output Operator
 compressed: false
-Statistics: Num rows: 32 Data size: 3008 Basic stats: COMPLETE 
Column stats: COMPLETE
+Statistics: Num rows: 32 Data size: 3136 Basic stats: COMPLETE 
Column stats: COMPLETE
 table:
 input format: org.apache.hadoop.mapred.SequenceFileInputFormat
 output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
@@ -354,25 +354,25 @@ STAGE PLANS:
   outputColumnNames: state, locid
   Statistics: Num rows: 8 Data size: 720 Basic stats: COMPLETE 
Column stats: COMPLETE
   Group By Operator
-keys: state (type: string), locid (type: int), 0 (type: int)
+keys: state (type: string), locid (type: int), 0 (type: bigint)
 mode: hash
 outputColumnNames: _col0, _col1, _col2
-Statistics: Num rows: 24 Data size: 2256 Basic stats: COMPLETE 
Column stats: COMPLETE
+Statistics: Num rows: 24 Data size: 2352 Basic stats: COMPLETE 
Column stats: COMPLETE
 Reduce Output Operator
-  key expressions: _col0 (type: string), _col1 (type: int), 
_col2 (type: int)
+  key expressions: _col0 (type: string), _col1 (type: int), 
_col2 (type: bigint)
   sort order: +++
-  Map-reduce partition columns: _col0 (type: string), _col1 
(type: int), _col2 (type: int)
-  Statistics: Num rows: 24 Data size: 2256 Basic stats: 
COMPLETE Column stats: COMPLETE
+  Map-reduce partition columns: _col0 (type: string), _col1 
(type: int), _col2 (type: bigint)
+  Statistics: Num rows: 24 Data size: 2352 Basic stats: 
COMPLETE Column stats: COMPLETE
   Reduce Operator Tree:
 Group By Operator
-  keys: KEY._col0 (type: string), KEY._col1 (type: int), KEY._col2 
(type: int)
+  keys: KEY._col0 (type: string), KEY._col1 (type: int), KEY._col2 
(type: bigint)
   mode: mergepartial
   outputColumnNames: _col0, _col1
-  Statistics: Num rows: 24 Data size: 2256 Basic stats: COMPLETE 
Column stats: COMPLETE
+  Statistics: Num rows: 24 Data size: 2352 Basic stats: COMPLETE 
Column stats: COMPLETE
   pruneGroupingSetId: true
   File Output Operator
 compressed: false
-Statistics: Num rows: 24 Data size: 2256 Basic stats: COMPLETE 
Column stats: COMPLETE
+Statistics: Num rows: 24 Data size: 
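
The int-to-bigint switches in this plan output come from HIVE-18359 widening the grouping set ID from a 32-bit to a 64-bit value, which is why the literal grouping-set key "0" and the derived _col2/KEY._col2 columns are now typed bigint. As a rough illustration only (not Hive's actual classes, method names, or bit convention), a grouping set ID can be modelled as a bitmask over the GROUP BY keys, so the width of that mask bounds how many keys GROUPING SETS, CUBE and ROLLUP can address:

import java.util.List;

public class GroupingSetIdSketch {

  // One possible encoding: bit i is set when GROUP BY key i is aggregated
  // away in a given grouping set. With an int mask the encoding stops at 32
  // keys; a long mask raises the ceiling to 64, mirroring the int -> bigint
  // change visible in the plans above.
  static long groupingId(List<Boolean> keyIsAggregatedAway) {
    if (keyIsAggregatedAway.size() > Long.SIZE) {
      throw new IllegalArgumentException("more than 64 grouping keys");
    }
    long id = 0L;
    for (int i = 0; i < keyIsAggregatedAway.size(); i++) {
      if (keyIsAggregatedAway.get(i)) {
        id |= 1L << i;
      }
    }
    return id;
  }

  public static void main(String[] args) {
    // e.g. GROUP BY state, locid WITH CUBE produces four grouping sets:
    System.out.println(groupingId(List.of(false, false))); // 0 -> (state, locid)
    System.out.println(groupingId(List.of(true, false)));  // 1 -> (locid)
    System.out.println(groupingId(List.of(false, true)));  // 2 -> (state)
    System.out.println(groupingId(List.of(true, true)));   // 3 -> ()
  }
}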

[24/50] [abbrv] hive git commit: HIVE-17983 Make the standalone metastore generate tarballs etc. (Alan Gates, reviewed by Thejas Nair)

2018-02-12 Thread gates
http://git-wip-us.apache.org/repos/asf/hive/blob/47cac2d0/standalone-metastore/src/main/sql/derby/hive-schema-1.2.0.derby.sql
--
diff --git 
a/standalone-metastore/src/main/sql/derby/hive-schema-1.2.0.derby.sql 
b/standalone-metastore/src/main/sql/derby/hive-schema-1.2.0.derby.sql
new file mode 100644
index 000..43f61bf
--- /dev/null
+++ b/standalone-metastore/src/main/sql/derby/hive-schema-1.2.0.derby.sql
@@ -0,0 +1,405 @@
+-- Timestamp: 2011-09-22 15:32:02.024
+-- Source database is: 
/home/carl/Work/repos/hive1/metastore/scripts/upgrade/derby/mdb
+-- Connection URL is: 
jdbc:derby:/home/carl/Work/repos/hive1/metastore/scripts/upgrade/derby/mdb
+-- Specified schema is: APP
+-- appendLogs: false
+
+-- --
+-- DDL Statements for functions
+-- --
+
+CREATE FUNCTION "APP"."NUCLEUS_ASCII" (C CHAR(1)) RETURNS INTEGER LANGUAGE 
JAVA PARAMETER STYLE JAVA READS SQL DATA CALLED ON NULL INPUT EXTERNAL NAME 
'org.datanucleus.store.rdbms.adapter.DerbySQLFunction.ascii' ;
+
+CREATE FUNCTION "APP"."NUCLEUS_MATCHES" (TEXT VARCHAR(8000),PATTERN 
VARCHAR(8000)) RETURNS INTEGER LANGUAGE JAVA PARAMETER STYLE JAVA READS SQL 
DATA CALLED ON NULL INPUT EXTERNAL NAME 
'org.datanucleus.store.rdbms.adapter.DerbySQLFunction.matches' ;
+
+-- --
+-- DDL Statements for tables
+-- --
+
+CREATE TABLE "APP"."DBS" ("DB_ID" BIGINT NOT NULL, "DESC" VARCHAR(4000), 
"DB_LOCATION_URI" VARCHAR(4000) NOT NULL, "NAME" VARCHAR(128), "OWNER_NAME" 
VARCHAR(128), "OWNER_TYPE" VARCHAR(10));
+
+CREATE TABLE "APP"."TBL_PRIVS" ("TBL_GRANT_ID" BIGINT NOT NULL, "CREATE_TIME" 
INTEGER NOT NULL, "GRANT_OPTION" SMALLINT NOT NULL, "GRANTOR" VARCHAR(128), 
"GRANTOR_TYPE" VARCHAR(128), "PRINCIPAL_NAME" VARCHAR(128), "PRINCIPAL_TYPE" 
VARCHAR(128), "TBL_PRIV" VARCHAR(128), "TBL_ID" BIGINT);
+
+CREATE TABLE "APP"."DATABASE_PARAMS" ("DB_ID" BIGINT NOT NULL, "PARAM_KEY" 
VARCHAR(180) NOT NULL, "PARAM_VALUE" VARCHAR(4000));
+
+CREATE TABLE "APP"."TBL_COL_PRIVS" ("TBL_COLUMN_GRANT_ID" BIGINT NOT NULL, 
"COLUMN_NAME" VARCHAR(128), "CREATE_TIME" INTEGER NOT NULL, "GRANT_OPTION" 
SMALLINT NOT NULL, "GRANTOR" VARCHAR(128), "GRANTOR_TYPE" VARCHAR(128), 
"PRINCIPAL_NAME" VARCHAR(128), "PRINCIPAL_TYPE" VARCHAR(128), "TBL_COL_PRIV" 
VARCHAR(128), "TBL_ID" BIGINT);
+
+CREATE TABLE "APP"."SERDE_PARAMS" ("SERDE_ID" BIGINT NOT NULL, "PARAM_KEY" 
VARCHAR(256) NOT NULL, "PARAM_VALUE" VARCHAR(4000));
+
+CREATE TABLE "APP"."COLUMNS_V2" ("CD_ID" BIGINT NOT NULL, "COMMENT" 
VARCHAR(4000), "COLUMN_NAME" VARCHAR(128) NOT NULL, "TYPE_NAME" VARCHAR(4000), 
"INTEGER_IDX" INTEGER NOT NULL);
+
+CREATE TABLE "APP"."SORT_COLS" ("SD_ID" BIGINT NOT NULL, "COLUMN_NAME" 
VARCHAR(128), "ORDER" INTEGER NOT NULL, "INTEGER_IDX" INTEGER NOT NULL);
+
+CREATE TABLE "APP"."CDS" ("CD_ID" BIGINT NOT NULL);
+
+CREATE TABLE "APP"."PARTITION_KEY_VALS" ("PART_ID" BIGINT NOT NULL, 
"PART_KEY_VAL" VARCHAR(256), "INTEGER_IDX" INTEGER NOT NULL);
+
+CREATE TABLE "APP"."DB_PRIVS" ("DB_GRANT_ID" BIGINT NOT NULL, "CREATE_TIME" 
INTEGER NOT NULL, "DB_ID" BIGINT, "GRANT_OPTION" SMALLINT NOT NULL, "GRANTOR" 
VARCHAR(128), "GRANTOR_TYPE" VARCHAR(128), "PRINCIPAL_NAME" VARCHAR(128), 
"PRINCIPAL_TYPE" VARCHAR(128), "DB_PRIV" VARCHAR(128));
+
+CREATE TABLE "APP"."IDXS" ("INDEX_ID" BIGINT NOT NULL, "CREATE_TIME" INTEGER 
NOT NULL, "DEFERRED_REBUILD" CHAR(1) NOT NULL, "INDEX_HANDLER_CLASS" 
VARCHAR(4000), "INDEX_NAME" VARCHAR(128), "INDEX_TBL_ID" BIGINT, 
"LAST_ACCESS_TIME" INTEGER NOT NULL, "ORIG_TBL_ID" BIGINT, "SD_ID" BIGINT);
+
+CREATE TABLE "APP"."INDEX_PARAMS" ("INDEX_ID" BIGINT NOT NULL, "PARAM_KEY" 
VARCHAR(256) NOT NULL, "PARAM_VALUE" VARCHAR(4000));
+
+CREATE TABLE "APP"."PARTITIONS" ("PART_ID" BIGINT NOT NULL, "CREATE_TIME" 
INTEGER NOT NULL, "LAST_ACCESS_TIME" INTEGER NOT NULL, "PART_NAME" 
VARCHAR(767), "SD_ID" BIGINT, "TBL_ID" BIGINT);
+
+CREATE TABLE "APP"."SERDES" ("SERDE_ID" BIGINT NOT NULL, "NAME" VARCHAR(128), 
"SLIB" VARCHAR(4000));
+
+CREATE TABLE "APP"."PART_PRIVS" ("PART_GRANT_ID" BIGINT NOT NULL, 
"CREATE_TIME" INTEGER NOT NULL, "GRANT_OPTION" SMALLINT NOT NULL, "GRANTOR" 
VARCHAR(128), "GRANTOR_TYPE" VARCHAR(128), "PART_ID" BIGINT, "PRINCIPAL_NAME" 
VARCHAR(128), "PRINCIPAL_TYPE" VARCHAR(128), "PART_PRIV" VARCHAR(128));
+
+CREATE TABLE "APP"."ROLE_MAP" ("ROLE_GRANT_ID" BIGINT NOT NULL, "ADD_TIME" 
INTEGER NOT NULL, "GRANT_OPTION" SMALLINT NOT NULL, "GRANTOR" VARCHAR(128), 
"GRANTOR_TYPE" VARCHAR(128), "PRINCIPAL_NAME" VARCHAR(128), "PRINCIPAL_TYPE" 
VARCHAR(128), "ROLE_ID" BIGINT);
+
+CREATE TABLE "APP"."TYPES" ("TYPES_ID" BIGINT NOT NULL, "TYPE_NAME" 
VARCHAR(128), "TYPE1" VARCHAR(767), "TYPE2" VARCHAR(767));
+
+CREATE TABLE "APP"."GLOBAL_PRIVS" ("USER_GRANT_ID" BIGINT NOT NULL, 
"CREATE_TIME" INTEGER NOT NULL, "GRANT_OPTION" SMALLINT NOT NULL, "GRANTOR" 

[01/50] [abbrv] hive git commit: HIVE-18580: Create tests to cover exchange partitions (Marta Kuczora, reviewed by Adam Szita, Peter Vary) [Forced Update!]

2018-02-12 Thread gates
Repository: hive
Updated Branches:
  refs/heads/standalone-metastore b62300014 -> c4d22858c (forced update)


HIVE-18580: Create tests to cover exchange partitions (Marta Kuczora, reviewed 
by Adam Szita, Peter Vary)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/58bbfc73
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/58bbfc73
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/58bbfc73

Branch: refs/heads/standalone-metastore
Commit: 58bbfc733939f5fe2229af106809270e3d6fb4e2
Parents: 6155f30
Author: Peter Vary 
Authored: Fri Feb 9 15:29:53 2018 +0100
Committer: Peter Vary 
Committed: Fri Feb 9 15:29:53 2018 +0100

--
 .../client/TestExchangePartitions.java  | 1502 ++
 1 file changed, 1502 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/58bbfc73/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestExchangePartitions.java
--
diff --git 
a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestExchangePartitions.java
 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestExchangePartitions.java
new file mode 100644
index 000..5a7aeb7
--- /dev/null
+++ 
b/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestExchangePartitions.java
@@ -0,0 +1,1502 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore.client;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.Warehouse;
+import org.apache.hadoop.hive.metastore.api.Database;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
+import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.client.builder.DatabaseBuilder;
+import org.apache.hadoop.hive.metastore.client.builder.PartitionBuilder;
+import org.apache.hadoop.hive.metastore.client.builder.TableBuilder;
+import org.apache.hadoop.hive.metastore.minihms.AbstractMetaStoreService;
+import org.apache.thrift.TException;
+import org.apache.thrift.transport.TTransportException;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import com.google.common.collect.Lists;
+
+/**
+ * Tests for exchanging partitions.
+ */
+@RunWith(Parameterized.class)
+public class TestExchangePartitions {
+
+  // Needed until there is no junit release with @BeforeParam, @AfterParam 
(junit 4.13)
+  // 
https://github.com/junit-team/junit4/commit/1bf8438b65858565dbb64736bfe13aae9cfc1b5a
+  // Then we should remove our own copy
+  private static Set<AbstractMetaStoreService> metaStoreServices = null;
+  private AbstractMetaStoreService metaStore;
+  private IMetaStoreClient client;
+
+  private static final String DB_NAME = "test_partition_db";
+  private static final String STRING_COL_TYPE = "string";
+  private static final String INT_COL_TYPE = "int";
+  private static final String YEAR_COL_NAME = "year";
+  private static final String MONTH_COL_NAME = "month";
+  private static final String DAY_COL_NAME = "day";
+  private static final short MAX = -1;
+  private static Table sourceTable;
+  private static Table destTable;
+  private static Partition[] partitions;
+
+
+  @Parameterized.Parameters(name = "{0}")
+  public static List<Object[]> getMetaStoreToTest() throws Exception {
+List<Object[]> result = MetaStoreFactoryForTests.getMetaStores();
+metaStoreServices 
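
The test class is cut off above before its test methods; the operation they exercise is the metastore's partition exchange, which moves a partition's data and metadata from one table to another table with the same partition key schema. A minimal sketch only, assuming the exchange_partition(Map, String, String, String, String) method on IMetaStoreClient and using made-up table names, since the excerpt above only shows DB_NAME and the partition column constants:

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;

public class ExchangePartitionSketch {

  // Moves the partition year=2018/month=2/day=12 from source_table to
  // dest_table inside test_partition_db. The table names here are
  // placeholders, not the ones created by TestExchangePartitions.
  static Partition exchangeOne(IMetaStoreClient client) throws Exception {
    Map<String, String> partitionSpec = new HashMap<>();
    partitionSpec.put("year", "2018");
    partitionSpec.put("month", "2");
    partitionSpec.put("day", "12");
    return client.exchange_partition(partitionSpec,
        "test_partition_db", "source_table",
        "test_partition_db", "dest_table");
  }
}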

[1/2] hive git commit: HIVE-18668: Really shade guava in ql (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2018-02-12 Thread kgyrtkirk
Repository: hive
Updated Branches:
  refs/heads/master 233884620 -> 887233d28


HIVE-18668: Really shade guava in ql (Zoltan Haindrich reviewed by Ashutosh 
Chauhan)

Signed-off-by: Zoltan Haindrich 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/91889089
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/91889089
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/91889089

Branch: refs/heads/master
Commit: 91889089c77c231aeead606ae89f580a80b7ada8
Parents: 2338846
Author: Zoltan Haindrich 
Authored: Mon Feb 12 10:30:57 2018 +0100
Committer: Zoltan Haindrich 
Committed: Mon Feb 12 10:30:57 2018 +0100

--
 ql/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/91889089/ql/pom.xml
--
diff --git a/ql/pom.xml b/ql/pom.xml
index 187b701..2d1034c 100644
--- a/ql/pom.xml
+++ b/ql/pom.xml
@@ -907,7 +907,7 @@
   io.airlift:aircompressor
   org.codehaus.jackson:jackson-core-asl
   org.codehaus.jackson:jackson-mapper-asl
-  com.google.guava:guava
+  com.google.common:guava-common
   net.sf.opencsv:opencsv
   org.apache.hive:hive-spark-client
   org.apache.hive:hive-storage-api
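
Not part of the commit above, but when chasing classpath problems around bundled dependencies like guava it can help to print which jar a class was actually loaded from; a small, generic sketch (the default class name is only an example):

public class WhichJar {

  public static void main(String[] args) throws ClassNotFoundException {
    // Resolve a class by name and print the jar (code source) it was loaded
    // from. Handy when checking whether a class is being served from an uber
    // jar such as hive-exec or from a separate dependency jar on the classpath.
    String name = args.length > 0 ? args[0] : "com.google.common.collect.ImmutableList";
    Class<?> clazz = Class.forName(name);
    System.out.println(name + " -> " + clazz.getProtectionDomain().getCodeSource());
  }
}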



[2/2] hive git commit: HIVE-18646: Update errata.txt for HIVE-18617 (Daniel Voros via Zoltan Haindrich)

2018-02-12 Thread kgyrtkirk
HIVE-18646: Update errata.txt for HIVE-18617 (Daniel Voros via Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/887233d2
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/887233d2
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/887233d2

Branch: refs/heads/master
Commit: 887233d28bbc64da0214d5c27653c9ca378766ef
Parents: 9188908
Author: Daniel Voros 
Authored: Mon Feb 12 10:59:30 2018 +0100
Committer: Zoltan Haindrich 
Committed: Mon Feb 12 10:59:30 2018 +0100

--
 errata.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/887233d2/errata.txt
--
diff --git a/errata.txt b/errata.txt
index 87e41b8..d1d95ef 100644
--- a/errata.txt
+++ b/errata.txt
@@ -93,3 +93,4 @@ d16d4f1bcc43d6ebcab0eaf5bc635fb88b60be5f master HIVE-9423 
 https://issues.ap
 5facfbb863366d7a661c21c57011b8dbe43f52e0 master HIVE-16307 
https://issues.apache.org/jira/browse/HIVE-16307
 1c3039333ba71665e8b954fbee88188757bb4050 master HIVE-16743 
https://issues.apache.org/jira/browse/HIVE-16743
 e7081035bb9768bc014f0aba11417418ececbaf0 master HIVE-17109 
https://issues.apache.org/jira/browse/HIVE-17109
+f33db1f68c68b552b9888988f818c03879749461 master HIVE-18617 
https://issues.apache.org/jira/browse/HIVE-18617