hive git commit: HIVE-14363: bucketmap inner join query fails due to NullPointerException in some cases (Hari Subramaniyan, reviewed by Matt McCline)

2016-07-30 Thread harisankar
Repository: hive
Updated Branches:
  refs/heads/branch-2.1 cb65a3a8b -> d50cdf74c


HIVE-14363: bucketmap inner join query fails due to NullPointerException in 
some cases (Hari Subramaniyan, reviewed by Matt McCline)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/d50cdf74
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/d50cdf74
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/d50cdf74

Branch: refs/heads/branch-2.1
Commit: d50cdf74c25774635863b76bac8e4d6721dff92f
Parents: cb65a3a
Author: Hari Subramaniyan 
Authored: Sat Jul 30 22:25:29 2016 -0700
Committer: Hari Subramaniyan 
Committed: Sat Jul 30 22:27:10 2016 -0700

--
 .../org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/d50cdf74/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java
index b8ecf89..1e92f0a 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java
@@ -361,9 +361,10 @@ public class MapRecordProcessor extends RecordProcessor {
 // this sets up the map operator contexts correctly
 mapOp.initializeContexts();
 Deserializer deserializer = mapOp.getCurrentDeserializer();
+// deserializer is null in case of VectorMapOperator
 KeyValueReader reader =
   new KeyValueInputMerger(kvReaderList, deserializer,
-  new ObjectInspector[] { deserializer.getObjectInspector() }, mapOp
+  new ObjectInspector[] { deserializer == null ? null : deserializer.getObjectInspector() }, mapOp
   .getConf()
   .getSortCols());
 return reader;

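For context, the following is a minimal, self-contained sketch of the guard introduced above. It uses simplified stand-in types (the ObjectInspector and Deserializer interfaces below are hypothetical placeholders, not the Hive serde2 classes): when the map operator is a VectorMapOperator, getCurrentDeserializer() returns null, so the old array construction dereferenced a null reference.

public class NullGuardSketch {

  // Hypothetical stand-ins for the Hive ObjectInspector/Deserializer types.
  interface ObjectInspector {}

  interface Deserializer {
    ObjectInspector getObjectInspector();
  }

  // Mirrors the patched expression: keep the array slot null instead of
  // calling getObjectInspector() on a null deserializer (the NPE in HIVE-14363).
  static ObjectInspector[] inspectorsFor(Deserializer deserializer) {
    return new ObjectInspector[] {
        deserializer == null ? null : deserializer.getObjectInspector()
    };
  }

  public static void main(String[] args) {
    // The VectorMapOperator case: no row-mode deserializer is available.
    System.out.println(inspectorsFor(null)[0]); // prints "null" rather than throwing
  }
}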


hive git commit: HIVE-14363: bucketmap inner join query fails due to NullPointerException in some cases (Hari Subramaniyan, reviewed by Matt McCline)

2016-07-30 Thread harisankar
Repository: hive
Updated Branches:
  refs/heads/master 034c9821b -> 4f1cd26ce


HIVE-14363: bucketmap inner join query fails due to NullPointerException in 
some cases (Hari Subramaniyan, reviewed by Matt McCline)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/4f1cd26c
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/4f1cd26c
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/4f1cd26c

Branch: refs/heads/master
Commit: 4f1cd26ce3eb4d95efb8a0dc2bec7e03b360202c
Parents: 034c982
Author: Hari Subramaniyan 
Authored: Sat Jul 30 22:25:29 2016 -0700
Committer: Hari Subramaniyan 
Committed: Sat Jul 30 22:25:29 2016 -0700

--
 .../org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/4f1cd26c/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java
index e8ccbc4..0886c0e 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java
@@ -369,9 +369,10 @@ public class MapRecordProcessor extends RecordProcessor {
 // this sets up the map operator contexts correctly
 mapOp.initializeContexts();
 Deserializer deserializer = mapOp.getCurrentDeserializer();
+// deserializer is null in case of VectorMapOperator
 KeyValueReader reader =
   new KeyValueInputMerger(kvReaderList, deserializer,
-  new ObjectInspector[] { deserializer.getObjectInspector() }, mapOp
+  new ObjectInspector[] { deserializer == null ? null : deserializer.getObjectInspector() }, mapOp
   .getConf()
   .getSortCols());
 return reader;



hive git commit: HIVE-14335 : TaskDisplay's return value is not getting deserialized properly (Rajat Khandelwal via Szehon)

2016-07-30 Thread amareshwari
Repository: hive
Updated Branches:
  refs/heads/branch-2.1 a52217cb9 -> cb65a3a8b


HIVE-14335 : TaskDisplay's return value is not getting deserialized properly 
(Rajat Khandelwal via Szehon)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/cb65a3a8
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/cb65a3a8
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/cb65a3a8

Branch: refs/heads/branch-2.1
Commit: cb65a3a8b152b2a16c95c7baede0318469daf117
Parents: a52217c
Author: Szehon Ho 
Authored: Wed Jul 27 10:05:26 2016 -0700
Committer: Amareshwari Sriramadasu 
Committed: Sun Jul 31 08:12:40 2016 +0530

--
 ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java | 12 ++--
 .../org/apache/hive/service/cli/CLIServiceTest.java |  3 +++
 2 files changed, 9 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/cb65a3a8/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java b/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java
index 703e997..bf6cb91 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java
@@ -65,7 +65,7 @@ public class QueryDisplay {
   @JsonIgnoreProperties(ignoreUnknown = true)
   public static class TaskDisplay {
 
-private Integer returnVal;  //if set, determines that task is complete.
+private Integer returnValue;  //if set, determines that task is complete.
 private String errorMsg;
 
 private Long beginTime;
@@ -95,12 +95,12 @@ public class QueryDisplay {
 }
 @JsonIgnore
 public synchronized String getStatus() {
-  if (returnVal == null) {
+  if (returnValue == null) {
 return "Running";
-  } else if (returnVal == 0) {
+  } else if (returnValue == 0) {
 return "Success, ReturnVal 0";
   } else {
-return "Failure, ReturnVal " + String.valueOf(returnVal);
+return "Failure, ReturnVal " + String.valueOf(returnValue);
   }
 }
 
@@ -116,7 +116,7 @@ public class QueryDisplay {
 }
 
 public synchronized Integer getReturnValue() {
-  return returnVal;
+  return returnValue;
 }
 
 public synchronized String getErrorMsg() {
@@ -186,7 +186,7 @@ public class QueryDisplay {
   public synchronized void setTaskResult(String taskId, TaskResult result) {
 TaskDisplay taskDisplay = tasks.get(taskId);
 if (taskDisplay != null) {
-  taskDisplay.returnVal = result.getExitVal();
+  taskDisplay.returnValue = result.getExitVal();
   if (result.getTaskError() != null) {
 taskDisplay.errorMsg = result.getTaskError().toString();
   }

http://git-wip-us.apache.org/repos/asf/hive/blob/cb65a3a8/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java
--
diff --git a/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java b/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java
index fb8ee4c..17d45ec 100644
--- a/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java
+++ b/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java
@@ -666,6 +666,9 @@ public abstract class CLIServiceTest {
   if (OperationState.CANCELED == state || state == OperationState.CLOSED
 || state == OperationState.FINISHED
 || state == OperationState.ERROR) {
+for (QueryDisplay.TaskDisplay display: taskStatuses) {
+  assertNotNull(display.getReturnValue());
+}
 break;
   }
   Thread.sleep(1000);

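The likely mechanism behind this fix, sketched below with a plain Jackson ObjectMapper (an illustrative stand-alone class, not Hive code, and assuming Jackson's default mapper settings): the getter getReturnValue() exposes the JSON property "returnValue", so a private field with the matching name can be populated on deserialization, whereas the old mismatched field name returnVal was left unset.

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TaskDisplayRoundTripSketch {

  // Illustrative stand-in for QueryDisplay.TaskDisplay, assuming default Jackson settings.
  @JsonIgnoreProperties(ignoreUnknown = true)
  static class TaskDisplayLike {
    // Field name matches the property implied by getReturnValue(), so Jackson
    // can write into it when deserializing; a field named returnVal would not match.
    private Integer returnValue;

    public Integer getReturnValue() {
      return returnValue;
    }
  }

  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    TaskDisplayLike display = mapper.readValue("{\"returnValue\":0}", TaskDisplayLike.class);
    System.out.println(display.getReturnValue()); // 0 rather than null
  }
}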


hive git commit: HIVE-14370: printStackTrace() called in Operator.close() (David Karoly, reviewed by Sergio Pena)

2016-07-30 Thread spena
Repository: hive
Updated Branches:
  refs/heads/master e769be999 -> 034c9821b


HIVE-14370: printStackTrace() called in Operator.close() (David Karoly, 
reviewed by Sergio Pena)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/034c9821
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/034c9821
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/034c9821

Branch: refs/heads/master
Commit: 034c9821b496f0b182fd7e3d967d421580c0f23d
Parents: e769be9
Author: David Karoly 
Authored: Sat Jul 30 17:30:57 2016 -0500
Committer: Sergio Pena 
Committed: Sat Jul 30 17:30:57 2016 -0500

--
 ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/034c9821/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
index 7b312a5..eaf4792 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
@@ -701,7 +701,7 @@ public abstract class Operator 
implements Serializable,C
 LOG.debug(id + " Close done");
   }
 } catch (HiveException e) {
-  e.printStackTrace();
+  LOG.warn("Caught exception while closing operator: " + e.getMessage(), e);
   throw e;
 }
   }

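A minimal stand-alone sketch of the pattern applied above (assuming SLF4J on the classpath; this is not the Hive Operator class): passing the exception as the last argument to LOG.warn emits the full stack trace through the configured logging backend instead of writing it to stderr, and the rethrow preserves the existing error handling.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogAndRethrowSketch {
  private static final Logger LOG = LoggerFactory.getLogger(LogAndRethrowSketch.class);

  // Stand-in for the close() path: log with the cause, then rethrow.
  static void closeOp() throws Exception {
    try {
      throw new Exception("simulated close failure");
    } catch (Exception e) {
      // The throwable as the final argument makes the logger print the stack trace.
      LOG.warn("Caught exception while closing operator: " + e.getMessage(), e);
      throw e;
    }
  }

  public static void main(String[] args) {
    try {
      closeOp();
    } catch (Exception expected) {
      // Already logged above; callers still observe the failure.
    }
  }
}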


[19/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
new file mode 100644
index 000..5e7507e
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
@@ -0,0 +1,2899 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all primitive 
conversions
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_boolean
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all primitive 
conversions
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_boolean
+PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c5 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col6, 
type:string, comment:), ]
+POSTHOOK: Lineage: 

[02/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_primitive.q.out
new file mode 100644
index 000..f924239
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_primitive.q.out
@@ -0,0 +1,2903 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
+-- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the row SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_boolean
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
+-- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the row SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_boolean
+PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 

[15/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_primitive.q.out
new file mode 100644
index 000..2a1be16
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_primitive.q.out
@@ -0,0 +1,2899 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all primitive 
conversions
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_boolean
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all primitive 
conversions
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_boolean
+PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c5 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col6, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean 

[27/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_part.q.out
new file mode 100644
index 000..fafad50
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_part.q.out
@@ -0,0 +1,3662 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Partitioned
+-- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
+-- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Partitioned
+-- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
+-- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_add_int_permute_select
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@part_add_int_permute_select
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add columns(c int)
+POSTHOOK: type: ALTERTABLE_ADDCOLS
+POSTHOOK: Input: default@part_add_int_permute_select
+POSTHOOK: Output: default@part_add_int_permute_select
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=2)
+values (5, 1, 'new', 10),
+   (6, 2, 'new', 20),
+   (7, 3, 'new', 30),
+   (8, 4, 'new', 40)
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__2
+PREHOOK: Output: default@part_add_int_permute_select@part=2
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=2)
+values (5, 1, 'new', 10),
+   (6, 2, 'new', 20),
+   (7, 3, 'new', 30),
+   (8, 4, 'new', 40)
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__2
+POSTHOOK: Output: default@part_add_int_permute_select@part=2
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).b SIMPLE 

[11/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
new file mode 100644
index 000..b414e3d
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
@@ -0,0 +1,669 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_struct1
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_struct1
+PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+PREHOOK: type: LOAD
+ A masked pattern was here 
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+POSTHOOK: type: LOAD
+ A masked pattern was here 
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+PREHOOK: type: QUERY
+PREHOOK: Input: default@struct1_a_txt
+PREHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@struct1_a_txt
+POSTHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
+struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
+PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1 order by insert_num
+PREHOOK: type: QUERY
+PREHOOK: Input: default@part_change_various_various_struct1
+PREHOOK: Input: default@part_change_various_various_struct1@part=1
+ A masked pattern was here 
+POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1 order by insert_num
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@part_change_various_various_struct1
+POSTHOOK: Input: default@part_change_various_various_struct1@part=1
+ A 

[06/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_primitive.q.out
new file mode 100644
index 000..3689718
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_primitive.q.out
@@ -0,0 +1,2903 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
+-- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the vector SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_boolean
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
+-- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the vector SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_boolean
+PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 

[18/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_table.q.out
new file mode 100644
index 000..798aee1
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_table.q.out
@@ -0,0 +1,3747 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, MapWork, Table
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, MapWork, Table
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@table_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@table_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
+   numFiles0   
+   numRows 0   
+   rawDataSize 0   
+   totalSize   0   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
+InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
+OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@table_add_int_permute_select
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+POSTHOOK: type: ALTERTABLE_ADDCOLS
+POSTHOOK: Input: 

[09/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_table.q.out
new file mode 100644
index 000..a0a1241
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_table.q.out
@@ -0,0 +1,3747 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@table_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@table_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
+   numFiles0   
+   numRows 0   
+   rawDataSize 0   
+   totalSize   0   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
+InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
+OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@table_add_int_permute_select
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+POSTHOOK: type: 

[22/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_table.q.out
new file mode 100644
index 000..b569a94
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_table.q.out
@@ -0,0 +1,3747 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Table
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Table
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@table_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@table_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
+   numFiles0   
+   numRows 0   
+   rawDataSize 0   
+   totalSize   0   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
+InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
+OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@table_add_int_permute_select
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+POSTHOOK: type: ALTERTABLE_ADDCOLS
+POSTHOOK: 

[28/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
HIVE-14355: Schema evolution for ORC in llap is broken for int to string 
conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/e769be99
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/e769be99
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/e769be99

Branch: refs/heads/master
Commit: e769be9993b56616710a6f2bb47a02078b7c49ab
Parents: 6b0131b
Author: Prasanth Jayachandran 
Authored: Sat Jul 30 13:13:34 2016 -0700
Committer: Prasanth Jayachandran 
Committed: Sat Jul 30 13:15:19 2016 -0700

--
 .../test/resources/testconfiguration.properties |   32 +
 .../hive/llap/io/api/impl/LlapInputFormat.java  |   87 +-
 .../llap/io/decode/ColumnVectorProducer.java|3 +-
 .../llap/io/decode/OrcColumnVectorProducer.java |3 +-
 .../llap/io/decode/OrcEncodedDataConsumer.java  |   17 +
 .../hive/llap/io/decode/ReadPipeline.java   |3 +
 .../llap/io/encoded/OrcEncodedDataReader.java   |   28 +-
 .../hadoop/hive/ql/io/orc/OrcInputFormat.java   |7 +-
 .../llap/orc_ppd_schema_evol_1a.q.out   |   70 +
 .../llap/orc_ppd_schema_evol_1b.q.out   |  124 +
 .../llap/orc_ppd_schema_evol_2a.q.out   |   70 +
 .../llap/orc_ppd_schema_evol_2b.q.out   |  124 +
 .../llap/orc_ppd_schema_evol_3a.q.out   | 1297 ++
 .../schema_evol_orc_acid_mapwork_part.q.out | 3662 
 .../schema_evol_orc_acid_mapwork_table.q.out| 3331 +++
 .../schema_evol_orc_acidvec_mapwork_part.q.out  | 3662 
 .../schema_evol_orc_acidvec_mapwork_table.q.out | 3331 +++
 .../schema_evol_orc_nonvec_fetchwork_part.q.out | 3995 +
 ...schema_evol_orc_nonvec_fetchwork_table.q.out | 3747 
 .../schema_evol_orc_nonvec_mapwork_part.q.out   | 3995 +
 ...ol_orc_nonvec_mapwork_part_all_complex.q.out |  669 +++
 ..._orc_nonvec_mapwork_part_all_primitive.q.out | 2899 +
 .../schema_evol_orc_nonvec_mapwork_table.q.out  | 3747 
 .../llap/schema_evol_orc_vec_mapwork_part.q.out | 3995 +
 ..._evol_orc_vec_mapwork_part_all_complex.q.out |  669 +++
 ...vol_orc_vec_mapwork_part_all_primitive.q.out | 2899 +
 .../schema_evol_orc_vec_mapwork_table.q.out | 3747 
 .../clientpositive/llap/schema_evol_stats.q.out |  392 ++
 .../schema_evol_text_nonvec_mapwork_part.q.out  | 3995 +
 ...l_text_nonvec_mapwork_part_all_complex.q.out |  669 +++
 ...text_nonvec_mapwork_part_all_primitive.q.out | 2899 +
 .../schema_evol_text_nonvec_mapwork_table.q.out | 3747 
 .../schema_evol_text_vec_mapwork_part.q.out | 3999 ++
 ...evol_text_vec_mapwork_part_all_complex.q.out |  673 +++
 ...ol_text_vec_mapwork_part_all_primitive.q.out | 2903 +
 .../schema_evol_text_vec_mapwork_table.q.out| 3751 
 .../schema_evol_text_vecrow_mapwork_part.q.out  | 3999 ++
 ...l_text_vecrow_mapwork_part_all_complex.q.out |  675 +++
 ...text_vecrow_mapwork_part_all_primitive.q.out | 2903 +
 .../schema_evol_text_vecrow_mapwork_table.q.out | 3751 
 40 files changed, 80519 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/itests/src/test/resources/testconfiguration.properties
--
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index e5f40e6..ac249ed 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -500,6 +500,38 @@ minillap.shared.query.files=bucket_map_join_tez1.q,\
   llap_nullscan.q,\
   mrr.q,\
   orc_ppd_basic.q,\
+  orc_ppd_schema_evol_1a.q,\
+  orc_ppd_schema_evol_1b.q,\
+  orc_ppd_schema_evol_2a.q,\
+  orc_ppd_schema_evol_2b.q,\
+  orc_ppd_schema_evol_3a.q,\
+  schema_evol_stats.q,\
+  schema_evol_orc_acid_mapwork_part.q,\
+  schema_evol_orc_acid_mapwork_table.q,\
+  schema_evol_orc_acidvec_mapwork_part.q,\
+  schema_evol_orc_acidvec_mapwork_table.q,\
+  schema_evol_orc_nonvec_fetchwork_part.q,\
+  schema_evol_orc_nonvec_fetchwork_table.q,\
+  schema_evol_orc_nonvec_mapwork_part.q,\
+  schema_evol_orc_nonvec_mapwork_part_all_complex.q,\
+  schema_evol_orc_nonvec_mapwork_part_all_primitive.q,\
+  schema_evol_orc_nonvec_mapwork_table.q,\
+  schema_evol_orc_vec_mapwork_part.q,\
+  schema_evol_orc_vec_mapwork_part_all_complex.q,\
+  schema_evol_orc_vec_mapwork_part_all_primitive.q,\
+  schema_evol_orc_vec_mapwork_table.q,\
+  schema_evol_text_nonvec_mapwork_part.q,\
+  

[12/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part.q.out
new file mode 100644
index 000..e0d0cab
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part.q.out
@@ -0,0 +1,3995 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@part_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@part_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Partition Information 
+# col_name data_type   comment 
+
+part   int 
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
+InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
+OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@part_add_int_permute_select
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS 

[23/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_part.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_part.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_part.q.out
new file mode 100644
index 000..d1634a9
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_part.q.out
@@ -0,0 +1,3995 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Partitioned
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Partitioned
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@part_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@part_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Partition Information 
+# col_name data_type   comment 
+
+part   int 
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
+InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
+OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@part_add_int_permute_select
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table 

[04/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part.q.out
new file mode 100644
index 000..7f5afd6
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part.q.out
@@ -0,0 +1,3999 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
+-- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the row SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
+-- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the row SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@part_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@part_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Partition Information 
+# col_name data_type   comment 
+
+part   int 
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
+InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
+OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, 
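
The NOTE in the q-file header above refers to Hive's switches for reading TEXTFILE data through the vectorized code path. A minimal sketch of that setup as session-level SET commands (the q-file's actual header settings are not visible in this excerpt, so treat these as an illustration):

    SET hive.vectorized.execution.enabled=true;
    SET hive.vectorized.use.row.serde.deserialize=true;      -- row SERDE path, used by the schema_evol_text_vecrow_* tests
    -- SET hive.vectorized.use.vector.serde.deserialize=true; -- vector SERDE path, used by the schema_evol_text_vec_* variants
    SELECT insert_num, part, a, b FROM part_add_int_permute_select ORDER BY insert_num;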

[21/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part.q.out
new file mode 100644
index 000..127d5a9
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part.q.out
@@ -0,0 +1,3995 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@part_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@part_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Partition Information 
+# col_name data_type   comment 
+
+part   int 
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
+InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
+OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@part_add_int_permute_select
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table 

[13/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_stats.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_stats.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_stats.q.out
new file mode 100644
index 000..63b4c19
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_stats.q.out
@@ -0,0 +1,392 @@
+PREHOOK: query: CREATE TABLE partitioned1(a INT, b STRING) PARTITIONED BY(part 
INT) STORED AS TEXTFILE
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@partitioned1
+POSTHOOK: query: CREATE TABLE partitioned1(a INT, b STRING) PARTITIONED 
BY(part INT) STORED AS TEXTFILE
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@partitioned1
+PREHOOK: query: insert into table partitioned1 partition(part=1) values(1, 
'original'),(2, 'original'), (3, 'original'),(4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@partitioned1@part=1
+POSTHOOK: query: insert into table partitioned1 partition(part=1) values(1, 
'original'),(2, 'original'), (3, 'original'),(4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@partitioned1@part=1
+POSTHOOK: Lineage: partitioned1 PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+POSTHOOK: Lineage: partitioned1 PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table partitioned1 add columns(c int, d string)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@partitioned1
+PREHOOK: Output: default@partitioned1
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table partitioned1 add columns(c int, d string)
+POSTHOOK: type: ALTERTABLE_ADDCOLS
+POSTHOOK: Input: default@partitioned1
+POSTHOOK: Output: default@partitioned1
+PREHOOK: query: insert into table partitioned1 partition(part=2) values(1, 
'new', 10, 'ten'),(2, 'new', NULL, 'twenty'), (3, 'new', 30, 'thirty'),(4, 
'new', 40, 'forty')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__2
+PREHOOK: Output: default@partitioned1@part=2
+POSTHOOK: query: insert into table partitioned1 partition(part=2) values(1, 
'new', 10, 'ten'),(2, 'new', NULL, 'twenty'), (3, 'new', 30, 'thirty'),(4, 
'new', 40, 'forty')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__2
+POSTHOOK: Output: default@partitioned1@part=2
+POSTHOOK: Lineage: partitioned1 PARTITION(part=2).a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+POSTHOOK: Lineage: partitioned1 PARTITION(part=2).b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: partitioned1 PARTITION(part=2).c EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: partitioned1 PARTITION(part=2).d SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
+PREHOOK: query: analyze table partitioned1 compute statistics for columns
+PREHOOK: type: QUERY
+PREHOOK: Input: default@partitioned1
+PREHOOK: Input: default@partitioned1@part=1
+PREHOOK: Input: default@partitioned1@part=2
+ A masked pattern was here 
+POSTHOOK: query: analyze table partitioned1 compute statistics for columns
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@partitioned1
+POSTHOOK: Input: default@partitioned1@part=1
+POSTHOOK: Input: default@partitioned1@part=2
+ A masked pattern was here 
+PREHOOK: query: desc formatted partitioned1
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@partitioned1
+POSTHOOK: query: desc formatted partitioned1
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@partitioned1
+# col_name data_type   comment 
+
+a  int 
+b  string  
+c  int 
+d  string  
+
+# Partition Information 
+# col_name data_type   comment 
+
+part   int 
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked 
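
schema_evol_stats.q covers the interaction of column statistics with added columns: new columns are added, a new partition is written with them populated, and column stats are recomputed. Stripped of the PREHOOK/POSTHOOK bookkeeping, the sequence above is roughly this sketch:

    ALTER TABLE partitioned1 ADD COLUMNS (c INT, d STRING);
    INSERT INTO TABLE partitioned1 PARTITION (part=2)
      VALUES (1, 'new', 10, 'ten'), (2, 'new', NULL, 'twenty');
    ANALYZE TABLE partitioned1 COMPUTE STATISTICS FOR COLUMNS;
    DESC FORMATTED partitioned1;   -- the table-level metadata shown in the output above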

[08/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part.q.out
new file mode 100644
index 000..5b1ec99
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part.q.out
@@ -0,0 +1,3999 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
+-- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the vector SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
+-- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the vector SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@part_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@part_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Partition Information 
+# col_name data_type   comment 
+
+part   int 
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
+InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
+OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, 

[26/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_table.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_table.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_table.q.out
new file mode 100644
index 000..e69e9bd
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_table.q.out
@@ -0,0 +1,3331 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Table
+-- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
+-- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Table
+-- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
+-- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@table_add_int_permute_select
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+POSTHOOK: type: ALTERTABLE_ADDCOLS
+POSTHOOK: Input: default@table_add_int_permute_select
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: insert into table table_add_int_permute_select
+values (5, 1, 'new', 10),
+   (6, 2, 'new', 20),
+   (7, 3, 'new', 30),
+   (8, 4, 'new', 40)
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__2
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (5, 1, 'new', 10),
+   (6, 2, 'new', 20),
+   (7, 3, 'new', 30),
+   (8, 4, 'new', 40)
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__2
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.c EXPRESSION 
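
The ACID variant requires a bucketed, transactional ORC table, and the IMPORTANT NOTE explains why hive.exec.schema.evolution=false is set (evolution is always applied for ACID). The session-level transaction settings are not visible in this excerpt; a typical setup would look roughly like the sketch below, where the concurrency and txn-manager properties are assumptions rather than anything taken from the q-file:

    SET hive.support.concurrency=true;                                    -- assumed ACID prerequisite
    SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;  -- assumed ACID prerequisite
    SET hive.exec.schema.evolution=false;                                 -- per the note above
    CREATE TABLE table_add_int_permute_select (insert_num INT, a INT, b STRING)
      CLUSTERED BY (a) INTO 2 BUCKETS STORED AS ORC
      TBLPROPERTIES ('transactional'='true');
    ALTER TABLE table_add_int_permute_select ADD COLUMNS (c INT);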

[17/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part.q.out
new file mode 100644
index 000..ad1bd9b
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part.q.out
@@ -0,0 +1,3995 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@part_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@part_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Partition Information 
+# col_name data_type   comment 
+
+part   int 
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
+InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
+OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@part_add_int_permute_select
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add 
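
Unlike the TEXTFILE variants, the ORC vectorized tests need no SERDE-specific switches; ORC is read natively into vectorized row batches once vectorization is on. Illustrative settings (assumed, not shown in this excerpt):

    SET hive.vectorized.execution.enabled=true;
    SET hive.fetch.task.conversion=none;   -- force a real map task so the vectorized reader is exercised
    SELECT insert_num, part, a, b, c FROM part_add_int_permute_select ORDER BY insert_num;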

[01/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
Repository: hive
Updated Branches:
  refs/heads/master 6b0131b04 -> e769be999


http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_table.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_table.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_table.q.out
new file mode 100644
index 000..bfa8195
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_table.q.out
@@ -0,0 +1,3751 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
+-- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the row SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
+-- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the row SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@table_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@table_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
+   numFiles0   
+   numRows 0   
+   rawDataSize 0   
+   totalSize   0   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
+InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
+OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  

[03/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_complex.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_complex.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_complex.q.out
new file mode 100644
index 000..15c3485
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_complex.q.out
@@ -0,0 +1,675 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
+-- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the row SERDE methods.
+
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_struct1
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
+-- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the row SERDE methods.
+
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_struct1
+PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+PREHOOK: type: LOAD
+ A masked pattern was here 
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+POSTHOOK: type: LOAD
+ A masked pattern was here 
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+PREHOOK: type: QUERY
+PREHOOK: Input: default@struct1_a_txt
+PREHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@struct1_a_txt
+POSTHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
+struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
+PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1 order by insert_num
+PREHOOK: type: QUERY
+PREHOOK: Input: default@part_change_various_various_struct1
+PREHOOK: Input: 
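
The complex-type tests stage their rows through a delimited text table plus LOAD DATA, presumably because a plain VALUES clause cannot express struct literals. The struct field list is elided by this archive's rendering, so the one in the sketch below is only a placeholder; the rest of the pattern follows the statements above:

    CREATE TABLE struct1_a_txt (
      insert_num INT,
      s1 STRUCT<c1:BOOLEAN, c2:STRING>,   -- placeholder fields; the real definition is elided above
      b STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
    COLLECTION ITEMS TERMINATED BY ','
    MAP KEYS TERMINATED BY ':'
    STORED AS TEXTFILE;
    LOAD DATA LOCAL INPATH '../../data/files/struct1_a.txt' OVERWRITE INTO TABLE struct1_a_txt;
    INSERT INTO TABLE part_change_various_various_struct1 PARTITION (part=1)
      SELECT * FROM struct1_a_txt;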

[10/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
new file mode 100644
index 000..7ec794f
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
@@ -0,0 +1,2899 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_boolean
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
+--
+--
+-- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
+--
+CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_boolean
+PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
+values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
+  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
+  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
+  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_change_various_various_boolean@part=1
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c5 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col6, 
type:string, comment:), ]
+POSTHOOK: Lineage: 
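
The primitive-conversion tests change existing column types in place and then read back old partitions, letting schema evolution coerce the stored values (here, numeric, string, and timestamp columns read as BOOLEAN). The exact ALTER statement is not shown in this excerpt; a single-column illustration of the idea:

    -- some conversions may also need: SET hive.metastore.disallow.incompatible.col.type.changes=false;
    ALTER TABLE part_change_various_various_boolean CHANGE COLUMN c1 c1 BOOLEAN;
    SELECT insert_num, part, c1, b FROM part_change_various_various_boolean ORDER BY insert_num;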

[14/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_table.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_table.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_table.q.out
new file mode 100644
index 000..a0f0703
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_table.q.out
@@ -0,0 +1,3747 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Vectorized, MapWork, Table
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Vectorized, MapWork, Table
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@table_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@table_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
+   numFiles0   
+   numRows 0   
+   rawDataSize 0   
+   totalSize   0   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
+InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
+OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@table_add_int_permute_select
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+POSTHOOK: type: ALTERTABLE_ADDCOLS
+POSTHOOK: Input: 

[07/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_complex.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_complex.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_complex.q.out
new file mode 100644
index 000..d4995c7
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_complex.q.out
@@ -0,0 +1,673 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
+-- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the vector SERDE methods.
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_struct1
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
+-- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the vector SERDE methods.
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_struct1
+PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+PREHOOK: type: LOAD
+ A masked pattern was here 
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+POSTHOOK: type: LOAD
+ A masked pattern was here 
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+PREHOOK: type: QUERY
+PREHOOK: Input: default@struct1_a_txt
+PREHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@struct1_a_txt
+POSTHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
+struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
+PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1 order by insert_num
+PREHOOK: type: QUERY
+PREHOOK: Input: default@part_change_various_various_struct1
+PREHOOK: Input: 

[20/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
new file mode 100644
index 000..9f47c1c
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
@@ -0,0 +1,669 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all complex 
conversions
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_struct1
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all complex 
conversions
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_struct1
+PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+PREHOOK: type: LOAD
+ A masked pattern was here 
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+POSTHOOK: type: LOAD
+ A masked pattern was here 
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+PREHOOK: type: QUERY
+PREHOOK: Input: default@struct1_a_txt
+PREHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@struct1_a_txt
+POSTHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
+struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
+PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1 order by insert_num
+PREHOOK: type: QUERY
+PREHOOK: Input: default@part_change_various_various_struct1
+PREHOOK: Input: default@part_change_various_various_struct1@part=1
+ A masked pattern was here 
+POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1 order by insert_num
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@part_change_various_various_struct1
+POSTHOOK: Input: default@part_change_various_various_struct1@part=1
+ A masked pattern was 

[05/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_table.q.out
new file mode 100644
index 0000000..4137c31
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_table.q.out
@@ -0,0 +1,3751 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
+-- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the vector SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
+-- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
+--  vectorized reading of TEXTFILE format files using the vector SERDE methods.
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@table_add_int_permute_select
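The NOTE above refers to hive.vectorized.use.vector.serde.deserialize, which makes the vectorized reader deserialize TEXTFILE rows through the vector SerDe methods rather than the row-mode path. The SET statements themselves are not reproduced in this output; as a sketch of what such a .q test typically enables (assumed, not copied from the test file):

  set hive.vectorized.use.vector.serde.deserialize=true;
  set hive.vectorized.execution.enabled=true;
  -- both lines are assumptions; only the first property is named in the note above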
+PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: default@table_add_int_permute_select
+POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: default@table_add_int_permute_select
+col_name   data_type   comment
+# col_name data_type   comment 
+
+insert_num int 
+a  int 
+b  string  
+
+# Detailed Table Information
+Database:  default  
+ A masked pattern was here 
+Retention: 0
+ A masked pattern was here 
+Table Type:MANAGED_TABLE
+Table Parameters:   
+   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
+   numFiles0   
+   numRows 0   
+   rawDataSize 0   
+   totalSize   0   
+ A masked pattern was here 
+
+# Storage Information   
+SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
+InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
+OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
+Compressed:No   
+Num Buckets:   -1   
+Bucket Columns:[]   
+Sort Columns:  []   
+Storage Desc Params:
+   serialization.format1   
+PREHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table 

[25/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_part.q.out
new file mode 100644
index 0000000..abe001d
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_part.q.out
@@ -0,0 +1,3662 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, ACID Vectorized, MapWork, Partitioned
+-- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
+-- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, ACID Vectorized, MapWork, Partitioned
+-- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
+-- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_add_int_permute_select
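Per the IMPORTANT NOTE above, ACID tables always apply schema evolution, so this test turns the explicit flag off rather than on. A sketch of the configuration such an ACID vectorized run relies on (the actual SET lines are not shown in this output and are assumed):

  set hive.support.concurrency=true;
  set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
  set hive.exec.schema.evolution=false;   -- per the note: evolution is always used for ACID anyway
  set hive.vectorized.execution.enabled=true;

The CREATE TABLE above accordingly declares the table bucketed, stored as ORC, and transactional via TBLPROPERTIES ('transactional'='true').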
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@part_add_int_permute_select@part=1
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@part_add_int_permute_select
+PREHOOK: Output: default@part_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table part_add_int_permute_select add columns(c int)
+POSTHOOK: type: ALTERTABLE_ADDCOLS
+POSTHOOK: Input: default@part_add_int_permute_select
+POSTHOOK: Output: default@part_add_int_permute_select
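Because the ADD COLUMNS above is non-cascade, only the table-level schema gains column c; partition part=1 was written before the change and carries no data for it. The expected pattern when the table is read back (hedged, since the full result set is not reproduced here) is that pre-existing rows surface NULL for c while rows inserted after the ALTER carry real values:

  select insert_num, part, a, b, c from part_add_int_permute_select;
  -- 1   1   1   original   NULL   <- row written before ADD COLUMNS (assumed shape)
  -- 5   2   1   new        10     <- row written after ADD COLUMNS (assumed shape)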
+PREHOOK: query: insert into table part_add_int_permute_select partition(part=2)
+values (5, 1, 'new', 10),
+   (6, 2, 'new', 20),
+   (7, 3, 'new', 30),
+   (8, 4, 'new', 40)
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__2
+PREHOOK: Output: default@part_add_int_permute_select@part=2
+POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=2)
+values (5, 1, 'new', 10),
+   (6, 2, 'new', 20),
+   (7, 3, 'new', 30),
+   (8, 4, 'new', 40)
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__2
+POSTHOOK: Output: default@part_add_int_permute_select@part=2
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).b SIMPLE 

[24/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_table.q.out
new file mode 100644
index 0000000..8ce8794
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_table.q.out
@@ -0,0 +1,3331 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, ACID Vectorized, MapWork, Table
+-- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
+-- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, ACID Vectorized, MapWork, Table
+-- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
+-- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
+--
+--
+-- SECTION: ALTER TABLE ADD COLUMNS
+--
+--
+-- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
+--
+--
+CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__1
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (1, 1, 'original'),
+   (2, 2, 'original'),
+   (3, 3, 'original'),
+   (4, 4, 'original')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__1
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
+_col0  _col1   _col2
+PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+PREHOOK: type: ALTERTABLE_ADDCOLS
+PREHOOK: Input: default@table_add_int_permute_select
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
+alter table table_add_int_permute_select add columns(c int)
+POSTHOOK: type: ALTERTABLE_ADDCOLS
+POSTHOOK: Input: default@table_add_int_permute_select
+POSTHOOK: Output: default@table_add_int_permute_select
+PREHOOK: query: insert into table table_add_int_permute_select
+values (5, 1, 'new', 10),
+   (6, 2, 'new', 20),
+   (7, 3, 'new', 30),
+   (8, 4, 'new', 40)
+PREHOOK: type: QUERY
+PREHOOK: Input: default@values__tmp__table__2
+PREHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: query: insert into table table_add_int_permute_select
+values (5, 1, 'new', 10),
+   (6, 2, 'new', 20),
+   (7, 3, 'new', 30),
+   (8, 4, 'new', 40)
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@values__tmp__table__2
+POSTHOOK: Output: default@table_add_int_permute_select
+POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
+POSTHOOK: Lineage: table_add_int_permute_select.c EXPRESSION 

[16/28] hive git commit: HIVE-14355: Schema evolution for ORC in llap is broken for int to string conversion (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-07-30 Thread prasanthj
http://git-wip-us.apache.org/repos/asf/hive/blob/e769be99/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_complex.q.out
new file mode 100644
index 0000000..86e211b
--- /dev/null
+++ 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_complex.q.out
@@ -0,0 +1,669 @@
+PREHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all complex 
conversions
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@part_change_various_various_struct1
+POSTHOOK: query: -- SORT_QUERY_RESULTS
+--
+-- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all complex 
conversions
+--
+--
+--
+-- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@part_change_various_various_struct1
+PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
+row format delimited fields terminated by '|'
+collection items terminated by ','
+map keys terminated by ':' stored as textfile
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+PREHOOK: type: LOAD
+ A masked pattern was here 
+PREHOOK: Output: default@struct1_a_txt
+POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
+POSTHOOK: type: LOAD
+ A masked pattern was here 
+POSTHOOK: Output: default@struct1_a_txt
+PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+PREHOOK: type: QUERY
+PREHOOK: Input: default@struct1_a_txt
+PREHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@struct1_a_txt
+POSTHOOK: Output: default@part_change_various_various_struct1@part=1
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
+POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
+struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
+PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1 order by insert_num
+PREHOOK: type: QUERY
+PREHOOK: Input: default@part_change_various_various_struct1
+PREHOOK: Input: default@part_change_various_various_struct1@part=1
+ A masked pattern was here 
+POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1 order by insert_num
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@part_change_various_various_struct1
+POSTHOOK: Input: default@part_change_various_various_struct1@part=1
+ A masked pattern was here