Build failed in Jenkins: Phoenix | Master #837

2015-07-14 Thread Apache Jenkins Server
See 

Changes:

[samarth.jain] PHOENIX-2117 Fix flapping DataIngestIT

--
[...truncated 118527 lines...]
Running org.apache.phoenix.end2end.HashJoinIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.652 sec - in org.apache.phoenix.end2end.DeleteIT
Running org.apache.phoenix.end2end.ReadOnlyIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.092 sec - in org.apache.phoenix.end2end.SkipScanQueryIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.252 sec - in org.apache.phoenix.end2end.ReadOnlyIT
Running org.apache.phoenix.end2end.InstrFunctionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.932 sec - in org.apache.phoenix.end2end.InstrFunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.148 sec - in org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.IsNullIT
Running org.apache.phoenix.end2end.DateTimeIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.507 sec - in org.apache.phoenix.end2end.IsNullIT
Running org.apache.phoenix.end2end.HashJoinLocalIndexIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 426.015 sec - in org.apache.phoenix.end2end.index.LocalIndexIT
Running org.apache.phoenix.end2end.StoreNullsIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.758 sec - in org.apache.phoenix.end2end.DateTimeIT
Running org.apache.phoenix.end2end.ExpFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.108 sec - in org.apache.phoenix.end2end.ExpFunctionEnd2EndIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.43 sec - in org.apache.phoenix.end2end.StoreNullsIT
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.922 sec - in org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.UpgradeIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.084 sec - in org.apache.phoenix.end2end.HashJoinLocalIndexIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.94 sec - in org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.595 sec - in org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.ExecuteStatementsIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.409 sec - in org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.915 sec - in org.apache.phoenix.end2end.ExecuteStatementsIT
Running org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.145 sec - in org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Running org.apache.phoenix.end2end.ServerExceptionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.831 sec - in org.apache.phoenix.end2end.ServerExceptionIT
Running org.apache.phoenix.end2end.ToDateFunctionIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.494 sec - in org.apache.phoenix.end2end.ToDateFunctionIT
Running org.apache.phoenix.end2end.CSVCommonsLoaderIT
Tests run: 108, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 311.821 sec - in org.apache.phoenix.end2end.HashJoinIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.073 sec - in org.apache.phoenix.end2end.CSVCommonsLoaderIT
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.115 sec - in org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.MappingTableDataTypeIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.52 sec - in org.apache.phoenix.end2end.AlterSessionIT
Running org.apache.phoenix.end2end.TenantSpecificViewIndexIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 121.82 sec - in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.764 sec - in org.apache.phoenix.end2end.MappingTableDataTypeIT
Running org.apache.phoenix.end2end.InListIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.386 sec - in org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.StatementHintsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.693 sec - in org.apache.phoenix.end2end.StatementHintsIT
Running org.a

Apache-Phoenix | 4.x-HBase-1.0 | Build Successful

2015-07-14 Thread Apache Jenkins Server
4.x-HBase-1.0 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.0

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastCompletedBuild/testReport/

Changes
[samarth.jain] PHOENIX-2117 Fix flapping DataIngestIT



Build times for the last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout


Apache-Phoenix | Master | Build Successful

2015-07-14 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[samarth.jain] PHOENIX-2111 Race condition on creation of new view and adding of column to base table



Build times for the last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout


Apache-Phoenix | 4.x-HBase-1.0 | Build Successful

2015-07-14 Thread Apache Jenkins Server
4.x-HBase-1.0 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.0

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastCompletedBuild/testReport/

Changes
[samarth.jain] PHOENIX-2111 Race condition on creation of new view and adding of column to base table



Build times for the last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout


phoenix git commit: PHOENIX-2117 Fix flapping DataIngestIT

2015-07-14 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.0 3d56c6382 -> 64cf1b8ff


PHOENIX-2117 Fix flapping DataIngestIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/64cf1b8f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/64cf1b8f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/64cf1b8f

Branch: refs/heads/4.x-HBase-1.0
Commit: 64cf1b8ffc45131e50b09fa43f536f31baa116a9
Parents: 3d56c63
Author: Samarth 
Authored: Tue Jul 14 17:41:59 2015 -0700
Committer: Samarth 
Committed: Tue Jul 14 17:41:59 2015 -0700

--
 phoenix-pherf/src/test/resources/datamodel/test_schema.sql | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/64cf1b8f/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
--
diff --git a/phoenix-pherf/src/test/resources/datamodel/test_schema.sql b/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
index 162d288..4e6b9d4 100644
--- a/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
+++ b/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
@@ -29,4 +29,4 @@ CREATE TABLE IF NOT EXISTS PHERF.TEST_TABLE (
 PARENT_ID,
 CREATED_DATE DESC
 )
-) VERSIONS=1,MULTI_TENANT=true,SALT_BUCKETS=16
+) VERSIONS=1,MULTI_TENANT=true
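For context on the one-line fix above: dropping SALT_BUCKETS=16 means Phoenix no longer prepends a salt byte to each row key of PHERF.TEST_TABLE. The sketch below is only a conceptual illustration of how salting assigns a bucket from a row-key hash; the hash function, class, and method names here are invented for illustration and are not Phoenix's actual implementation.

```java
import java.util.Arrays;

public class SaltSketch {
    // Illustrative only: a salted table prepends one byte, derived from a
    // hash of the row key, so writes spread across SALT_BUCKETS regions.
    // This stand-in hash is NOT the one Phoenix uses.
    static byte saltByte(byte[] rowKey, int saltBuckets) {
        int h = Arrays.hashCode(rowKey); // stand-in hash of the row key
        return (byte) Math.floorMod(h, saltBuckets);
    }

    public static void main(String[] args) {
        byte[] key = "row1".getBytes();
        byte bucket = saltByte(key, 16);
        // The salt byte always falls in [0, saltBuckets)
        System.out.println(bucket >= 0 && bucket < 16);
    }
}
```

The same row key always maps to the same bucket, which is what makes salted scans reassemblable; without the option, rows sort purely by their natural key order.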



phoenix git commit: PHOENIX-2117 Fix flapping DataIngestIT

2015-07-14 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master 9f09f1a5d -> cf2bc5517


PHOENIX-2117 Fix flapping DataIngestIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cf2bc551
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cf2bc551
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cf2bc551

Branch: refs/heads/master
Commit: cf2bc55175788603830ba8bc8b3eacc0998361c1
Parents: 9f09f1a
Author: Samarth 
Authored: Tue Jul 14 17:40:29 2015 -0700
Committer: Samarth 
Committed: Tue Jul 14 17:40:46 2015 -0700

--
 phoenix-pherf/src/test/resources/datamodel/test_schema.sql | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cf2bc551/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
--
diff --git a/phoenix-pherf/src/test/resources/datamodel/test_schema.sql b/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
index 162d288..4e6b9d4 100644
--- a/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
+++ b/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
@@ -29,4 +29,4 @@ CREATE TABLE IF NOT EXISTS PHERF.TEST_TABLE (
 PARENT_ID,
 CREATED_DATE DESC
 )
-) VERSIONS=1,MULTI_TENANT=true,SALT_BUCKETS=16
+) VERSIONS=1,MULTI_TENANT=true



[1/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.0 95c5bcd8c -> 3d56c6382


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3d56c638/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index feb5989..52b038b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -18,6 +18,9 @@
 package org.apache.phoenix.query;
 
 import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
+import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERSION;
+import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
+import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
 import static org.apache.phoenix.util.UpgradeUtil.upgradeTo4_5_0;
@@ -110,6 +113,7 @@ import org.apache.phoenix.hbase.index.Indexer;
 import org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.hbase.index.util.VersionUtil;
 import org.apache.phoenix.index.PhoenixIndexBuilder;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.jdbc.PhoenixConnection;
@@ -966,7 +970,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 BlockingRpcCallback rpcCallback = new BlockingRpcCallback();
 GetVersionRequest.Builder builder = GetVersionRequest.newBuilder();
-
+builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.getVersion(controller, builder.build(), rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1265,6 +1269,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 MutationProto mp = ProtobufUtil.toProto(m);
 builder.addTableMetadataMutations(mp.toByteString());
 }
+builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.createTable(controller, builder.build(), rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1293,12 +1298,12 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 builder.setTableName(ByteStringer.wrap(tableBytes));
 builder.setTableTimestamp(tableTimestamp);
 builder.setClientTimestamp(clientTimestamp);
-
-   instance.getTable(controller, builder.build(), rpcCallback);
-   if(controller.getFailedOn() != null) {
-   throw controller.getFailedOn();
-   }
-   return rpcCallback.get();
+builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
+instance.getTable(controller, builder.build(), rpcCallback);
+if(controller.getFailedOn() != null) {
+throw controller.getFailedOn();
+}
+return rpcCallback.get();
 }
 });
 }
@@ -1325,7 +1330,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 }
 builder.setTableType(tableType.getSerializedValue());
 builder.setCascade(cascade);
-
+builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.dropTable(controller, builder.build(), rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1379,6 +1384,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 builder.addTableMetadataMutations(mp
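The hunks in this diff attach an encoded client version to every metadata RPC via VersionUtil.encodeVersion(major, minor, patch). The diff does not show VersionUtil itself, so the bit layout below is a hypothetical byte-packing chosen only to illustrate folding three version components into the single int32 the protobuf field carries; Phoenix's real layout may differ.

```java
public class VersionEncodingSketch {
    // Hypothetical packing: 8 bits each for major, minor, and patch.
    // Illustrates the round trip only; not Phoenix's actual VersionUtil layout.
    static int encodeVersion(int major, int minor, int patch) {
        return (major << 16) | (minor << 8) | patch;
    }

    static int[] decodeVersion(int encoded) {
        return new int[] { (encoded >> 16) & 0xFF, (encoded >> 8) & 0xFF, encoded & 0xFF };
    }

    public static void main(String[] args) {
        int v = encodeVersion(4, 5, 0); // e.g. a 4.5.0 client
        int[] parts = decodeVersion(v);
        System.out.println(parts[0] + "." + parts[1] + "." + parts[2]); // prints "4.5.0"
    }
}
```

Packing into one int keeps the wire format a single optional int32, and a server can compare encoded versions numerically as long as each component stays within its field width.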

[3/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
PHOENIX-2111 Race condition on creation of new view and adding of column to 
base table


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3d56c638
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3d56c638
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3d56c638

Branch: refs/heads/4.x-HBase-1.0
Commit: 3d56c638222f214d7d916f7be03f881578fd3148
Parents: 95c5bcd
Author: Samarth 
Authored: Tue Jul 14 17:35:37 2015 -0700
Committer: Samarth 
Committed: Tue Jul 14 17:35:37 2015 -0700

--
 .../coprocessor/MetaDataEndpointImpl.java   |  190 ++-
 .../coprocessor/generated/MetaDataProtos.java   | 1243 +-
 .../query/ConnectionQueryServicesImpl.java  |   75 +-
 .../apache/phoenix/schema/MetaDataClient.java   |   12 +-
 .../org/apache/phoenix/util/MetaDataUtil.java   |   14 +
 .../org/apache/phoenix/util/PhoenixRuntime.java |4 -
 .../org/apache/phoenix/util/UpgradeUtil.java|4 +-
 phoenix-protocol/src/main/MetaDataService.proto |   14 +-
 8 files changed, 1388 insertions(+), 168 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3d56c638/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index e385a8f..32ce536 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1068,6 +1068,50 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 return null;
 }
 
+/**
+ * 
+ * @return null if the physical table row information is not present.
+ * 
+ */
+private static Mutation getPhysicalTableForView(List tableMetadata, byte[][] parentSchemaTableNames) {
+int size = tableMetadata.size();
+byte[][] rowKeyMetaData = new byte[3][];
+MetaDataUtil.getTenantIdAndSchemaAndTableName(tableMetadata, rowKeyMetaData);
+Mutation physicalTableRow = null;
+boolean physicalTableLinkFound = false;
+if (size >= 2) {
+int i = size - 1;
+while (i >= 1) {
+Mutation m = tableMetadata.get(i);
+if (m instanceof Put) {
+LinkType linkType = MetaDataUtil.getLinkType(m);
+if (linkType == LinkType.PHYSICAL_TABLE) {
+physicalTableRow = m;
+physicalTableLinkFound = true;
+break;
+}
+}
+i--;
+}
+}
+if (!physicalTableLinkFound) {
+parentSchemaTableNames[0] = null;
+parentSchemaTableNames[1] = null;
+return null;
+}
+rowKeyMetaData = new byte[5][];
+getVarChars(physicalTableRow.getRow(), 5, rowKeyMetaData);
+byte[] colBytes = rowKeyMetaData[PhoenixDatabaseMetaData.COLUMN_NAME_INDEX];
+byte[] famBytes = rowKeyMetaData[PhoenixDatabaseMetaData.FAMILY_NAME_INDEX];
+if ((colBytes == null || colBytes.length == 0) && (famBytes != null && famBytes.length > 0)) {
+byte[] sName = SchemaUtil.getSchemaNameFromFullName(famBytes).getBytes();
+byte[] tName = SchemaUtil.getTableNameFromFullName(famBytes).getBytes();
+parentSchemaTableNames[0] = sName;
+parentSchemaTableNames[1] = tName;
+}
+return physicalTableRow;
+}
+
 @Override
 public void createTable(RpcController controller, CreateTableRequest request,
 RpcCallback done) {
@@ -1075,66 +1119,101 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 byte[][] rowKeyMetaData = new byte[3][];
 byte[] schemaName = null;
 byte[] tableName = null;
-
 try {
 List tableMetadata = ProtobufUtil.getMutations(request);
 MetaDataUtil.getTenantIdAndSchemaAndTableName(tableMetadata, rowKeyMetaData);
 byte[] tenantIdBytes = rowKeyMetaData[PhoenixDatabaseMetaData.TENANT_ID_INDEX];
 schemaName = rowKeyMetaData[PhoenixDatabaseMetaData.SCHEMA_NAME_INDEX];
 tableName = rowKeyMetaData[PhoenixDatabaseMetaData.TABLE_NAME_INDEX];
-byte[] parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
-byte[] lockTableName = parentTableName == null ? tableName : parentTableName;
-byte[] lockKey = SchemaUtil.getTableKey(tenantIdBytes, schemaName,
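The new getPhysicalTableForView above walks the metadata mutations from the tail, since the PHYSICAL_TABLE link row, when present, sits near the end of the list. A standalone sketch of that reverse-scan pattern follows; the MetaRow and LinkType types here are simplified stand-ins for HBase's Put and Phoenix's LinkType, not the real classes.

```java
import java.util.ArrayList;
import java.util.List;

public class ReverseScanSketch {
    // Stand-in for a metadata mutation tagged with a Phoenix link type.
    enum LinkType { NONE, INDEX_TABLE, PHYSICAL_TABLE }

    static class MetaRow {
        final String name;
        final LinkType link;
        MetaRow(String name, LinkType link) { this.name = name; this.link = link; }
    }

    // Walk from the tail down to index 1 (index 0 is the table header row,
    // which can never be a link), stopping at the first PHYSICAL_TABLE link.
    static MetaRow findPhysicalTableLink(List<MetaRow> metadata) {
        for (int i = metadata.size() - 1; i >= 1; i--) {
            MetaRow m = metadata.get(i);
            if (m.link == LinkType.PHYSICAL_TABLE) {
                return m;
            }
        }
        return null; // no link: the table is not a view over another physical table
    }

    public static void main(String[] args) {
        List<MetaRow> md = new ArrayList<>();
        md.add(new MetaRow("table-header", LinkType.NONE));
        md.add(new MetaRow("column", LinkType.NONE));
        md.add(new MetaRow("link-row", LinkType.PHYSICAL_TABLE));
        System.out.println(findPhysicalTableLink(md).name); // prints "link-row"
    }
}
```

Returning null when no link is found mirrors the method's documented contract ("null if the physical table row information is not present"), letting the caller fall back to treating the table as its own physical table.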

[2/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3d56c638/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
index acb32d2..a121d28 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
@@ -1811,6 +1811,16 @@ public final class MetaDataProtos {
  * required int64 clientTimestamp = 5;
  */
 long getClientTimestamp();
+
+// optional int32 clientVersion = 6;
+/**
+ * optional int32 clientVersion = 6;
+ */
+boolean hasClientVersion();
+/**
+ * optional int32 clientVersion = 6;
+ */
+int getClientVersion();
   }
   /**
* Protobuf type {@code GetTableRequest}
@@ -1888,6 +1898,11 @@ public final class MetaDataProtos {
   clientTimestamp_ = input.readInt64();
   break;
 }
+case 48: {
+  bitField0_ |= 0x0020;
+  clientVersion_ = input.readInt32();
+  break;
+}
   }
 }
   } catch (com.google.protobuf.InvalidProtocolBufferException e) {
@@ -2008,12 +2023,29 @@ public final class MetaDataProtos {
   return clientTimestamp_;
 }
 
+// optional int32 clientVersion = 6;
+public static final int CLIENTVERSION_FIELD_NUMBER = 6;
+private int clientVersion_;
+/**
+ * optional int32 clientVersion = 6;
+ */
+public boolean hasClientVersion() {
+  return ((bitField0_ & 0x0020) == 0x0020);
+}
+/**
+ * optional int32 clientVersion = 6;
+ */
+public int getClientVersion() {
+  return clientVersion_;
+}
+
 private void initFields() {
   tenantId_ = com.google.protobuf.ByteString.EMPTY;
   schemaName_ = com.google.protobuf.ByteString.EMPTY;
   tableName_ = com.google.protobuf.ByteString.EMPTY;
   tableTimestamp_ = 0L;
   clientTimestamp_ = 0L;
+  clientVersion_ = 0;
 }
 private byte memoizedIsInitialized = -1;
 public final boolean isInitialized() {
@@ -2062,6 +2094,9 @@ public final class MetaDataProtos {
   if (((bitField0_ & 0x0010) == 0x0010)) {
 output.writeInt64(5, clientTimestamp_);
   }
+  if (((bitField0_ & 0x0020) == 0x0020)) {
+output.writeInt32(6, clientVersion_);
+  }
   getUnknownFields().writeTo(output);
 }
 
@@ -2091,6 +2126,10 @@ public final class MetaDataProtos {
 size += com.google.protobuf.CodedOutputStream
   .computeInt64Size(5, clientTimestamp_);
   }
+  if (((bitField0_ & 0x0020) == 0x0020)) {
+size += com.google.protobuf.CodedOutputStream
+  .computeInt32Size(6, clientVersion_);
+  }
   size += getUnknownFields().getSerializedSize();
   memoizedSerializedSize = size;
   return size;
@@ -2139,6 +2178,11 @@ public final class MetaDataProtos {
 result = result && (getClientTimestamp()
 == other.getClientTimestamp());
   }
+  result = result && (hasClientVersion() == other.hasClientVersion());
+  if (hasClientVersion()) {
+result = result && (getClientVersion()
+== other.getClientVersion());
+  }
   result = result &&
   getUnknownFields().equals(other.getUnknownFields());
   return result;
@@ -2172,6 +2216,10 @@ public final class MetaDataProtos {
 hash = (37 * hash) + CLIENTTIMESTAMP_FIELD_NUMBER;
 hash = (53 * hash) + hashLong(getClientTimestamp());
   }
+  if (hasClientVersion()) {
+hash = (37 * hash) + CLIENTVERSION_FIELD_NUMBER;
+hash = (53 * hash) + getClientVersion();
+  }
   hash = (29 * hash) + getUnknownFields().hashCode();
   memoizedHashCode = hash;
   return hash;
@@ -2291,6 +2339,8 @@ public final class MetaDataProtos {
 bitField0_ = (bitField0_ & ~0x0008);
 clientTimestamp_ = 0L;
 bitField0_ = (bitField0_ & ~0x0010);
+clientVersion_ = 0;
+bitField0_ = (bitField0_ & ~0x0020);
 return this;
   }
 
@@ -2339,6 +2389,10 @@ public final class MetaDataProtos {
   to_bitField0_ |= 0x0010;
 }
 result.clientTimestamp_ = clientTimestamp_;
+if (((from_bitField0_ & 0x0020) == 0x0020)) {
+  to_bitField0_ |= 0x0020;
+}
+result.clientVersion_ = clientVersion_;
 result.bitField0_ = to_bitField0_;
 onBuilt();
 return result;
@@ -2370,6 +2424,9 @@ public final class MetaDataProtos {
 if (other.hasClientTimestamp()) {
   setClientTimestamp(other.getClientTimestamp());
 }
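The generated protobuf code in this diff tracks presence of the new optional clientVersion field (field 6) with a bit in bitField0_, masked by 0x0020. The sketch below isolates that presence-bit pattern outside of protobuf; the class is illustrative, not the generated GetTableRequest.

```java
public class PresenceBitSketch {
    // Mimics proto2 generated code: each optional field gets one bit in
    // bitField0_; hasX() tests the bit, setX() sets the value and the bit.
    private int bitField0_;
    private int clientVersion_;
    private static final int CLIENT_VERSION_BIT = 0x0020; // field 6 in the diff

    void setClientVersion(int v) {
        clientVersion_ = v;
        bitField0_ |= CLIENT_VERSION_BIT;
    }

    boolean hasClientVersion() {
        return (bitField0_ & CLIENT_VERSION_BIT) == CLIENT_VERSION_BIT;
    }

    int getClientVersion() {
        return clientVersion_;
    }

    public static void main(String[] args) {
        PresenceBitSketch req = new PresenceBitSketch();
        System.out.println(req.hasClientVersion()); // prints "false": unset, even though the value is 0
        req.setClientVersion(0);
        System.out.println(req.hasClientVersion()); // prints "true": 0 is now an explicit value
    }
}
```

This is why adding clientVersion as an optional field is wire-compatible: old clients simply never set the bit, and the server can distinguish "no version sent" from "version 0".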

[3/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
PHOENIX-2111 Race condition on creation of new view and adding of column to 
base table


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9f09f1a5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9f09f1a5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9f09f1a5

Branch: refs/heads/master
Commit: 9f09f1a5ddce38c256c647ca7cd80617259e35ea
Parents: 4b99c63
Author: Samarth 
Authored: Tue Jul 14 17:24:01 2015 -0700
Committer: Samarth 
Committed: Tue Jul 14 17:24:01 2015 -0700

--
 .../coprocessor/MetaDataEndpointImpl.java   |  239 ++--
 .../coprocessor/generated/MetaDataProtos.java   | 1243 +-
 .../query/ConnectionQueryServicesImpl.java  |   75 +-
 .../apache/phoenix/schema/MetaDataClient.java   |   12 +-
 .../org/apache/phoenix/util/MetaDataUtil.java   |   14 +
 .../org/apache/phoenix/util/PhoenixRuntime.java |4 -
 .../org/apache/phoenix/util/UpgradeUtil.java|4 +-
 phoenix-protocol/src/main/MetaDataService.proto |   14 +-
 8 files changed, 1414 insertions(+), 191 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f09f1a5/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index dcfe61d..5396a69 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1068,6 +1068,50 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 return null;
 }
 
+/**
+ * 
+ * @return null if the physical table row information is not present.
+ * 
+ */
+private static Mutation getPhysicalTableForView(List tableMetadata, byte[][] parentSchemaTableNames) {
+int size = tableMetadata.size();
+byte[][] rowKeyMetaData = new byte[3][];
+MetaDataUtil.getTenantIdAndSchemaAndTableName(tableMetadata, rowKeyMetaData);
+Mutation physicalTableRow = null;
+boolean physicalTableLinkFound = false;
+if (size >= 2) {
+int i = size - 1;
+while (i >= 1) {
+Mutation m = tableMetadata.get(i);
+if (m instanceof Put) {
+LinkType linkType = MetaDataUtil.getLinkType(m);
+if (linkType == LinkType.PHYSICAL_TABLE) {
+physicalTableRow = m;
+physicalTableLinkFound = true;
+break;
+}
+}
+i--;
+}
+}
+if (!physicalTableLinkFound) {
+parentSchemaTableNames[0] = null;
+parentSchemaTableNames[1] = null;
+return null;
+}
+rowKeyMetaData = new byte[5][];
+getVarChars(physicalTableRow.getRow(), 5, rowKeyMetaData);
+byte[] colBytes = rowKeyMetaData[PhoenixDatabaseMetaData.COLUMN_NAME_INDEX];
+byte[] famBytes = rowKeyMetaData[PhoenixDatabaseMetaData.FAMILY_NAME_INDEX];
+if ((colBytes == null || colBytes.length == 0) && (famBytes != null && famBytes.length > 0)) {
+byte[] sName = SchemaUtil.getSchemaNameFromFullName(famBytes).getBytes();
+byte[] tName = SchemaUtil.getTableNameFromFullName(famBytes).getBytes();
+parentSchemaTableNames[0] = sName;
+parentSchemaTableNames[1] = tName;
+}
+return physicalTableRow;
+}
+
 @Override
 public void createTable(RpcController controller, CreateTableRequest request,
 RpcCallback done) {
@@ -1075,66 +1119,101 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 byte[][] rowKeyMetaData = new byte[3][];
 byte[] schemaName = null;
 byte[] tableName = null;
-
 try {
 List tableMetadata = ProtobufUtil.getMutations(request);
 MetaDataUtil.getTenantIdAndSchemaAndTableName(tableMetadata, rowKeyMetaData);
 byte[] tenantIdBytes = rowKeyMetaData[PhoenixDatabaseMetaData.TENANT_ID_INDEX];
 schemaName = rowKeyMetaData[PhoenixDatabaseMetaData.SCHEMA_NAME_INDEX];
 tableName = rowKeyMetaData[PhoenixDatabaseMetaData.TABLE_NAME_INDEX];
-byte[] parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
-byte[] lockTableName = parentTableName == null ? tableName : parentTableName;
-byte[] lockKey = SchemaUtil.getTableKey(tenantIdBytes, schemaName, lock

[1/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master 4b99c632c -> 9f09f1a5d


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f09f1a5/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index feb5989..52b038b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -18,6 +18,9 @@
 package org.apache.phoenix.query;
 
 import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
+import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERSION;
+import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
+import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
 import static org.apache.phoenix.util.UpgradeUtil.upgradeTo4_5_0;
@@ -110,6 +113,7 @@ import org.apache.phoenix.hbase.index.Indexer;
 import org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.hbase.index.util.VersionUtil;
 import org.apache.phoenix.index.PhoenixIndexBuilder;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.jdbc.PhoenixConnection;
@@ -966,7 +970,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 BlockingRpcCallback rpcCallback = new BlockingRpcCallback();
 GetVersionRequest.Builder builder = GetVersionRequest.newBuilder();
-
+builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.getVersion(controller, builder.build(), rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1265,6 +1269,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 MutationProto mp = ProtobufUtil.toProto(m);
 builder.addTableMetadataMutations(mp.toByteString());
 }
+builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.createTable(controller, builder.build(), rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1293,12 +1298,12 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 builder.setTableName(ByteStringer.wrap(tableBytes));
 builder.setTableTimestamp(tableTimestamp);
 builder.setClientTimestamp(clientTimestamp);
-
-   instance.getTable(controller, builder.build(), rpcCallback);
-   if(controller.getFailedOn() != null) {
-   throw controller.getFailedOn();
-   }
-   return rpcCallback.get();
+builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
+instance.getTable(controller, builder.build(), rpcCallback);
+if(controller.getFailedOn() != null) {
+throw controller.getFailedOn();
+}
+return rpcCallback.get();
 }
 });
 }
@@ -1325,7 +1330,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 }
 builder.setTableType(tableType.getSerializedValue());
 builder.setCascade(cascade);
-
+builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.dropTable(controller, builder.build(), rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1379,6 +1384,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 builder.addTableMetadataMutations(mp.toByte

[2/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f09f1a5/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
index acb32d2..a121d28 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
@@ -1811,6 +1811,16 @@ public final class MetaDataProtos {
  * required int64 clientTimestamp = 5;
  */
 long getClientTimestamp();
+
+// optional int32 clientVersion = 6;
+/**
+ * optional int32 clientVersion = 6;
+ */
+boolean hasClientVersion();
+/**
+ * optional int32 clientVersion = 6;
+ */
+int getClientVersion();
   }
   /**
* Protobuf type {@code GetTableRequest}
@@ -1888,6 +1898,11 @@ public final class MetaDataProtos {
   clientTimestamp_ = input.readInt64();
   break;
 }
+case 48: {
+  bitField0_ |= 0x0020;
+  clientVersion_ = input.readInt32();
+  break;
+}
   }
 }
   } catch (com.google.protobuf.InvalidProtocolBufferException e) {
@@ -2008,12 +2023,29 @@ public final class MetaDataProtos {
   return clientTimestamp_;
 }
 
+// optional int32 clientVersion = 6;
+public static final int CLIENTVERSION_FIELD_NUMBER = 6;
+private int clientVersion_;
+/**
+ * optional int32 clientVersion = 6;
+ */
+public boolean hasClientVersion() {
+  return ((bitField0_ & 0x0020) == 0x0020);
+}
+/**
+ * optional int32 clientVersion = 6;
+ */
+public int getClientVersion() {
+  return clientVersion_;
+}
+
 private void initFields() {
   tenantId_ = com.google.protobuf.ByteString.EMPTY;
   schemaName_ = com.google.protobuf.ByteString.EMPTY;
   tableName_ = com.google.protobuf.ByteString.EMPTY;
   tableTimestamp_ = 0L;
   clientTimestamp_ = 0L;
+  clientVersion_ = 0;
 }
 private byte memoizedIsInitialized = -1;
 public final boolean isInitialized() {
@@ -2062,6 +2094,9 @@ public final class MetaDataProtos {
   if (((bitField0_ & 0x0010) == 0x0010)) {
 output.writeInt64(5, clientTimestamp_);
   }
+  if (((bitField0_ & 0x0020) == 0x0020)) {
+output.writeInt32(6, clientVersion_);
+  }
   getUnknownFields().writeTo(output);
 }
 
@@ -2091,6 +2126,10 @@ public final class MetaDataProtos {
 size += com.google.protobuf.CodedOutputStream
   .computeInt64Size(5, clientTimestamp_);
   }
+  if (((bitField0_ & 0x0020) == 0x0020)) {
+size += com.google.protobuf.CodedOutputStream
+  .computeInt32Size(6, clientVersion_);
+  }
   size += getUnknownFields().getSerializedSize();
   memoizedSerializedSize = size;
   return size;
@@ -2139,6 +2178,11 @@ public final class MetaDataProtos {
 result = result && (getClientTimestamp()
 == other.getClientTimestamp());
   }
+  result = result && (hasClientVersion() == other.hasClientVersion());
+  if (hasClientVersion()) {
+result = result && (getClientVersion()
+== other.getClientVersion());
+  }
   result = result &&
   getUnknownFields().equals(other.getUnknownFields());
   return result;
@@ -2172,6 +2216,10 @@ public final class MetaDataProtos {
 hash = (37 * hash) + CLIENTTIMESTAMP_FIELD_NUMBER;
 hash = (53 * hash) + hashLong(getClientTimestamp());
   }
+  if (hasClientVersion()) {
+hash = (37 * hash) + CLIENTVERSION_FIELD_NUMBER;
+hash = (53 * hash) + getClientVersion();
+  }
   hash = (29 * hash) + getUnknownFields().hashCode();
   memoizedHashCode = hash;
   return hash;
@@ -2291,6 +2339,8 @@ public final class MetaDataProtos {
 bitField0_ = (bitField0_ & ~0x0008);
 clientTimestamp_ = 0L;
 bitField0_ = (bitField0_ & ~0x0010);
+clientVersion_ = 0;
+bitField0_ = (bitField0_ & ~0x0020);
 return this;
   }
 
@@ -2339,6 +2389,10 @@ public final class MetaDataProtos {
   to_bitField0_ |= 0x0010;
 }
 result.clientTimestamp_ = clientTimestamp_;
+if (((from_bitField0_ & 0x0020) == 0x0020)) {
+  to_bitField0_ |= 0x0020;
+}
+result.clientVersion_ = clientVersion_;
 result.bitField0_ = to_bitField0_;
 onBuilt();
 return result;
@@ -2370,6 +2424,9 @@ public final class MetaDataProtos {
 if (other.hasClientTimestamp()) {
   setClientTimestamp(other.getClientTimestamp());
 }
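The generated code above tracks the new optional field's presence with a bit mask: bit 0x0020 of bitField0_ records whether clientVersion was parsed (wire tag 48 = field 6, varint) or explicitly set, and an unset field is skipped by writeTo(), getSerializedSize(), equals(), and hashCode(). A minimal standalone sketch of this proto2-style presence tracking (class name and mask constant are illustrative, not Phoenix's):

```java
public class PresenceBits {
    private int bitField0_;
    private int clientVersion_;
    // Same mask the generated GetTableRequest uses for field 6 above.
    private static final int CLIENT_VERSION_BIT = 0x0020;

    public void setClientVersion(int version) {
        bitField0_ |= CLIENT_VERSION_BIT; // mark the optional field present
        clientVersion_ = version;
    }

    public boolean hasClientVersion() {
        return (bitField0_ & CLIENT_VERSION_BIT) == CLIENT_VERSION_BIT;
    }

    public int getClientVersion() {
        return clientVersion_; // 0, the proto2 int32 default, when unset
    }
}
```

Because an absent field is simply never written to the wire, older clients that know nothing about field 6 remain wire-compatible, which is what the server-side hasClientVersion() checks rely on.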

phoenix git commit: PHOENIX-2111 Addendum - prevent IllegalStateException for older clients

2015-07-14 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 4198ea6af -> 481a802ee


PHOENIX-2111 Addendum - prevent IllegalStateException for older clients


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/481a802e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/481a802e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/481a802e

Branch: refs/heads/4.x-HBase-0.98
Commit: 481a802ee56b0560c34cec0bb956931f73c73d2c
Parents: 4198ea6
Author: Samarth 
Authored: Tue Jul 14 17:05:21 2015 -0700
Committer: Samarth 
Committed: Tue Jul 14 17:05:35 2015 -0700

--
 .../coprocessor/MetaDataEndpointImpl.java   | 22 
 1 file changed, 4 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/481a802e/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 6372700..da8110c 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1110,20 +1110,6 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 return physicalTableRow;
 }
 
-private long getSequenceNumberForTable(byte[] headerRowKey) throws 
IOException {
-Get get = new Get(headerRowKey);
-get.addColumn(TABLE_FAMILY_BYTES, TABLE_SEQ_NUM_BYTES);
-byte[] b;
-try (HTableInterface hTable = 
ServerUtil.getHTableForCoprocessorScan(env, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES)) {
-Result result = hTable.get(get);
-b = result.getValue(TABLE_FAMILY_BYTES, TABLE_SEQ_NUM_BYTES);
-}
-if (b == null) {
-throw new IllegalArgumentException("No rows returned for the row 
key: " + Bytes.toString(headerRowKey));
-}
-return PLong.INSTANCE.getCodec().decodeLong(new 
ImmutableBytesWritable(b), SortOrder.getDefault());
-}
-
 @Override
 public void createTable(RpcController controller, CreateTableRequest 
request,
RpcCallback<MetaDataResponse> done) {
@@ -1203,10 +1189,10 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 if (tableType == PTableType.VIEW && 
viewPhysicalTableRow != null && request.hasClientVersion()) {
 // Starting 4.5, the client passes the sequence 
number of the physical table in the table metadata.
 parentTableSeqNumber = 
MetaDataUtil.getSequenceNumber(viewPhysicalTableRow);
-} else if (tableType == PTableType.VIEW) {
-// Before 4.5, due to a bug, the parent table key 
wasn't available. Using get to 
-// figure out the parent table sequence number.
-parentTableSeqNumber = 
getSequenceNumberForTable(parentTableKey);
+} else if (tableType == PTableType.VIEW && 
!request.hasClientVersion()) {
+// Before 4.5, due to a bug, the parent table key 
wasn't available.
+// So don't do anything and prevent the exception 
from being thrown.
+parentTableSeqNumber = 
parentTable.getSequenceNumber();
 } else {
 parentTableSeqNumber = 
MetaDataUtil.getParentSequenceNumber(tableMetadata);
 }
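The hunk above dispatches on capability rather than version arithmetic: a 4.5+ client proves itself by having set clientVersion, in which case the sequence number comes from the table metadata; an older client (no clientVersion field) falls back to the server's cached parent table instead of the Get that previously threw IllegalArgumentException. A hedged sketch of that three-way dispatch (method and parameter names are hypothetical):

```java
public class SeqNumberDispatch {
    // Mirrors the branch structure in the diff above: view + physical-table
    // row + clientVersion reads from metadata; a pre-4.5 view client uses
    // the cached parent; everything else reads the parent link.
    public static long parentSeqNumber(boolean isView,
                                       boolean hasPhysicalTableRow,
                                       boolean hasClientVersion,
                                       long fromMetadata,
                                       long fromCache,
                                       long fromParentLink) {
        if (isView && hasPhysicalTableRow && hasClientVersion) {
            return fromMetadata;
        } else if (isView && !hasClientVersion) {
            return fromCache;
        }
        return fromParentLink;
    }
}
```

Keeping the old-client branch free of server-side reads is what makes the addendum cheap: no extra RPC, just a cache lookup.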



phoenix git commit: Fix Apache RAT warnings

2015-07-14 Thread mujtaba
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 177e2f247 -> 4198ea6af


Fix Apache RAT warnings


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4198ea6a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4198ea6a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4198ea6a

Branch: refs/heads/4.x-HBase-0.98
Commit: 4198ea6af48aadc72f120fb3cc7c1c50845989ab
Parents: 177e2f2
Author: Mujtaba 
Authored: Tue Jul 14 16:48:15 2015 -0700
Committer: Mujtaba 
Committed: Tue Jul 14 16:48:15 2015 -0700

--
 .../org/apache/phoenix/rpc/UpdateCacheIT.java   | 20 +++
 .../expression/function/ArrayFillFunction.java  | 20 +++
 .../phoenix/schema/SequenceAllocation.java  | 21 +++-
 .../phoenix/schema/SequenceAllocationTest.java  | 19 ++
 .../apache/phoenix/pherf/workload/Workload.java | 18 +
 .../phoenix/pig/udf/ReserveNSequenceTestIT.java | 21 ++--
 6 files changed, 116 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4198ea6a/phoenix-core/src/it/java/org/apache/phoenix/rpc/UpdateCacheIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/rpc/UpdateCacheIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/rpc/UpdateCacheIT.java
index c657e41..1efe5fb 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/rpc/UpdateCacheIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/rpc/UpdateCacheIT.java
@@ -1,3 +1,23 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.phoenix.rpc;
 
 import static org.apache.phoenix.util.TestUtil.INDEX_DATA_SCHEMA;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4198ea6a/phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayFillFunction.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayFillFunction.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayFillFunction.java
index 5c3a2e5..db104e8 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayFillFunction.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayFillFunction.java
@@ -1,3 +1,23 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.phoenix.expression.function;
 
 import java.util.Arrays;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4198ea6a/phoenix-core/src/main/java/org/apache/phoenix/schema/SequenceAllocation.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/SequenceAllocation.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/SequenceAllocation.java
index afb4a20..aaccc23 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/SequenceAllocation.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/SequenceAllocation.java
@@ -1,3 +1,22 @@
+/*
+ * Copyright 2010 The Apac

phoenix git commit: PHOENIX-2117 Fix flapping DataIngestIT

2015-07-14 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 8ffd6d8d6 -> 177e2f247


PHOENIX-2117 Fix flapping DataIngestIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/177e2f24
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/177e2f24
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/177e2f24

Branch: refs/heads/4.x-HBase-0.98
Commit: 177e2f247cdcb18df7fda5e384983e54c3bdb062
Parents: 8ffd6d8
Author: Samarth 
Authored: Tue Jul 14 16:55:19 2015 -0700
Committer: Samarth 
Committed: Tue Jul 14 16:56:49 2015 -0700

--
 phoenix-pherf/src/test/resources/datamodel/test_schema.sql | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/177e2f24/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
--
diff --git a/phoenix-pherf/src/test/resources/datamodel/test_schema.sql 
b/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
index 162d288..4e6b9d4 100644
--- a/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
+++ b/phoenix-pherf/src/test/resources/datamodel/test_schema.sql
@@ -29,4 +29,4 @@ CREATE TABLE IF NOT EXISTS PHERF.TEST_TABLE (
 PARENT_ID,
 CREATED_DATE DESC
 )
-) VERSIONS=1,MULTI_TENANT=true,SALT_BUCKETS=16
+) VERSIONS=1,MULTI_TENANT=true



[3/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
PHOENIX-2111 Race condition on creation of new view and adding of column to 
base table


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8ffd6d8d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8ffd6d8d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8ffd6d8d

Branch: refs/heads/4.x-HBase-0.98
Commit: 8ffd6d8d609f5f82a8a20f97f4d7c7347504abc5
Parents: 1928ba0
Author: Samarth 
Authored: Tue Jul 14 15:13:13 2015 -0700
Committer: Samarth 
Committed: Tue Jul 14 15:13:13 2015 -0700

--
 .../coprocessor/MetaDataEndpointImpl.java   |  221 +++-
 .../coprocessor/generated/MetaDataProtos.java   | 1243 +-
 .../query/ConnectionQueryServicesImpl.java  |   75 +-
 .../apache/phoenix/schema/MetaDataClient.java   |   12 +-
 .../org/apache/phoenix/util/MetaDataUtil.java   |   14 +
 .../org/apache/phoenix/util/PhoenixRuntime.java |4 -
 .../org/apache/phoenix/util/UpgradeUtil.java|4 +-
 phoenix-protocol/src/main/MetaDataService.proto |   14 +-
 8 files changed, 1413 insertions(+), 174 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8ffd6d8d/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index defc7af..6372700 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1066,7 +1066,64 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 }
 return null;
 }
-
+/**
+ * 
+ * @return null if the physical table row information is not present.
+ * 
+ */
+private static Mutation getPhysicalTableForView(List<Mutation> 
tableMetadata, byte[][] parentSchemaTableNames) {
+int size = tableMetadata.size();
+byte[][] rowKeyMetaData = new byte[3][];
+MetaDataUtil.getTenantIdAndSchemaAndTableName(tableMetadata, 
rowKeyMetaData);
+Mutation physicalTableRow = null;
+boolean physicalTableLinkFound = false;
+if (size >= 2) {
+int i = size - 1;
+while (i >= 1) {
+Mutation m = tableMetadata.get(i);
+if (m instanceof Put) {
+LinkType linkType = MetaDataUtil.getLinkType(m);
+if (linkType == LinkType.PHYSICAL_TABLE) {
+physicalTableRow = m;
+physicalTableLinkFound = true;
+break;
+}
+}
+i--;
+}
+}
+if (!physicalTableLinkFound) {
+parentSchemaTableNames[0] = null;
+parentSchemaTableNames[1] = null;
+return null;
+}
+rowKeyMetaData = new byte[5][];
+getVarChars(physicalTableRow.getRow(), 5, rowKeyMetaData);
+byte[] colBytes = 
rowKeyMetaData[PhoenixDatabaseMetaData.COLUMN_NAME_INDEX];
+byte[] famBytes = 
rowKeyMetaData[PhoenixDatabaseMetaData.FAMILY_NAME_INDEX];
+if ((colBytes == null || colBytes.length == 0) && (famBytes != null && 
famBytes.length > 0)) {
+byte[] sName = 
SchemaUtil.getSchemaNameFromFullName(famBytes).getBytes();
+byte[] tName = 
SchemaUtil.getTableNameFromFullName(famBytes).getBytes();
+parentSchemaTableNames[0] = sName;
+parentSchemaTableNames[1] = tName;
+}
+return physicalTableRow;
+}
+
+private long getSequenceNumberForTable(byte[] headerRowKey) throws 
IOException {
+Get get = new Get(headerRowKey);
+get.addColumn(TABLE_FAMILY_BYTES, TABLE_SEQ_NUM_BYTES);
+byte[] b;
+try (HTableInterface hTable = 
ServerUtil.getHTableForCoprocessorScan(env, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES)) {
+Result result = hTable.get(get);
+b = result.getValue(TABLE_FAMILY_BYTES, TABLE_SEQ_NUM_BYTES);
+}
+if (b == null) {
+throw new IllegalArgumentException("No rows returned for the row 
key: " + Bytes.toString(headerRowKey));
+}
+return PLong.INSTANCE.getCodec().decodeLong(new 
ImmutableBytesWritable(b), SortOrder.getDefault());
+}
+
 @Override
 public void createTable(RpcController controller, CreateTableRequest 
request,
RpcCallback<MetaDataResponse> done) {
@@ -1074,66 +1131,101 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 byte[][] rowKeyMetaData = new b
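The getPhysicalTableForView method added above walks the metadata mutations from the tail down to index 1, skipping the header row at index 0, because the PHYSICAL_TABLE link row is appended near the end of the mutation list. A generic, self-contained sketch of that search pattern (class and method names are illustrative):

```java
import java.util.List;
import java.util.function.Predicate;

public class TailScan {
    // Scan from the end of the list down to index 1; index 0 is the table
    // header row and is deliberately skipped, as in the diff above.
    public static <T> T findFromTail(List<T> items, Predicate<T> match) {
        for (int i = items.size() - 1; i >= 1; i--) {
            T candidate = items.get(i);
            if (match.test(candidate)) {
                return candidate;
            }
        }
        return null; // no match; the caller nulls its out-parameters
    }
}
```

Searching tail-first is a constant-factor win here since the link mutation is normally one of the last entries.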

[2/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/8ffd6d8d/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
index acb32d2..a121d28 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
@@ -1811,6 +1811,16 @@ public final class MetaDataProtos {
  * required int64 clientTimestamp = 5;
  */
 long getClientTimestamp();
+
+// optional int32 clientVersion = 6;
+/**
+ * optional int32 clientVersion = 6;
+ */
+boolean hasClientVersion();
+/**
+ * optional int32 clientVersion = 6;
+ */
+int getClientVersion();
   }
   /**
* Protobuf type {@code GetTableRequest}
@@ -1888,6 +1898,11 @@ public final class MetaDataProtos {
   clientTimestamp_ = input.readInt64();
   break;
 }
+case 48: {
+  bitField0_ |= 0x0020;
+  clientVersion_ = input.readInt32();
+  break;
+}
   }
 }
   } catch (com.google.protobuf.InvalidProtocolBufferException e) {
@@ -2008,12 +2023,29 @@ public final class MetaDataProtos {
   return clientTimestamp_;
 }
 
+// optional int32 clientVersion = 6;
+public static final int CLIENTVERSION_FIELD_NUMBER = 6;
+private int clientVersion_;
+/**
+ * optional int32 clientVersion = 6;
+ */
+public boolean hasClientVersion() {
+  return ((bitField0_ & 0x0020) == 0x0020);
+}
+/**
+ * optional int32 clientVersion = 6;
+ */
+public int getClientVersion() {
+  return clientVersion_;
+}
+
 private void initFields() {
   tenantId_ = com.google.protobuf.ByteString.EMPTY;
   schemaName_ = com.google.protobuf.ByteString.EMPTY;
   tableName_ = com.google.protobuf.ByteString.EMPTY;
   tableTimestamp_ = 0L;
   clientTimestamp_ = 0L;
+  clientVersion_ = 0;
 }
 private byte memoizedIsInitialized = -1;
 public final boolean isInitialized() {
@@ -2062,6 +2094,9 @@ public final class MetaDataProtos {
   if (((bitField0_ & 0x0010) == 0x0010)) {
 output.writeInt64(5, clientTimestamp_);
   }
+  if (((bitField0_ & 0x0020) == 0x0020)) {
+output.writeInt32(6, clientVersion_);
+  }
   getUnknownFields().writeTo(output);
 }
 
@@ -2091,6 +2126,10 @@ public final class MetaDataProtos {
 size += com.google.protobuf.CodedOutputStream
   .computeInt64Size(5, clientTimestamp_);
   }
+  if (((bitField0_ & 0x0020) == 0x0020)) {
+size += com.google.protobuf.CodedOutputStream
+  .computeInt32Size(6, clientVersion_);
+  }
   size += getUnknownFields().getSerializedSize();
   memoizedSerializedSize = size;
   return size;
@@ -2139,6 +2178,11 @@ public final class MetaDataProtos {
 result = result && (getClientTimestamp()
 == other.getClientTimestamp());
   }
+  result = result && (hasClientVersion() == other.hasClientVersion());
+  if (hasClientVersion()) {
+result = result && (getClientVersion()
+== other.getClientVersion());
+  }
   result = result &&
   getUnknownFields().equals(other.getUnknownFields());
   return result;
@@ -2172,6 +2216,10 @@ public final class MetaDataProtos {
 hash = (37 * hash) + CLIENTTIMESTAMP_FIELD_NUMBER;
 hash = (53 * hash) + hashLong(getClientTimestamp());
   }
+  if (hasClientVersion()) {
+hash = (37 * hash) + CLIENTVERSION_FIELD_NUMBER;
+hash = (53 * hash) + getClientVersion();
+  }
   hash = (29 * hash) + getUnknownFields().hashCode();
   memoizedHashCode = hash;
   return hash;
@@ -2291,6 +2339,8 @@ public final class MetaDataProtos {
 bitField0_ = (bitField0_ & ~0x0008);
 clientTimestamp_ = 0L;
 bitField0_ = (bitField0_ & ~0x0010);
+clientVersion_ = 0;
+bitField0_ = (bitField0_ & ~0x0020);
 return this;
   }
 
@@ -2339,6 +2389,10 @@ public final class MetaDataProtos {
   to_bitField0_ |= 0x0010;
 }
 result.clientTimestamp_ = clientTimestamp_;
+if (((from_bitField0_ & 0x0020) == 0x0020)) {
+  to_bitField0_ |= 0x0020;
+}
+result.clientVersion_ = clientVersion_;
 result.bitField0_ = to_bitField0_;
 onBuilt();
 return result;
@@ -2370,6 +2424,9 @@ public final class MetaDataProtos {
 if (other.hasClientTimestamp()) {
   setClientTimestamp(other.getClientTimestamp());
 }

[1/3] phoenix git commit: PHOENIX-2111 Race condition on creation of new view and adding of column to base table

2015-07-14 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 1928ba03c -> 8ffd6d8d6


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8ffd6d8d/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 1c24b2c..cb405b1 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -18,6 +18,9 @@
 package org.apache.phoenix.query;
 
 import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
+import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERSION;
+import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
+import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
 import static org.apache.phoenix.util.UpgradeUtil.upgradeTo4_5_0;
@@ -110,6 +113,7 @@ import org.apache.phoenix.hbase.index.Indexer;
 import org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.hbase.index.util.VersionUtil;
 import org.apache.phoenix.index.PhoenixIndexBuilder;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.jdbc.PhoenixConnection;
@@ -966,7 +970,7 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
BlockingRpcCallback<GetVersionResponse> 
rpcCallback =
new 
BlockingRpcCallback<GetVersionResponse>();
 GetVersionRequest.Builder builder = 
GetVersionRequest.newBuilder();
-
+
builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, 
PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.getVersion(controller, builder.build(), 
rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1265,6 +1269,7 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 MutationProto mp = ProtobufUtil.toProto(m);
 
builder.addTableMetadataMutations(mp.toByteString());
 }
+
builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, 
PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.createTable(controller, builder.build(), 
rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1293,12 +1298,12 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 builder.setTableName(ByteStringer.wrap(tableBytes));
 builder.setTableTimestamp(tableTimestamp);
 builder.setClientTimestamp(clientTimestamp);
-
-   instance.getTable(controller, builder.build(), rpcCallback);
-   if(controller.getFailedOn() != null) {
-   throw controller.getFailedOn();
-   }
-   return rpcCallback.get();
+
builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, 
PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
+instance.getTable(controller, builder.build(), 
rpcCallback);
+if(controller.getFailedOn() != null) {
+throw controller.getFailedOn();
+}
+return rpcCallback.get();
 }
 });
 }
@@ -1325,7 +1330,7 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 }
 builder.setTableType(tableType.getSerializedValue());
 builder.setCascade(cascade);
-
+
builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, 
PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
 instance.dropTable(controller, builder.build(), 
rpcCallback);
 if(controller.getFailedOn() != null) {
 throw controller.getFailedOn();
@@ -1379,6 +1384,7 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 
builder.addTableMetadataMutations(m
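The new builder.setClientVersion(VersionUtil.encodeVersion(...)) calls above pack the client's major, minor, and patch numbers into the single int32 clientVersion field. The sketch below assumes the conventional one-byte-per-component packing (layout 0x00MMmmpp); it is illustrative, not Phoenix's exact VersionUtil implementation:

```java
public class VersionCodec {
    // Pack major/minor/patch into one int, one byte per component
    // (assumed layout: 0x00MMmmpp).
    public static int encodeVersion(int major, int minor, int patch) {
        return (major << 16) | (minor << 8) | patch;
    }

    public static int major(int encoded) { return (encoded >> 16) & 0xFF; }
    public static int minor(int encoded) { return (encoded >> 8) & 0xFF; }
    public static int patch(int encoded) { return encoded & 0xFF; }
}
```

A single packed int also makes "client at least 4.5.0" a plain integer comparison on the server, with no string parsing.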

Apache-Phoenix | 4.x-HBase-1.0 | Build Successful

2015-07-14 Thread Apache Jenkins Server
4.x-HBase-1.0 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.0

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastCompletedBuild/testReport/

Changes
[jtaylor] PHOENIX-2067 Sort order incorrect for variable length DESC columns



Build times for last couple of runs. Latest build time is the right most | Legend blue: normal, red: test failure, gray: timeout


Apache-Phoenix | Master | Build Successful

2015-07-14 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[jtaylor] PHOENIX-2067 Sort order incorrect for variable length DESC columns



Build times for last couple of runs. Latest build time is the right most | Legend blue: normal, red: test failure, gray: timeout


phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.0 b31608f96 -> 95c5bcd8c


PHOENIX-2067 Sort order incorrect for variable length DESC columns


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/95c5bcd8
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/95c5bcd8
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/95c5bcd8

Branch: refs/heads/4.x-HBase-1.0
Commit: 95c5bcd8c5586c33f58d411a7d424f29463495b7
Parents: b31608f
Author: James Taylor 
Authored: Tue Jul 14 13:40:58 2015 -0700
Committer: James Taylor 
Committed: Tue Jul 14 13:42:14 2015 -0700

--
 .../org/apache/phoenix/util/UpgradeUtil.java | 19 ---
 1 file changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/95c5bcd8/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index 81c9085..c7baf43 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -80,12 +80,11 @@ import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDecimal;
-import org.apache.phoenix.schema.types.PDecimalArray;
 import org.apache.phoenix.schema.types.PInteger;
 import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.schema.types.PVarchar;
-import org.apache.phoenix.schema.types.PVarcharArray;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -878,6 +877,20 @@ public class UpgradeUtil {
 }
 return otherTables;
 }
+
+// Return all types that are not fixed width that may need upgrading due 
to PHOENIX-2067
+// We exclude VARBINARY as we no longer support DESC for it.
+private static String getAffectedDataTypes() {
+StringBuilder buf = new StringBuilder("(" + 
PVarchar.INSTANCE.getSqlType() + "," + PDecimal.INSTANCE.getSqlType() + ",");
+for (PDataType type : PDataType.values()) {
+if (type.isArrayType()) {
+buf.append(type.getSqlType());
+buf.append(',');
+}
+}
+buf.setCharAt(buf.length()-1, ')');
+return buf.toString();
+}
 /**
  * Identify the tables that need to be upgraded due to PHOENIX-2067
  */
@@ -890,7 +903,7 @@ public class UpgradeUtil {
 "WHERE COLUMN_NAME IS NOT NULL\n" + 
 "AND COLUMN_FAMILY IS NULL\n" + 
 "AND SORT_ORDER = " + SortOrder.DESC.getSystemValue() + "\n" + 
-"AND DATA_TYPE IN (" + PVarchar.INSTANCE.getSqlType() + "," + 
PDecimal.INSTANCE.getSqlType() + "," + PVarcharArray.INSTANCE.getSqlType() + 
"," + PDecimalArray.INSTANCE.getSqlType() + ")\n" +
+"AND DATA_TYPE IN " + getAffectedDataTypes() + "\n" +
 "GROUP BY TENANT_ID,TABLE_SCHEM,TABLE_NAME");
Set<String> physicalTables = Sets.newHashSetWithExpectedSize(1024);
List<String> remainingTableNames = addPhysicalTables(conn, rs, 
PTableType.INDEX, physicalTables);
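The getAffectedDataTypes() method in the diff above builds its SQL IN list by appending a comma after every type and then overwriting the final comma with the closing parenthesis via setCharAt. A self-contained sketch of that trick (the list contents are illustrative):

```java
import java.util.List;

public class InListBuilder {
    // Append "<type>," for each element, then overwrite the trailing
    // comma with ')', the same setCharAt trick as in the diff above.
    public static String inList(List<Integer> sqlTypes) {
        StringBuilder buf = new StringBuilder("(");
        for (int type : sqlTypes) {
            buf.append(type).append(',');
        }
        buf.setCharAt(buf.length() - 1, ')');
        return buf.toString();
    }
}
```

Note the edge case: on an empty list, setCharAt would clobber the opening "(" and yield ")". The original is safe only because the VARCHAR and DECIMAL sql types are always appended before the loop runs.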



phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 dfc1af7d9 -> 1928ba03c


PHOENIX-2067 Sort order incorrect for variable length DESC columns


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1928ba03
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1928ba03
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1928ba03

Branch: refs/heads/4.x-HBase-0.98
Commit: 1928ba03ccdf4dd3a9365159b7b6d2ec2f001af2
Parents: dfc1af7
Author: James Taylor 
Authored: Tue Jul 14 13:40:58 2015 -0700
Committer: James Taylor 
Committed: Tue Jul 14 13:41:45 2015 -0700

--
 .../org/apache/phoenix/util/UpgradeUtil.java | 19 ---
 1 file changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1928ba03/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index 81c9085..c7baf43 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -80,12 +80,11 @@ import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDecimal;
-import org.apache.phoenix.schema.types.PDecimalArray;
 import org.apache.phoenix.schema.types.PInteger;
 import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.schema.types.PVarchar;
-import org.apache.phoenix.schema.types.PVarcharArray;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -878,6 +877,20 @@ public class UpgradeUtil {
 }
 return otherTables;
 }
+
+// Return all types that are not fixed width that may need upgrading due to PHOENIX-2067
+// We exclude VARBINARY as we no longer support DESC for it.
+private static String getAffectedDataTypes() {
+StringBuilder buf = new StringBuilder("(" + PVarchar.INSTANCE.getSqlType() + "," + PDecimal.INSTANCE.getSqlType() + ",");
+for (PDataType type : PDataType.values()) {
+if (type.isArrayType()) {
+buf.append(type.getSqlType());
+buf.append(',');
+}
+}
+buf.setCharAt(buf.length()-1, ')');
+return buf.toString();
+}
 /**
  * Identify the tables that need to be upgraded due to PHOENIX-2067
  */
@@ -890,7 +903,7 @@ public class UpgradeUtil {
 "WHERE COLUMN_NAME IS NOT NULL\n" + 
 "AND COLUMN_FAMILY IS NULL\n" + 
 "AND SORT_ORDER = " + SortOrder.DESC.getSystemValue() + "\n" + 
-"AND DATA_TYPE IN (" + PVarchar.INSTANCE.getSqlType() + "," + PDecimal.INSTANCE.getSqlType() + "," + PVarcharArray.INSTANCE.getSqlType() + "," + PDecimalArray.INSTANCE.getSqlType() + ")\n" +
+"AND DATA_TYPE IN " + getAffectedDataTypes() + "\n" +
 "GROUP BY TENANT_ID,TABLE_SCHEM,TABLE_NAME");
 Set physicalTables = Sets.newHashSetWithExpectedSize(1024);
 List remainingTableNames = addPhysicalTables(conn, rs, PTableType.INDEX, physicalTables);
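The patch above replaces a hard-coded IN-list of SQL type codes with a generated one: getAffectedDataTypes() seeds the list with VARCHAR and DECIMAL, appends every array type's code, then overwrites the trailing comma with the closing parenthesis. A minimal standalone sketch of that string-building trick — the class name and the sample type codes below are illustrative, not Phoenix's actual values:

```java
public class AffectedTypesSketch {
    // Hypothetical stand-in for getAffectedDataTypes(): builds a SQL IN-list
    // string "(t1,t2,...)" from a fixed set of type codes instead of
    // iterating PDataType.values() as the patch does.
    static String toInList(int[] sqlTypes) {
        StringBuilder buf = new StringBuilder("(");
        for (int t : sqlTypes) {
            buf.append(t).append(',');
        }
        // Overwrite the trailing comma with ')', same as the patch does.
        buf.setCharAt(buf.length() - 1, ')');
        return buf.toString();
    }

    public static void main(String[] args) {
        // java.sql.Types.VARCHAR = 12 and DECIMAL = 3; the array codes
        // here are illustrative placeholders.
        System.out.println(toInList(new int[] { 12, 3, 2012, 2003 })); // (12,3,2012,2003)
    }
}
```

Generating the list means any array type added later is picked up by the upgrade query automatically, instead of silently falling out of sync with a hand-maintained IN-list.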



phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 2620a80c1 -> 4b99c632c


PHOENIX-2067 Sort order incorrect for variable length DESC columns


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4b99c632
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4b99c632
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4b99c632

Branch: refs/heads/master
Commit: 4b99c632c5e40251451e69fbe6d108f51e549e9e
Parents: 2620a80
Author: James Taylor 
Authored: Tue Jul 14 13:40:58 2015 -0700
Committer: James Taylor 
Committed: Tue Jul 14 13:40:58 2015 -0700

--
 .../org/apache/phoenix/util/UpgradeUtil.java | 19 ---
 1 file changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4b99c632/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index e59ea98..0ad6b9d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -80,12 +80,11 @@ import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDecimal;
-import org.apache.phoenix.schema.types.PDecimalArray;
 import org.apache.phoenix.schema.types.PInteger;
 import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.schema.types.PVarchar;
-import org.apache.phoenix.schema.types.PVarcharArray;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -878,6 +877,20 @@ public class UpgradeUtil {
 }
 return otherTables;
 }
+
+// Return all types that are not fixed width that may need upgrading due to PHOENIX-2067
+// We exclude VARBINARY as we no longer support DESC for it.
+private static String getAffectedDataTypes() {
+StringBuilder buf = new StringBuilder("(" + PVarchar.INSTANCE.getSqlType() + "," + PDecimal.INSTANCE.getSqlType() + ",");
+for (PDataType type : PDataType.values()) {
+if (type.isArrayType()) {
+buf.append(type.getSqlType());
+buf.append(',');
+}
+}
+buf.setCharAt(buf.length()-1, ')');
+return buf.toString();
+}
 /**
  * Identify the tables that need to be upgraded due to PHOENIX-2067
  */
@@ -890,7 +903,7 @@ public class UpgradeUtil {
 "WHERE COLUMN_NAME IS NOT NULL\n" + 
 "AND COLUMN_FAMILY IS NULL\n" + 
 "AND SORT_ORDER = " + SortOrder.DESC.getSystemValue() + "\n" + 
-"AND DATA_TYPE IN (" + PVarchar.INSTANCE.getSqlType() + "," + PDecimal.INSTANCE.getSqlType() + "," + PVarcharArray.INSTANCE.getSqlType() + "," + PDecimalArray.INSTANCE.getSqlType() + ")\n" +
+"AND DATA_TYPE IN " + getAffectedDataTypes() + "\n" +
 "GROUP BY TENANT_ID,TABLE_SCHEM,TABLE_NAME");
 Set physicalTables = Sets.newHashSetWithExpectedSize(1024);
 List remainingTableNames = addPhysicalTables(conn, rs, PTableType.INDEX, physicalTables);



Apache-Phoenix | Master | Build Successful

2015-07-14 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[jtaylor] PHOENIX-2067 Sort order incorrect for variable length DESC columns



Build times for last couple of runs. Latest build time is the rightmost | Legend blue: normal, red: test failure, gray: timeout


[3/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/b31608f9/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
index 60d2020..2c91dc5 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
@@ -47,1060 +47,1036 @@ import com.google.common.primitives.Longs;
  */
 public abstract class PDataType implements DataType, 
Comparable> {
 
-  private final String sqlTypeName;
-  private final int sqlType;
-  private final Class clazz;
-  private final byte[] clazzNameBytes;
-  private final byte[] sqlTypeNameBytes;
-  private final PDataCodec codec;
-  private final int ordinal;
-
-  protected PDataType(String sqlTypeName, int sqlType, Class clazz, PDataCodec 
codec, int ordinal) {
-this.sqlTypeName = sqlTypeName;
-this.sqlType = sqlType;
-this.clazz = clazz;
-this.clazzNameBytes = Bytes.toBytes(clazz.getName());
-this.sqlTypeNameBytes = Bytes.toBytes(sqlTypeName);
-this.codec = codec;
-this.ordinal = ordinal;
-  }
-
-  @Deprecated
-  public static PDataType[] values() {
-return PDataTypeFactory.getInstance().getOrderedTypes();
-  }
-
-  @Deprecated
-  public int ordinal() {
-return ordinal;
-  }
-
-  @Override
-  public Class encodedClass() {
-return getJavaClass();
-  }
-
-  public boolean isCastableTo(PDataType targetType) {
-return isComparableTo(targetType);
-  }
-
-  public final PDataCodec getCodec() {
-return codec;
-  }
-
-  public boolean isBytesComparableWith(PDataType otherType) {
-return this == otherType
-|| this.getClass() == PVarbinary.class
-|| otherType == PVarbinary.INSTANCE
-|| this.getClass() == PBinary.class
-|| otherType == PBinary.INSTANCE;
-  }
-
-  public int estimateByteSize(Object o) {
-if (isFixedWidth()) {
-  return getByteSize();
-}
-if (isArrayType()) {
-  PhoenixArray array = (PhoenixArray) o;
-  int noOfElements = array.numElements;
-  int totalVarSize = 0;
-  for (int i = 0; i < noOfElements; i++) {
-totalVarSize += array.estimateByteSize(i);
-  }
-  return totalVarSize;
-}
-// Non fixed width types must override this
-throw new UnsupportedOperationException();
-  }
-
-  public Integer getMaxLength(Object o) {
-return null;
-  }
-
-  public Integer getScale(Object o) {
-return null;
-  }
-
-  /**
-   * Estimate the byte size from the type length. For example, for char, byte size would be the
-   * same as length. For decimal, byte size would have no correlation with the length.
-   */
-  public Integer estimateByteSizeFromLength(Integer length) {
-if (isFixedWidth()) {
-  return getByteSize();
-}
-if (isArrayType()) {
-  return null;
-}
-// If not fixed width, default to say the byte size is the same as length.
-return length;
-  }
-
-  public final String getSqlTypeName() {
-return sqlTypeName;
-  }
-
-  public final int getSqlType() {
-return sqlType;
-  }
-
-  public final Class getJavaClass() {
-return clazz;
-  }
-
-  public boolean isArrayType() {
-return false;
-  }
-
-  public final int compareTo(byte[] lhs, int lhsOffset, int lhsLength, SortOrder lhsSortOrder,
-  byte[] rhs, int rhsOffset, int rhsLength, SortOrder rhsSortOrder,
-  PDataType rhsType) {
-Preconditions.checkNotNull(lhsSortOrder);
-Preconditions.checkNotNull(rhsSortOrder);
-if (this.isBytesComparableWith(rhsType)) { // directly compare the bytes
-  return compareTo(lhs, lhsOffset, lhsLength, lhsSortOrder, rhs, rhsOffset, rhsLength,
-  rhsSortOrder);
-}
-PDataCodec lhsCodec = this.getCodec();
-if (lhsCodec == null) { // no lhs native type representation, so convert rhsType to bytes representation of lhsType
-  byte[] rhsConverted =
-  this.toBytes(this.toObject(rhs, rhsOffset, rhsLength, rhsType, rhsSortOrder));
-  if (rhsSortOrder == SortOrder.DESC) {
-rhsSortOrder = SortOrder.ASC;
-  }
-  if (lhsSortOrder == SortOrder.DESC) {
-lhs = SortOrder.invert(lhs, lhsOffset, new byte[lhsLength], 0, lhsLength);
-  }
-  return Bytes.compareTo(lhs, lhsOffset, lhsLength, rhsConverted, 0, rhsConverted.length);
-}
-PDataCodec rhsCodec = rhsType.getCodec();
-if (rhsCodec == null) {
-  byte[] lhsConverted =
-  rhsType.toBytes(rhsType.toObject(lhs, lhsOffset, lhsLength, this, lhsSortOrder));
-  if (lhsSortOrder == SortOrder.DESC) {
-lhsSortOrder = SortOrder.ASC;
-  }
-  if (rhsSortOrder == SortOrder.DESC) {
-rhs = SortOrder.invert(rhs, rhsOffset, new byte[rhsLength], 0, rhsLength);
-  }
-  return By
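The compareTo() hunk above normalizes a DESC-encoded operand back to ASC (via SortOrder.invert) before handing both sides to a plain byte comparison. Phoenix's inversion amounts to flipping every bit of each byte; a small sketch under that assumption — the helper below is hypothetical, not the real SortOrder API:

```java
public class DescInvertSketch {
    // One's-complement inversion over a byte range, as a DESC column
    // encoding would apply it: flipping every bit reverses the unsigned
    // byte-wise sort order of the array.
    static byte[] invert(byte[] src, int offset, int length) {
        byte[] dst = new byte[length];
        for (int i = 0; i < length; i++) {
            dst[i] = (byte) ~src[offset + i]; // flip all 8 bits
        }
        return dst;
    }

    public static void main(String[] args) {
        byte[] asc = { 'a', 'b' };                  // ASC-encoded bytes
        byte[] desc = invert(asc, 0, asc.length);   // DESC-encoded form
        // Inverting twice restores the original, which is what lets a
        // comparison normalize both operands to ASC before comparing.
        byte[] back = invert(desc, 0, desc.length);
        System.out.println(java.util.Arrays.equals(asc, back)); // prints true
    }
}
```

Because inversion is an involution, either side of a comparison can be flipped to a common sort order without losing information.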

[2/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/b31608f9/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
index 764401c..a07418c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
@@ -17,93 +17,78 @@
  */
 package org.apache.phoenix.schema.types;
 
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.phoenix.schema.SortOrder;
-
-import java.sql.Types;
 import java.sql.Date;
 
-public class PDateArray extends PArrayDataType {
-
-  public static final PDateArray INSTANCE = new PDateArray();
-
-  private PDateArray() {
-super("DATE ARRAY", PDataType.ARRAY_TYPE_BASE + PDate.INSTANCE.getSqlType(), PhoenixArray.class,
-null, 40);
-  }
+import org.apache.phoenix.schema.SortOrder;
 
-  @Override
-  public boolean isArrayType() {
-return true;
-  }
+public class PDateArray extends PArrayDataType {
 
-  @Override
-  public boolean isFixedWidth() {
-return false;
-  }
+public static final PDateArray INSTANCE = new PDateArray();
 
-  @Override
-  public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
-return compareTo(lhs, rhs);
-  }
+private PDateArray() {
+super("DATE ARRAY", PDataType.ARRAY_TYPE_BASE + PDate.INSTANCE.getSqlType(), PhoenixArray.class,
+null, 40);
+}
 
-  @Override
-  public Integer getByteSize() {
-return null;
-  }
+@Override
+public boolean isArrayType() {
+return true;
+}
 
-  @Override
-  public byte[] toBytes(Object object) {
-return toBytes(object, SortOrder.ASC);
-  }
+@Override
+public boolean isFixedWidth() {
+return false;
+}
 
-  @Override
-  public byte[] toBytes(Object object, SortOrder sortOrder) {
-return toBytes(object, PDate.INSTANCE, sortOrder);
-  }
+@Override
+public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
+return compareTo(lhs, rhs);
+}
 
-  @Override
-  public Object toObject(byte[] bytes, int offset, int length,
-  PDataType actualType, SortOrder sortOrder, Integer maxLength, Integer scale) {
-return toObject(bytes, offset, length, PDate.INSTANCE, sortOrder, maxLength, scale,
-PDate.INSTANCE);
-  }
+@Override
+public Integer getByteSize() {
+return null;
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType) {
-return isCoercibleTo(targetType, this);
-  }
+@Override
+public byte[] toBytes(Object object) {
+return toBytes(object, SortOrder.ASC);
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType, Object value) {
-if (value == null) {
-  return true;
+@Override
+public byte[] toBytes(Object object, SortOrder sortOrder) {
+return toBytes(object, PDate.INSTANCE, sortOrder);
 }
-PhoenixArray pArr = (PhoenixArray) value;
-Object[] dateArr = (Object[]) pArr.array;
-for (Object i : dateArr) {
-  if (!super.isCoercibleTo(PDate.INSTANCE, i)) {
-return false;
-  }
+
+@Override
+public Object toObject(byte[] bytes, int offset, int length,
+PDataType actualType, SortOrder sortOrder, Integer maxLength, Integer scale) {
+return toObject(bytes, offset, length, PDate.INSTANCE, sortOrder, maxLength, scale,
+PDate.INSTANCE);
 }
-return true;
-  }
 
-  @Override
-  public int getResultSetSqlType() {
-return Types.ARRAY;
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType) {
+return isCoercibleTo(targetType, this);
+}
 
-  @Override
-  public void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType actualType,
-  Integer maxLength, Integer scale, SortOrder actualModifer, Integer desiredMaxLength,
-  Integer desiredScale,SortOrder desiredModifier) {
-coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, desiredScale,
-this, actualModifer, desiredModifier);
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType, Object value) {
+if (value == null) {
+return true;
+}
+PhoenixArray pArr = (PhoenixArray) value;
+Object[] dateArr = (Object[]) pArr.array;
+for (Object i : dateArr) {
+if (!super.isCoercibleTo(PDate.INSTANCE, i)) {
+return false;
+}
+}
+return true;
+}
 
-  @Override
-  public Object getSampleValue(Integer maxLength, Integer arrayLength) {
-return getSampleValue(PDate.INSTANCE, arrayLength, maxLength);
-  }
+@Override
+public Object getSampleValue(Integer maxLength, Integer arrayLength) 

[7/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
PHOENIX-2067 Sort order incorrect for variable length DESC columns

Conflicts:

phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java

phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java

phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java

Conflicts:

phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java

phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b31608f9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b31608f9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b31608f9

Branch: refs/heads/4.x-HBase-1.0
Commit: b31608f968f5cc3fd38768eac6f42a07a0d7485e
Parents: 27d78b6
Author: James Taylor 
Authored: Mon Jul 13 11:17:37 2015 -0700
Committer: James Taylor 
Committed: Tue Jul 14 11:20:46 2015 -0700

--
 dev/eclipse_prefs_phoenix.epf   |2 +-
 .../org/apache/phoenix/end2end/ArrayIT.java |   59 +
 .../org/apache/phoenix/end2end/IsNullIT.java|   52 +-
 .../apache/phoenix/end2end/LpadFunctionIT.java  |   24 +
 .../apache/phoenix/end2end/ReverseScanIT.java   |   30 +
 .../phoenix/end2end/RowValueConstructorIT.java  |7 +-
 .../apache/phoenix/end2end/SortOrderFIT.java|  563 -
 .../org/apache/phoenix/end2end/SortOrderIT.java |  572 +
 .../apache/phoenix/compile/FromCompiler.java|3 +-
 .../apache/phoenix/compile/JoinCompiler.java|8 +-
 .../apache/phoenix/compile/OrderByCompiler.java |4 +-
 .../phoenix/compile/OrderPreservingTracker.java |7 +-
 .../org/apache/phoenix/compile/ScanRanges.java  |5 +-
 .../compile/TupleProjectionCompiler.java|4 +-
 .../apache/phoenix/compile/UnionCompiler.java   |5 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   16 +-
 .../apache/phoenix/compile/WhereOptimizer.java  |   53 +-
 .../coprocessor/BaseScannerRegionObserver.java  |4 +-
 .../coprocessor/MetaDataEndpointImpl.java   |   73 +-
 .../UngroupedAggregateRegionObserver.java   |  125 +-
 .../coprocessor/generated/PTableProtos.java |  105 +-
 .../phoenix/exception/SQLExceptionCode.java |1 +
 .../apache/phoenix/execute/BaseQueryPlan.java   |9 +-
 .../DescVarLengthFastByteComparisons.java   |  219 ++
 .../expression/ArrayConstructorExpression.java  |2 +-
 .../phoenix/expression/OrderByExpression.java   |   13 +-
 .../RowValueConstructorExpression.java  |8 +-
 .../function/ArrayConcatFunction.java   |   11 +-
 .../function/ArrayModifierFunction.java |3 +-
 .../expression/function/LpadFunction.java   |8 +-
 .../expression/util/regex/JONIPattern.java  |5 +-
 .../apache/phoenix/filter/SkipScanFilter.java   |3 +-
 .../apache/phoenix/index/IndexMaintainer.java   |  127 +-
 .../phoenix/iterate/BaseResultIterators.java|  109 +-
 .../phoenix/iterate/OrderedResultIterator.java  |   52 +-
 .../apache/phoenix/jdbc/PhoenixConnection.java  |   28 +-
 .../query/ConnectionQueryServicesImpl.java  |   17 +-
 .../java/org/apache/phoenix/query/KeyRange.java |   14 -
 .../apache/phoenix/query/QueryConstants.java|3 +
 .../apache/phoenix/schema/DelegateTable.java|5 +
 .../apache/phoenix/schema/MetaDataClient.java   |   31 +-
 .../java/org/apache/phoenix/schema/PTable.java  |9 +
 .../org/apache/phoenix/schema/PTableImpl.java   |   78 +-
 .../org/apache/phoenix/schema/RowKeySchema.java |   44 +-
 .../phoenix/schema/RowKeyValueAccessor.java |   12 +-
 .../org/apache/phoenix/schema/ValueSchema.java  |   30 +-
 .../phoenix/schema/stats/StatisticsUtil.java|4 +-
 .../phoenix/schema/types/PArrayDataType.java|  682 +++---
 .../phoenix/schema/types/PBinaryArray.java  |  122 +-
 .../phoenix/schema/types/PBooleanArray.java |  112 +-
 .../apache/phoenix/schema/types/PCharArray.java |  128 +-
 .../apache/phoenix/schema/types/PDataType.java  | 2037 +-
 .../apache/phoenix/schema/types/PDateArray.java |  131 +-
 .../phoenix/schema/types/PDecimalArray.java |  126 +-
 .../phoenix/schema/types/PDoubleArray.java  |  128 +-
 .../phoenix/schema/types/PFloatArray.java   |  130 +-
 .../phoenix/schema/types/PIntegerArray.java |  130 +-
 .../apache/phoenix/schema/types/PLongArray.java |  130 +-
 .../phoenix/schema/types/PSmallintArray.java|  130 +-
 .../apache/phoenix/schema/types/PTimeArray.java |  133 +-
 .../phoenix/schema/types/PTimestampArray.java   |  132 +-
 .../phoenix/schema/types/PTinyintArray.java |  130 +-
 .../schema/types/PUnsignedDateArray.java|  128 +-
 .../schema/types/PUnsignedDoubleArray.java  |  136 +-
 .../s

[6/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/b31608f9/phoenix-core/src/main/java/org/apache/phoenix/compile/UnionCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UnionCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UnionCompiler.java
index 269232e..942e244 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UnionCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UnionCompiler.java
@@ -80,8 +80,9 @@ public class UnionCompiler {
 }
 Long scn = statement.getConnection().getSCN();
 PTable tempTable = PTableImpl.makePTable(statement.getConnection().getTenantId(), UNION_SCHEMA_NAME, UNION_TABLE_NAME, 
-PTableType.SUBQUERY, null, HConstants.LATEST_TIMESTAMP, scn == null ? HConstants.LATEST_TIMESTAMP : scn, null, null, projectedColumns, null, null, null,
-true, null, null, null, true, true, true, null, null, null);
+PTableType.SUBQUERY, null, HConstants.LATEST_TIMESTAMP, scn == null ? HConstants.LATEST_TIMESTAMP : scn, null, null,
+projectedColumns, null, null, null,
+true, null, null, null, true, true, true, null, null, null, false);
 TableRef tableRef = new TableRef(null, tempTable, 0, false);
 return tableRef;
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/b31608f9/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index 7b39a28..e12f5a4 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -150,8 +150,10 @@ public class UpsertCompiler {
 
SQLExceptionCode.DATA_EXCEEDS_MAX_CAPACITY).setColumnName(column.getName().getString())
 .setMessage("value=" + column.getDataType().toStringLiteral(ptr, null)).build()
 .buildException(); }
-column.getDataType().coerceBytes(ptr, value, column.getDataType(), precision, scale,
-SortOrder.getDefault(), column.getMaxLength(), column.getScale(), column.getSortOrder());
+column.getDataType().coerceBytes(ptr, value, column.getDataType(), 
+precision, scale, SortOrder.getDefault(), 
+column.getMaxLength(), column.getScale(), column.getSortOrder(),
+table.rowKeyOrderOptimizable());
 values[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
 }
 setValues(values, pkSlotIndexes, columnIndexes, table, 
mutation, statement);
@@ -772,6 +774,7 @@ public class UpsertCompiler {
 final SequenceManager sequenceManager = 
context.getSequenceManager();
 // Next evaluate all the expressions
 int nodeIndex = nodeIndexOffset;
+PTable table = tableRef.getTable();
 Tuple tuple = sequenceManager.getSequenceCount() == 0 ? null :
 sequenceManager.newSequenceTuple(null);
 for (Expression constantExpression : constantExpressions) {
@@ -793,9 +796,10 @@ public class UpsertCompiler {
 .setMessage("value=" + constantExpression.toString()).build().buildException();
 }
 }
-column.getDataType().coerceBytes(ptr, value,
-constantExpression.getDataType(), constantExpression.getMaxLength(), constantExpression.getScale(), constantExpression.getSortOrder(),
-column.getMaxLength(), column.getScale(),column.getSortOrder());
+column.getDataType().coerceBytes(ptr, value, constantExpression.getDataType(), 
+constantExpression.getMaxLength(), constantExpression.getScale(), constantExpression.getSortOrder(),
+column.getMaxLength(), column.getScale(),column.getSortOrder(),
+table.rowKeyOrderOptimizable());
 if (overlapViewColumns.contains(column) && 
Bytes.compareTo(ptr.get(), ptr.getOffset(), ptr.getLength(), 
column.getViewConstant(), 0, column.getViewConstant().length-1) != 0) {
 throw new SQLExceptionInfo.Builder(
 SQLExceptionCode.CANNOT_UPDATE_VIEW_COLUMN)
@@ -814,7 +818,7 @@ public class UpsertCompiler {
 }
 }
 Map mutation = 
Maps.newHashMapWi

[4/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/b31608f9/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
index 4e32cc0..dd11569 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.schema.types;
 import java.io.DataOutputStream;
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.sql.Types;
 import java.text.Format;
 import java.util.LinkedList;
 import java.util.List;
@@ -34,61 +35,88 @@ import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.ValueSchema;
 import org.apache.phoenix.schema.tuple.Tuple;
 import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TrustedByteArrayOutputStream;
 
 import com.google.common.base.Objects;
 import com.google.common.base.Preconditions;
 
 /**
- * The datatype for PColummns that are Arrays. Any variable length array would follow the below order. 
- * Every element would be seperated by a seperator byte '0'. Null elements are counted and once a first 
- * non null element appears we write the count of the nulls prefixed with a seperator byte.
- * Trailing nulls are not taken into account. The last non null element is followed by two seperator bytes. 
- * For eg a, b, null, null, c, null -> 65 0 66 0 0 2 67 0 0 0 
- * a null null null b c null d -> 65 0 0 3 66 0 67 0 0 1 68 0 0 0.
- * The reason we use this serialization format is to allow the
- * byte array of arrays of the same type to be directly comparable against each other. 
- * This prevents a costly deserialization on compare and allows an array column to be used as the last column in a primary key constraint.
+ * The datatype for PColummns that are Arrays. Any variable length array would follow the below order. Every element
+ * would be seperated by a seperator byte '0'. Null elements are counted and once a first non null element appears we
+ * write the count of the nulls prefixed with a seperator byte. Trailing nulls are not taken into account. The last non
+ * null element is followed by two seperator bytes. For eg a, b, null, null, c, null -> 65 0 66 0 0 2 67 0 0 0 a null
+ * null null b c null d -> 65 0 0 3 66 0 67 0 0 1 68 0 0 0. The reason we use this serialization format is to allow the
+ * byte array of arrays of the same type to be directly comparable against each other. This prevents a costly
+ * deserialization on compare and allows an array column to be used as the last column in a primary key constraint.
  */
 public abstract class PArrayDataType extends PDataType {
 
+@Override
+public final int getResultSetSqlType() {
+  return Types.ARRAY;
+}
+
+@Override
+public final void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType actualType,
+Integer maxLength, Integer scale, SortOrder actualModifer, Integer desiredMaxLength,
+Integer desiredScale, SortOrder desiredModifier, boolean expectedRowKeyOrderOptimizable) {
+  coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, desiredScale,
+  this, actualModifer, desiredModifier, expectedRowKeyOrderOptimizable);
+}
+
+@Override
+public final void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType actualType,
+Integer maxLength, Integer scale, SortOrder actualModifer, Integer desiredMaxLength,
+Integer desiredScale, SortOrder desiredModifier) {
+  coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, desiredScale,
+  this, actualModifer, desiredModifier, true);
+}
+
 public static final byte ARRAY_SERIALIZATION_VERSION = 1;
 
-  protected PArrayDataType(String sqlTypeName, int sqlType, Class clazz, PDataCodec codec, int ordinal) {
-super(sqlTypeName, sqlType, clazz, codec, ordinal);
-  }
+protected PArrayDataType(String sqlTypeName, int sqlType, Class clazz, PDataCodec codec, int ordinal) {
+super(sqlTypeName, sqlType, clazz, codec, ordinal);
+}
+
+private static byte getSeparatorByte(boolean rowKeyOrderOptimizable, SortOrder sortOrder) {
+return SchemaUtil.getSeparatorByte(rowKeyOrderOptimizable, false, sortOrder);
+}
 
-  public byte[] toBytes(Object object, PDataType baseType, SortOrder sortOrder) {
-   if(object == null) {
-   throw new ConstraintViolationException(this + " may not be null");
-   }
-   PhoenixArray arr = ((PhoenixArray)object);
+public byte[] toBytes(Object object, PDataType baseType, SortOrder sortOrder) 
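The rewritten PArrayDataType javadoc in this patch spells out the variable-length array layout: each non-null element followed by a '0' separator byte, each interior run of nulls encoded as a separator plus a count, trailing nulls dropped, and two extra separators after the last non-null element. A hypothetical re-implementation of that description (not Phoenix's actual serializer) that reproduces the javadoc's own examples:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class VarArrayEncodingSketch {
    // Encode a String[] per the layout described in the PArrayDataType
    // javadoc. Sketch only: assumes single-byte (ASCII) element values.
    static byte[] encode(String[] elems) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Trailing nulls are not serialized at all.
        int end = elems.length;
        while (end > 0 && elems[end - 1] == null) end--;
        int nulls = 0;
        for (int i = 0; i < end; i++) {
            if (elems[i] == null) {
                nulls++;                 // accumulate an interior null run
            } else {
                if (nulls > 0) {         // flush the run: separator + count
                    out.write(0);
                    out.write(nulls);
                    nulls = 0;
                }
                for (char ch : elems[i].toCharArray()) out.write(ch);
                out.write(0);            // element separator
            }
        }
        out.write(0);                    // two extra separators after the
        out.write(0);                    // last non-null element
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] encoded = encode(new String[] { "a", "b", null, null, "c", null });
        // Matches the javadoc example: a, b, null, null, c, null -> 65 0 66 0 0 2 67 0 0 0
        System.out.println(Arrays.toString(encoded));
    }
}
```

Keeping the encoding byte-comparable is what lets two serialized arrays of the same type be compared without deserializing, as the javadoc notes.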

[5/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/b31608f9/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java b/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
index 0956753..a12f633 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
@@ -59,14 +59,10 @@ import 
org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixStatement;
-import org.apache.phoenix.parse.AndParseNode;
-import org.apache.phoenix.parse.BaseParseNodeVisitor;
-import org.apache.phoenix.parse.BooleanParseNodeVisitor;
 import org.apache.phoenix.parse.FunctionParseNode;
 import org.apache.phoenix.parse.ParseNode;
 import org.apache.phoenix.parse.SQLParser;
 import org.apache.phoenix.parse.StatelessTraverseAllParseNodeVisitor;
-import org.apache.phoenix.parse.TraverseAllParseNodeVisitor;
 import org.apache.phoenix.parse.UDFParseNode;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.ColumnNotFoundException;
@@ -265,6 +261,7 @@ public class IndexMaintainer implements Writable, 
Iterable {
 private int[] dataPkPosition;
 private int maxTrailingNulls;
 private ColumnReference dataEmptyKeyValueRef;
+private boolean rowKeyOrderOptimizable;
 
 private IndexMaintainer(RowKeySchema dataRowKeySchema, boolean isDataTableSalted) {
 this.dataRowKeySchema = dataRowKeySchema;
@@ -273,6 +270,7 @@ public class IndexMaintainer implements Writable, 
Iterable {
 
 private IndexMaintainer(PTable dataTable, PTable index, PhoenixConnection connection) {
 this(dataTable.getRowKeySchema(), dataTable.getBucketNum() != null);
+this.rowKeyOrderOptimizable = index.rowKeyOrderOptimizable();
 this.isMultiTenant = dataTable.isMultiTenant();
 this.viewIndexId = index.getViewIndexId() == null ? null : MetaDataUtil.getViewIndexIdDataType().toBytes(index.getViewIndexId());
 this.isLocalIndex = index.getIndexType() == IndexType.LOCAL;
@@ -434,7 +432,7 @@ public class IndexMaintainer implements Writable, 
Iterable {
 dataRowKeySchema.next(ptr, dataPosOffset, maxRowKeyOffset);
 output.write(ptr.get(), ptr.getOffset(), ptr.getLength());
 if (!dataRowKeySchema.getField(dataPosOffset).getDataType().isFixedWidth()) {
-output.writeByte(QueryConstants.SEPARATOR_BYTE);
+output.writeByte(SchemaUtil.getSeparatorByte(rowKeyOrderOptimizable, ptr.getLength()==0, dataRowKeySchema.getField(dataPosOffset)));
 }
 dataPosOffset++;
 }
@@ -481,21 +479,22 @@ public class IndexMaintainer implements Writable, 
Iterable {
 }
 boolean isDataColumnInverted = dataSortOrder != SortOrder.ASC;
 PDataType indexColumnType = 
IndexUtil.getIndexColumnDataType(isNullable, dataColumnType);
-boolean isBytesComparable = dataColumnType.isBytesComparableWith(indexColumnType) ;
-if (isBytesComparable && isDataColumnInverted == descIndexColumnBitSet.get(i)) {
+boolean isBytesComparable = dataColumnType.isBytesComparableWith(indexColumnType);
+boolean isIndexColumnDesc = descIndexColumnBitSet.get(i);
+if (isBytesComparable && isDataColumnInverted == isIndexColumnDesc) {
 output.write(ptr.get(), ptr.getOffset(), ptr.getLength());
 } else {
 if (!isBytesComparable)  {
 indexColumnType.coerceBytes(ptr, dataColumnType, dataSortOrder, SortOrder.getDefault());
 }
-if (descIndexColumnBitSet.get(i) != isDataColumnInverted) {
+if (isDataColumnInverted != isIndexColumnDesc) {
 writeInverted(ptr.get(), ptr.getOffset(), ptr.getLength(), output);
 } else {
 output.write(ptr.get(), ptr.getOffset(), ptr.getLength());
 }
 }
 if (!indexColumnType.isFixedWidth()) {
-output.writeByte(QueryConstants.SEPARATOR_BYTE);
+
output.writeByte(SchemaUtil.getSeparatorByte(rowKeyOrderOptimizable, 
ptr.getLength() == 0, isIndexColumnDesc ? SortOrder.DESC : SortOrder.ASC));
 }
 }
 int length = stream.size();
@@ -545,7 +544,7 @@ public class IndexMaintainer implements Writable, 
Iterable<ColumnReference> {
 indexRowKeySchema.next(ptr, indexPosOffset, maxRowKeyOffset);

[1/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.0 27d78b653 -> b31608f96


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b31608f9/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
index 1159b5c..3407310 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
@@ -17,94 +17,80 @@
  */
 package org.apache.phoenix.schema.types;
 
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.phoenix.schema.SortOrder;
+import java.sql.Timestamp;
 
-import java.sql.*;
+import org.apache.phoenix.schema.SortOrder;
 
 public class PUnsignedTimestampArray extends PArrayDataType<Timestamp> {
 
-  public static final PUnsignedTimestampArray INSTANCE = new 
PUnsignedTimestampArray();
-
-  private PUnsignedTimestampArray() {
-super("UNSIGNED_TIMESTAMP ARRAY",
-PDataType.ARRAY_TYPE_BASE + PUnsignedTimestamp.INSTANCE.getSqlType(), 
PhoenixArray.class,
-null, 37);
-  }
-
-  @Override
-  public boolean isArrayType() {
-return true;
-  }
-
-  @Override
-  public boolean isFixedWidth() {
-return false;
-  }
+public static final PUnsignedTimestampArray INSTANCE = new 
PUnsignedTimestampArray();
 
-  @Override
-  public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
-return compareTo(lhs, rhs);
-  }
+private PUnsignedTimestampArray() {
+super("UNSIGNED_TIMESTAMP ARRAY",
+PDataType.ARRAY_TYPE_BASE + 
PUnsignedTimestamp.INSTANCE.getSqlType(), PhoenixArray.class,
+null, 37);
+}
 
-  @Override
-  public Integer getByteSize() {
-return null;
-  }
+@Override
+public boolean isArrayType() {
+return true;
+}
 
-  @Override
-  public byte[] toBytes(Object object) {
-return toBytes(object, SortOrder.ASC);
-  }
+@Override
+public boolean isFixedWidth() {
+return false;
+}
 
-  @Override
-  public byte[] toBytes(Object object, SortOrder sortOrder) {
-return toBytes(object, PUnsignedTimestamp.INSTANCE, sortOrder);
-  }
+@Override
+public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
+return compareTo(lhs, rhs);
+}
 
-  @Override
-  public Object toObject(byte[] bytes, int offset, int length,
-  PDataType actualType, SortOrder sortOrder, Integer maxLength,
-  Integer scale) {
-return toObject(bytes, offset, length, PUnsignedTimestamp.INSTANCE, 
sortOrder,
-maxLength, scale, PUnsignedTimestamp.INSTANCE);
-  }
+@Override
+public Integer getByteSize() {
+return null;
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType) {
-return isCoercibleTo(targetType, this);
-  }
+@Override
+public byte[] toBytes(Object object) {
+return toBytes(object, SortOrder.ASC);
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType, Object value) {
-if (value == null) {
-  return true;
+@Override
+public byte[] toBytes(Object object, SortOrder sortOrder) {
+return toBytes(object, PUnsignedTimestamp.INSTANCE, sortOrder);
 }
-PhoenixArray pArr = (PhoenixArray) value;
-Object[] timeStampArr = (Object[]) pArr.array;
-for (Object i : timeStampArr) {
-  if (!super.isCoercibleTo(PUnsignedTimestamp.INSTANCE, i)) {
-return false;
-  }
+
+@Override
+public Object toObject(byte[] bytes, int offset, int length,
+PDataType actualType, SortOrder sortOrder, Integer maxLength,
+Integer scale) {
+return toObject(bytes, offset, length, PUnsignedTimestamp.INSTANCE, 
sortOrder,
+maxLength, scale, PUnsignedTimestamp.INSTANCE);
 }
-return true;
-  }
 
-  @Override
-  public void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType 
actualType,
-  Integer maxLength, Integer scale, SortOrder actualModifer, Integer 
desiredMaxLength,
-  Integer desiredScale, SortOrder desiredModifier) {
-coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, 
desiredScale,
-this, actualModifer, desiredModifier);
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType) {
+return isCoercibleTo(targetType, this);
+}
 
-  @Override
-  public int getResultSetSqlType() {
-return Types.ARRAY;
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType, Object value) {
+if (value == null) {
+return true;
+}
+PhoenixArray pArr = (PhoenixArray) value;
+Object[] timeStampArr = (Object[]) pArr.array;
+for (Object i : t
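The `isCoercibleTo(PDataType, Object)` override above applies the element-wise rule the Phoenix array types share. A hedged, stand-alone restatement of that rule (class and parameter names are mine, not Phoenix's):

```java
import java.util.function.Predicate;

// Element-wise array coercibility, as in the override above: a null array
// is trivially coercible; otherwise every element must be coercible to the
// target's element type.
public class ArrayCoercibility {
    public static boolean isCoercible(Object[] elements,
                                      Predicate<Object> elementCoercible) {
        if (elements == null) {
            return true;
        }
        for (Object e : elements) {
            if (!elementCoercible.test(e)) {
                return false; // one bad element rejects the whole array
            }
        }
        return true;
    }
}
```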

[7/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
PHOENIX-2067 Sort order incorrect for variable length DESC columns

Conflicts:

phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java

phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java

phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/dfc1af7d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/dfc1af7d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/dfc1af7d

Branch: refs/heads/4.x-HBase-0.98
Commit: dfc1af7d98c7bca6e88a6e3a02d4a8d7c565b14b
Parents: 9d244e0
Author: James Taylor 
Authored: Mon Jul 13 11:17:37 2015 -0700
Committer: James Taylor 
Committed: Tue Jul 14 11:14:16 2015 -0700

--
 dev/eclipse_prefs_phoenix.epf   |2 +-
 .../org/apache/phoenix/end2end/ArrayIT.java |   59 +
 .../org/apache/phoenix/end2end/IsNullIT.java|   52 +-
 .../apache/phoenix/end2end/LpadFunctionIT.java  |   24 +
 .../apache/phoenix/end2end/ReverseScanIT.java   |   30 +
 .../phoenix/end2end/RowValueConstructorIT.java  |7 +-
 .../apache/phoenix/end2end/SortOrderFIT.java|  563 -
 .../org/apache/phoenix/end2end/SortOrderIT.java |  572 +
 .../apache/phoenix/compile/FromCompiler.java|3 +-
 .../apache/phoenix/compile/JoinCompiler.java|8 +-
 .../apache/phoenix/compile/OrderByCompiler.java |4 +-
 .../phoenix/compile/OrderPreservingTracker.java |7 +-
 .../org/apache/phoenix/compile/ScanRanges.java  |5 +-
 .../compile/TupleProjectionCompiler.java|4 +-
 .../apache/phoenix/compile/UnionCompiler.java   |5 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   16 +-
 .../apache/phoenix/compile/WhereOptimizer.java  |   53 +-
 .../coprocessor/BaseScannerRegionObserver.java  |4 +-
 .../coprocessor/MetaDataEndpointImpl.java   |   73 +-
 .../UngroupedAggregateRegionObserver.java   |  125 +-
 .../coprocessor/generated/PTableProtos.java |  105 +-
 .../phoenix/exception/SQLExceptionCode.java |1 +
 .../apache/phoenix/execute/BaseQueryPlan.java   |9 +-
 .../DescVarLengthFastByteComparisons.java   |  219 ++
 .../expression/ArrayConstructorExpression.java  |2 +-
 .../phoenix/expression/OrderByExpression.java   |   13 +-
 .../RowValueConstructorExpression.java  |8 +-
 .../function/ArrayConcatFunction.java   |   11 +-
 .../function/ArrayModifierFunction.java |3 +-
 .../expression/function/LpadFunction.java   |8 +-
 .../expression/util/regex/JONIPattern.java  |5 +-
 .../apache/phoenix/filter/SkipScanFilter.java   |3 +-
 .../apache/phoenix/index/IndexMaintainer.java   |  127 +-
 .../phoenix/iterate/BaseResultIterators.java|  109 +-
 .../phoenix/iterate/OrderedResultIterator.java  |   52 +-
 .../apache/phoenix/jdbc/PhoenixConnection.java  |   27 +-
 .../query/ConnectionQueryServicesImpl.java  |   17 +-
 .../java/org/apache/phoenix/query/KeyRange.java |   14 -
 .../apache/phoenix/query/QueryConstants.java|3 +
 .../apache/phoenix/schema/DelegateTable.java|5 +
 .../apache/phoenix/schema/MetaDataClient.java   |   31 +-
 .../java/org/apache/phoenix/schema/PTable.java  |9 +
 .../org/apache/phoenix/schema/PTableImpl.java   |   78 +-
 .../org/apache/phoenix/schema/RowKeySchema.java |   44 +-
 .../phoenix/schema/RowKeyValueAccessor.java |   12 +-
 .../org/apache/phoenix/schema/ValueSchema.java  |   30 +-
 .../phoenix/schema/stats/StatisticsUtil.java|4 +-
 .../phoenix/schema/types/PArrayDataType.java|  682 +++---
 .../phoenix/schema/types/PBinaryArray.java  |  122 +-
 .../phoenix/schema/types/PBooleanArray.java |  112 +-
 .../apache/phoenix/schema/types/PCharArray.java |  128 +-
 .../apache/phoenix/schema/types/PDataType.java  | 2037 +-
 .../apache/phoenix/schema/types/PDateArray.java |  131 +-
 .../phoenix/schema/types/PDecimalArray.java |  126 +-
 .../phoenix/schema/types/PDoubleArray.java  |  128 +-
 .../phoenix/schema/types/PFloatArray.java   |  130 +-
 .../phoenix/schema/types/PIntegerArray.java |  130 +-
 .../apache/phoenix/schema/types/PLongArray.java |  130 +-
 .../phoenix/schema/types/PSmallintArray.java|  130 +-
 .../apache/phoenix/schema/types/PTimeArray.java |  133 +-
 .../phoenix/schema/types/PTimestampArray.java   |  132 +-
 .../phoenix/schema/types/PTinyintArray.java |  130 +-
 .../schema/types/PUnsignedDateArray.java|  128 +-
 .../schema/types/PUnsignedDoubleArray.java  |  136 +-
 .../schema/types/PUnsignedFloatArray.java   |  130 +-
 .../phoenix/schema/types/PUnsignedIntArray.java |  130 +-
 .../schema/types/PUnsignedLongArray.java|  130 +-
 .../schema/type
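The commit message names the bug but not its mechanics. A self-contained demonstration of why a 0x00 terminator breaks DESC ordering for variable-length values (my reconstruction of the problem, not code from the patch): under DESC the value bytes are stored inverted, and because the 0x00 separator is smaller than any inverted data byte, a value sorts before its own extensions instead of after them; an 0xFF separator restores the intended order.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Demonstrates the PHOENIX-2067 ordering bug with plain byte arrays.
public class DescOrderDemo {
    // Invert bytes: the usual encoding for a DESC-sorted value.
    static byte[] invert(String s) {
        byte[] b = s.getBytes(StandardCharsets.US_ASCII);
        for (int i = 0; i < b.length; i++) {
            b[i] = (byte) ~b[i];
        }
        return b;
    }

    static byte[] terminate(byte[] v, byte separator) {
        byte[] out = Arrays.copyOf(v, v.length + 1);
        out[v.length] = separator;
        return out;
    }

    // Unsigned lexicographic comparison, as HBase compares row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // DESC order must place "ab" before "a" (the reverse of ASC).
    public static boolean sortsCorrectly(byte separator) {
        byte[] keyA  = terminate(invert("a"), separator);
        byte[] keyAb = terminate(invert("ab"), separator);
        return compare(keyAb, keyA) < 0;
    }
}
```

With the legacy `0x00` separator `sortsCorrectly` returns false; with `0xFF` it returns true.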

[6/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/dfc1af7d/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index 7b39a28..e12f5a4 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -150,8 +150,10 @@ public class UpsertCompiler {
 
SQLExceptionCode.DATA_EXCEEDS_MAX_CAPACITY).setColumnName(column.getName().getString())
 .setMessage("value=" + 
column.getDataType().toStringLiteral(ptr, null)).build()
 .buildException(); }
-column.getDataType().coerceBytes(ptr, value, 
column.getDataType(), precision, scale,
-SortOrder.getDefault(), column.getMaxLength(), 
column.getScale(), column.getSortOrder());
+column.getDataType().coerceBytes(ptr, value, 
column.getDataType(), 
+precision, scale, SortOrder.getDefault(), 
+column.getMaxLength(), column.getScale(), 
column.getSortOrder(),
+table.rowKeyOrderOptimizable());
 values[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
 }
 setValues(values, pkSlotIndexes, columnIndexes, table, 
mutation, statement);
@@ -772,6 +774,7 @@ public class UpsertCompiler {
 final SequenceManager sequenceManager = 
context.getSequenceManager();
 // Next evaluate all the expressions
 int nodeIndex = nodeIndexOffset;
+PTable table = tableRef.getTable();
 Tuple tuple = sequenceManager.getSequenceCount() == 0 ? null :
 sequenceManager.newSequenceTuple(null);
 for (Expression constantExpression : constantExpressions) {
@@ -793,9 +796,10 @@ public class UpsertCompiler {
 .setMessage("value=" + 
constantExpression.toString()).build().buildException();
 }
 }
-column.getDataType().coerceBytes(ptr, value,
-constantExpression.getDataType(), 
constantExpression.getMaxLength(), constantExpression.getScale(), 
constantExpression.getSortOrder(),
-column.getMaxLength(), 
column.getScale(),column.getSortOrder());
+column.getDataType().coerceBytes(ptr, value, 
constantExpression.getDataType(), 
+constantExpression.getMaxLength(), 
constantExpression.getScale(), constantExpression.getSortOrder(),
+column.getMaxLength(), 
column.getScale(),column.getSortOrder(),
+table.rowKeyOrderOptimizable());
 if (overlapViewColumns.contains(column) && 
Bytes.compareTo(ptr.get(), ptr.getOffset(), ptr.getLength(), 
column.getViewConstant(), 0, column.getViewConstant().length-1) != 0) {
 throw new SQLExceptionInfo.Builder(
 SQLExceptionCode.CANNOT_UPDATE_VIEW_COLUMN)
@@ -814,7 +818,7 @@ public class UpsertCompiler {
 }
 }
 Map mutation = 
Maps.newHashMapWithExpectedSize(1);
-setValues(values, pkSlotIndexes, columnIndexes, 
tableRef.getTable(), mutation, statement);
+setValues(values, pkSlotIndexes, columnIndexes, table, 
mutation, statement);
 return new MutationState(tableRef, mutation, 0, maxSize, 
connection);
 }
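Both hunks above thread `table.rowKeyOrderOptimizable()` into `coerceBytes`, since coercing a value into a row-key column may now have to choose the new DESC representation. Independent of that flag, the core of a sort-order coercion is byte inversion, which is its own inverse; a minimal sketch mirroring what `SortOrder.invert` does (this code is illustrative, not the Phoenix implementation):

```java
// Byte inversion maps an ASC-encoded value to its DESC encoding and back;
// applying it twice restores the original bytes.
public class SortOrderInvert {
    public static byte[] invert(byte[] src, int offset, int length) {
        byte[] out = new byte[length];
        for (int i = 0; i < length; i++) {
            out[i] = (byte) ~src[offset + i];
        }
        return out;
    }
}
```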
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/dfc1af7d/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index 0cbef11..332f293 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -61,7 +61,9 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.SortOrder;
+import org.apache.phoenix.schema.ValueSchema.Field;
 import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PArrayDataType;
 import org.apache.phoenix.schema.types.PChar;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PVarbinary;
@@ -194,8 +196,9 @@ public class WhereOptimizer {
 

[5/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/dfc1af7d/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
index 0956753..a12f633 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
@@ -59,14 +59,10 @@ import 
org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixStatement;
-import org.apache.phoenix.parse.AndParseNode;
-import org.apache.phoenix.parse.BaseParseNodeVisitor;
-import org.apache.phoenix.parse.BooleanParseNodeVisitor;
 import org.apache.phoenix.parse.FunctionParseNode;
 import org.apache.phoenix.parse.ParseNode;
 import org.apache.phoenix.parse.SQLParser;
 import org.apache.phoenix.parse.StatelessTraverseAllParseNodeVisitor;
-import org.apache.phoenix.parse.TraverseAllParseNodeVisitor;
 import org.apache.phoenix.parse.UDFParseNode;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.ColumnNotFoundException;
@@ -265,6 +261,7 @@ public class IndexMaintainer implements Writable, 
Iterable<ColumnReference> {
 private int[] dataPkPosition;
 private int maxTrailingNulls;
 private ColumnReference dataEmptyKeyValueRef;
+private boolean rowKeyOrderOptimizable;
 
 private IndexMaintainer(RowKeySchema dataRowKeySchema, boolean 
isDataTableSalted) {
 this.dataRowKeySchema = dataRowKeySchema;
@@ -273,6 +270,7 @@ public class IndexMaintainer implements Writable, 
Iterable<ColumnReference> {
 
 private IndexMaintainer(PTable dataTable, PTable index, PhoenixConnection 
connection) {
 this(dataTable.getRowKeySchema(), dataTable.getBucketNum() != null);
+this.rowKeyOrderOptimizable = index.rowKeyOrderOptimizable();
 this.isMultiTenant = dataTable.isMultiTenant();
 this.viewIndexId = index.getViewIndexId() == null ? null : 
MetaDataUtil.getViewIndexIdDataType().toBytes(index.getViewIndexId());
 this.isLocalIndex = index.getIndexType() == IndexType.LOCAL;
@@ -434,7 +432,7 @@ public class IndexMaintainer implements Writable, 
Iterable<ColumnReference> {
 dataRowKeySchema.next(ptr, dataPosOffset, maxRowKeyOffset);
 output.write(ptr.get(), ptr.getOffset(), ptr.getLength());
 if 
(!dataRowKeySchema.getField(dataPosOffset).getDataType().isFixedWidth()) {
-output.writeByte(QueryConstants.SEPARATOR_BYTE);
+
output.writeByte(SchemaUtil.getSeparatorByte(rowKeyOrderOptimizable, 
ptr.getLength()==0, dataRowKeySchema.getField(dataPosOffset)));
 }
 dataPosOffset++;
 }
@@ -481,21 +479,22 @@ public class IndexMaintainer implements Writable, 
Iterable<ColumnReference> {
 }
 boolean isDataColumnInverted = dataSortOrder != SortOrder.ASC;
 PDataType indexColumnType = 
IndexUtil.getIndexColumnDataType(isNullable, dataColumnType);
-boolean isBytesComparable = 
dataColumnType.isBytesComparableWith(indexColumnType) ;
-if (isBytesComparable && isDataColumnInverted == 
descIndexColumnBitSet.get(i)) {
+boolean isBytesComparable = 
dataColumnType.isBytesComparableWith(indexColumnType);
+boolean isIndexColumnDesc = descIndexColumnBitSet.get(i);
+if (isBytesComparable && isDataColumnInverted == 
isIndexColumnDesc) {
 output.write(ptr.get(), ptr.getOffset(), ptr.getLength());
 } else {
 if (!isBytesComparable)  {
 indexColumnType.coerceBytes(ptr, dataColumnType, 
dataSortOrder, SortOrder.getDefault());
 }
-if (descIndexColumnBitSet.get(i) != isDataColumnInverted) {
+if (isDataColumnInverted != isIndexColumnDesc) {
 writeInverted(ptr.get(), ptr.getOffset(), 
ptr.getLength(), output);
 } else {
 output.write(ptr.get(), ptr.getOffset(), 
ptr.getLength());
 }
 }
 if (!indexColumnType.isFixedWidth()) {
-output.writeByte(QueryConstants.SEPARATOR_BYTE);
+
output.writeByte(SchemaUtil.getSeparatorByte(rowKeyOrderOptimizable, 
ptr.getLength() == 0, isIndexColumnDesc ? SortOrder.DESC : SortOrder.ASC));
 }
 }
 int length = stream.size();
@@ -545,7 +544,7 @@ public class IndexMaintainer implements Writable, 
Iterable<ColumnReference> {
 indexRowKeySchema.next(ptr, indexPosOffset, maxRowKeyOffset);

[1/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 9d244e0d7 -> dfc1af7d9


http://git-wip-us.apache.org/repos/asf/phoenix/blob/dfc1af7d/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
index 1159b5c..3407310 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
@@ -17,94 +17,80 @@
  */
 package org.apache.phoenix.schema.types;
 
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.phoenix.schema.SortOrder;
+import java.sql.Timestamp;
 
-import java.sql.*;
+import org.apache.phoenix.schema.SortOrder;
 
 public class PUnsignedTimestampArray extends PArrayDataType<Timestamp> {
 
-  public static final PUnsignedTimestampArray INSTANCE = new 
PUnsignedTimestampArray();
-
-  private PUnsignedTimestampArray() {
-super("UNSIGNED_TIMESTAMP ARRAY",
-PDataType.ARRAY_TYPE_BASE + PUnsignedTimestamp.INSTANCE.getSqlType(), 
PhoenixArray.class,
-null, 37);
-  }
-
-  @Override
-  public boolean isArrayType() {
-return true;
-  }
-
-  @Override
-  public boolean isFixedWidth() {
-return false;
-  }
+public static final PUnsignedTimestampArray INSTANCE = new 
PUnsignedTimestampArray();
 
-  @Override
-  public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
-return compareTo(lhs, rhs);
-  }
+private PUnsignedTimestampArray() {
+super("UNSIGNED_TIMESTAMP ARRAY",
+PDataType.ARRAY_TYPE_BASE + 
PUnsignedTimestamp.INSTANCE.getSqlType(), PhoenixArray.class,
+null, 37);
+}
 
-  @Override
-  public Integer getByteSize() {
-return null;
-  }
+@Override
+public boolean isArrayType() {
+return true;
+}
 
-  @Override
-  public byte[] toBytes(Object object) {
-return toBytes(object, SortOrder.ASC);
-  }
+@Override
+public boolean isFixedWidth() {
+return false;
+}
 
-  @Override
-  public byte[] toBytes(Object object, SortOrder sortOrder) {
-return toBytes(object, PUnsignedTimestamp.INSTANCE, sortOrder);
-  }
+@Override
+public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
+return compareTo(lhs, rhs);
+}
 
-  @Override
-  public Object toObject(byte[] bytes, int offset, int length,
-  PDataType actualType, SortOrder sortOrder, Integer maxLength,
-  Integer scale) {
-return toObject(bytes, offset, length, PUnsignedTimestamp.INSTANCE, 
sortOrder,
-maxLength, scale, PUnsignedTimestamp.INSTANCE);
-  }
+@Override
+public Integer getByteSize() {
+return null;
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType) {
-return isCoercibleTo(targetType, this);
-  }
+@Override
+public byte[] toBytes(Object object) {
+return toBytes(object, SortOrder.ASC);
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType, Object value) {
-if (value == null) {
-  return true;
+@Override
+public byte[] toBytes(Object object, SortOrder sortOrder) {
+return toBytes(object, PUnsignedTimestamp.INSTANCE, sortOrder);
 }
-PhoenixArray pArr = (PhoenixArray) value;
-Object[] timeStampArr = (Object[]) pArr.array;
-for (Object i : timeStampArr) {
-  if (!super.isCoercibleTo(PUnsignedTimestamp.INSTANCE, i)) {
-return false;
-  }
+
+@Override
+public Object toObject(byte[] bytes, int offset, int length,
+PDataType actualType, SortOrder sortOrder, Integer maxLength,
+Integer scale) {
+return toObject(bytes, offset, length, PUnsignedTimestamp.INSTANCE, 
sortOrder,
+maxLength, scale, PUnsignedTimestamp.INSTANCE);
 }
-return true;
-  }
 
-  @Override
-  public void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType 
actualType,
-  Integer maxLength, Integer scale, SortOrder actualModifer, Integer 
desiredMaxLength,
-  Integer desiredScale, SortOrder desiredModifier) {
-coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, 
desiredScale,
-this, actualModifer, desiredModifier);
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType) {
+return isCoercibleTo(targetType, this);
+}
 
-  @Override
-  public int getResultSetSqlType() {
-return Types.ARRAY;
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType, Object value) {
+if (value == null) {
+return true;
+}
+PhoenixArray pArr = (PhoenixArray) value;
+Object[] timeStampArr = (Object[]) pArr.array;
+for (Object i : 

[2/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/dfc1af7d/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
index 764401c..a07418c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
@@ -17,93 +17,78 @@
  */
 package org.apache.phoenix.schema.types;
 
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.phoenix.schema.SortOrder;
-
-import java.sql.Types;
 import java.sql.Date;
 
-public class PDateArray extends PArrayDataType<Date> {
-
-  public static final PDateArray INSTANCE = new PDateArray();
-
-  private PDateArray() {
-super("DATE ARRAY", PDataType.ARRAY_TYPE_BASE + 
PDate.INSTANCE.getSqlType(), PhoenixArray.class,
-null, 40);
-  }
+import org.apache.phoenix.schema.SortOrder;
 
-  @Override
-  public boolean isArrayType() {
-return true;
-  }
+public class PDateArray extends PArrayDataType<Date> {
 
-  @Override
-  public boolean isFixedWidth() {
-return false;
-  }
+public static final PDateArray INSTANCE = new PDateArray();
 
-  @Override
-  public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
-return compareTo(lhs, rhs);
-  }
+private PDateArray() {
+super("DATE ARRAY", PDataType.ARRAY_TYPE_BASE + 
PDate.INSTANCE.getSqlType(), PhoenixArray.class,
+null, 40);
+}
 
-  @Override
-  public Integer getByteSize() {
-return null;
-  }
+@Override
+public boolean isArrayType() {
+return true;
+}
 
-  @Override
-  public byte[] toBytes(Object object) {
-return toBytes(object, SortOrder.ASC);
-  }
+@Override
+public boolean isFixedWidth() {
+return false;
+}
 
-  @Override
-  public byte[] toBytes(Object object, SortOrder sortOrder) {
-return toBytes(object, PDate.INSTANCE, sortOrder);
-  }
+@Override
+public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
+return compareTo(lhs, rhs);
+}
 
-  @Override
-  public Object toObject(byte[] bytes, int offset, int length,
-  PDataType actualType, SortOrder sortOrder, Integer maxLength, Integer 
scale) {
-return toObject(bytes, offset, length, PDate.INSTANCE, sortOrder, 
maxLength, scale,
-PDate.INSTANCE);
-  }
+@Override
+public Integer getByteSize() {
+return null;
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType) {
-return isCoercibleTo(targetType, this);
-  }
+@Override
+public byte[] toBytes(Object object) {
+return toBytes(object, SortOrder.ASC);
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType, Object value) {
-if (value == null) {
-  return true;
+@Override
+public byte[] toBytes(Object object, SortOrder sortOrder) {
+return toBytes(object, PDate.INSTANCE, sortOrder);
 }
-PhoenixArray pArr = (PhoenixArray) value;
-Object[] dateArr = (Object[]) pArr.array;
-for (Object i : dateArr) {
-  if (!super.isCoercibleTo(PDate.INSTANCE, i)) {
-return false;
-  }
+
+@Override
+public Object toObject(byte[] bytes, int offset, int length,
+PDataType actualType, SortOrder sortOrder, Integer maxLength, 
Integer scale) {
+return toObject(bytes, offset, length, PDate.INSTANCE, sortOrder, 
maxLength, scale,
+PDate.INSTANCE);
 }
-return true;
-  }
 
-  @Override
-  public int getResultSetSqlType() {
-return Types.ARRAY;
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType) {
+return isCoercibleTo(targetType, this);
+}
 
-  @Override
-  public void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType 
actualType,
-  Integer maxLength, Integer scale, SortOrder actualModifer, Integer 
desiredMaxLength,
-  Integer desiredScale,SortOrder desiredModifier) {
-coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, 
desiredScale,
-this, actualModifer, desiredModifier);
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType, Object value) {
+if (value == null) {
+return true;
+}
+PhoenixArray pArr = (PhoenixArray) value;
+Object[] dateArr = (Object[]) pArr.array;
+for (Object i : dateArr) {
+if (!super.isCoercibleTo(PDate.INSTANCE, i)) {
+return false;
+}
+}
+return true;
+}
 
-  @Override
-  public Object getSampleValue(Integer maxLength, Integer arrayLength) {
-return getSampleValue(PDate.INSTANCE, arrayLength, maxLength);
-  }
+@Override
+public Object getSampleValue(Integer maxLength, Integer arrayLength) 

[3/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/dfc1af7d/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
index 60d2020..2c91dc5 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
@@ -47,1060 +47,1036 @@ import com.google.common.primitives.Longs;
  */
 public abstract class PDataType<T> implements DataType<T>, 
Comparable<PDataType<?>> {
 
-  private final String sqlTypeName;
-  private final int sqlType;
-  private final Class clazz;
-  private final byte[] clazzNameBytes;
-  private final byte[] sqlTypeNameBytes;
-  private final PDataCodec codec;
-  private final int ordinal;
-
-  protected PDataType(String sqlTypeName, int sqlType, Class clazz, PDataCodec 
codec, int ordinal) {
-this.sqlTypeName = sqlTypeName;
-this.sqlType = sqlType;
-this.clazz = clazz;
-this.clazzNameBytes = Bytes.toBytes(clazz.getName());
-this.sqlTypeNameBytes = Bytes.toBytes(sqlTypeName);
-this.codec = codec;
-this.ordinal = ordinal;
-  }
-
-  @Deprecated
-  public static PDataType[] values() {
-return PDataTypeFactory.getInstance().getOrderedTypes();
-  }
-
-  @Deprecated
-  public int ordinal() {
-return ordinal;
-  }
-
-  @Override
-  public Class encodedClass() {
-return getJavaClass();
-  }
-
-  public boolean isCastableTo(PDataType targetType) {
-return isComparableTo(targetType);
-  }
-
-  public final PDataCodec getCodec() {
-return codec;
-  }
-
-  public boolean isBytesComparableWith(PDataType otherType) {
-return this == otherType
-|| this.getClass() == PVarbinary.class
-|| otherType == PVarbinary.INSTANCE
-|| this.getClass() == PBinary.class
-|| otherType == PBinary.INSTANCE;
-  }
-
-  public int estimateByteSize(Object o) {
-if (isFixedWidth()) {
-  return getByteSize();
-}
-if (isArrayType()) {
-  PhoenixArray array = (PhoenixArray) o;
-  int noOfElements = array.numElements;
-  int totalVarSize = 0;
-  for (int i = 0; i < noOfElements; i++) {
-totalVarSize += array.estimateByteSize(i);
-  }
-  return totalVarSize;
-}
[4/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/dfc1af7d/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
index 4e32cc0..dd11569 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.schema.types;
 import java.io.DataOutputStream;
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.sql.Types;
 import java.text.Format;
 import java.util.LinkedList;
 import java.util.List;
@@ -34,61 +35,88 @@ import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.ValueSchema;
 import org.apache.phoenix.schema.tuple.Tuple;
 import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TrustedByteArrayOutputStream;
 
 import com.google.common.base.Objects;
 import com.google.common.base.Preconditions;
 
 /**
- * The datatype for PColummns that are Arrays. Any variable length array would follow the below order.
- * Every element would be seperated by a seperator byte '0'. Null elements are counted and once a first
- * non null element appears we write the count of the nulls prefixed with a seperator byte.
- * Trailing nulls are not taken into account. The last non null element is followed by two seperator bytes.
- * For eg a, b, null, null, c, null -> 65 0 66 0 0 2 67 0 0 0
- * a null null null b c null d -> 65 0 0 3 66 0 67 0 0 1 68 0 0 0.
- * The reason we use this serialization format is to allow the
- * byte array of arrays of the same type to be directly comparable against each other.
- * This prevents a costly deserialization on compare and allows an array column to be used as the last column in a primary key constraint.
+ * The datatype for PColummns that are Arrays. Any variable length array would follow the below order. Every element
+ * would be seperated by a seperator byte '0'. Null elements are counted and once a first non null element appears we
+ * write the count of the nulls prefixed with a seperator byte. Trailing nulls are not taken into account. The last non
+ * null element is followed by two seperator bytes. For eg a, b, null, null, c, null -> 65 0 66 0 0 2 67 0 0 0 a null
+ * null null b c null d -> 65 0 0 3 66 0 67 0 0 1 68 0 0 0. The reason we use this serialization format is to allow the
+ * byte array of arrays of the same type to be directly comparable against each other. This prevents a costly
+ * deserialization on compare and allows an array column to be used as the last column in a primary key constraint.
  */
 public abstract class PArrayDataType extends PDataType {
 
+@Override
+public final int getResultSetSqlType() {
+  return Types.ARRAY;
+}
+
+@Override
+public final void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType actualType,
+    Integer maxLength, Integer scale, SortOrder actualModifer, Integer desiredMaxLength,
+    Integer desiredScale, SortOrder desiredModifier, boolean expectedRowKeyOrderOptimizable) {
+  coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, desiredScale,
+      this, actualModifer, desiredModifier, expectedRowKeyOrderOptimizable);
+}
+
+@Override
+public final void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType actualType,
+    Integer maxLength, Integer scale, SortOrder actualModifer, Integer desiredMaxLength,
+    Integer desiredScale, SortOrder desiredModifier) {
+  coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, desiredScale,
+      this, actualModifer, desiredModifier, true);
+}
+
 public static final byte ARRAY_SERIALIZATION_VERSION = 1;
 
-  protected PArrayDataType(String sqlTypeName, int sqlType, Class clazz, PDataCodec codec, int ordinal) {
-    super(sqlTypeName, sqlType, clazz, codec, ordinal);
-  }
+protected PArrayDataType(String sqlTypeName, int sqlType, Class clazz, PDataCodec codec, int ordinal) {
+    super(sqlTypeName, sqlType, clazz, codec, ordinal);
+}
+
+private static byte getSeparatorByte(boolean rowKeyOrderOptimizable, SortOrder sortOrder) {
+    return SchemaUtil.getSeparatorByte(rowKeyOrderOptimizable, false, sortOrder);
+}
 
-  public byte[] toBytes(Object object, PDataType baseType, SortOrder sortOrder) {
-    if (object == null) {
-      throw new ConstraintViolationException(this + " may not be null");
-    }
-    PhoenixArray arr = ((PhoenixArray) object);
+public byte[] toBytes(Object object, PDataType baseType, SortOrder sortOrder) 
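The null-run encoding described in the PArrayDataType Javadoc quoted in the diff above can be sketched in a few lines. The following is a hypothetical, ASC-only re-implementation for illustration (the `encode` method is not a Phoenix API); it reproduces the two byte layouts given in the comment:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class ArrayEncodingSketch {
    static final byte SEP = 0; // separator byte for ASC order, per the Javadoc

    // Hypothetical sketch of the variable-length array layout described in
    // the PArrayDataType Javadoc; not the actual Phoenix code.
    static byte[] encode(String[] elems) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int pendingNulls = 0;
        for (String e : elems) {
            if (e == null) {
                pendingNulls++; // nulls are only counted until a non-null arrives
            } else {
                if (pendingNulls > 0) { // flush the null run: separator + count
                    out.write(SEP);
                    out.write(pendingNulls);
                    pendingNulls = 0;
                }
                for (byte b : e.getBytes()) {
                    out.write(b);
                }
                out.write(SEP); // every element is followed by a separator
            }
        }
        // trailing nulls are dropped; the array ends with two extra separators
        out.write(SEP);
        out.write(SEP);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // a, b, null, null, c, null -> 65 0 66 0 0 2 67 0 0 0
        System.out.println(Arrays.toString(encode(new String[] { "a", "b", null, null, "c", null })));
        // a, null, null, null, b, c, null, d -> 65 0 0 3 66 0 67 0 0 1 68 0 0 0
        System.out.println(Arrays.toString(encode(new String[] { "a", null, null, null, "b", "c", null, "d" })));
    }
}
```

Because elements keep their own byte representation and separators are a fixed low byte, two encoded arrays of the same base type stay directly byte-comparable, which is the property the Javadoc calls out.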

[3/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/2620a80c/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
index 60d2020..2c91dc5 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
@@ -47,1060 +47,1036 @@ import com.google.common.primitives.Longs;
  */
 public abstract class PDataType<T> implements DataType<T>, Comparable<PDataType<?>> {
 
-  private final String sqlTypeName;
-  private final int sqlType;
-  private final Class clazz;
-  private final byte[] clazzNameBytes;
-  private final byte[] sqlTypeNameBytes;
-  private final PDataCodec codec;
-  private final int ordinal;
-
-  protected PDataType(String sqlTypeName, int sqlType, Class clazz, PDataCodec codec, int ordinal) {
-this.sqlTypeName = sqlTypeName;
-this.sqlType = sqlType;
-this.clazz = clazz;
-this.clazzNameBytes = Bytes.toBytes(clazz.getName());
-this.sqlTypeNameBytes = Bytes.toBytes(sqlTypeName);
-this.codec = codec;
-this.ordinal = ordinal;
-  }
-
-  @Deprecated
-  public static PDataType[] values() {
-return PDataTypeFactory.getInstance().getOrderedTypes();
-  }
-
-  @Deprecated
-  public int ordinal() {
-return ordinal;
-  }
-
-  @Override
-  public Class encodedClass() {
-return getJavaClass();
-  }
-
-  public boolean isCastableTo(PDataType targetType) {
-return isComparableTo(targetType);
-  }
-
-  public final PDataCodec getCodec() {
-return codec;
-  }
-
-  public boolean isBytesComparableWith(PDataType otherType) {
-return this == otherType
-|| this.getClass() == PVarbinary.class
-|| otherType == PVarbinary.INSTANCE
-|| this.getClass() == PBinary.class
-|| otherType == PBinary.INSTANCE;
-  }
-
-  public int estimateByteSize(Object o) {
-if (isFixedWidth()) {
-  return getByteSize();
-}
-if (isArrayType()) {
-  PhoenixArray array = (PhoenixArray) o;
-  int noOfElements = array.numElements;
-  int totalVarSize = 0;
-  for (int i = 0; i < noOfElements; i++) {
-totalVarSize += array.estimateByteSize(i);
-  }
-  return totalVarSize;
-}
-// Non fixed width types must override this
-throw new UnsupportedOperationException();
-  }
-
-  public Integer getMaxLength(Object o) {
-return null;
-  }
-
-  public Integer getScale(Object o) {
-return null;
-  }
-
-  /**
-   * Estimate the byte size from the type length. For example, for char, byte size would be the
-   * same as length. For decimal, byte size would have no correlation with the length.
-   */
-  public Integer estimateByteSizeFromLength(Integer length) {
-if (isFixedWidth()) {
-  return getByteSize();
-}
-if (isArrayType()) {
-  return null;
-}
-// If not fixed width, default to say the byte size is the same as length.
-return length;
-  }
-
-  public final String getSqlTypeName() {
-return sqlTypeName;
-  }
-
-  public final int getSqlType() {
-return sqlType;
-  }
-
-  public final Class getJavaClass() {
-return clazz;
-  }
-
-  public boolean isArrayType() {
-return false;
-  }
-
-  public final int compareTo(byte[] lhs, int lhsOffset, int lhsLength, SortOrder lhsSortOrder,
-      byte[] rhs, int rhsOffset, int rhsLength, SortOrder rhsSortOrder,
-      PDataType rhsType) {
-    Preconditions.checkNotNull(lhsSortOrder);
-    Preconditions.checkNotNull(rhsSortOrder);
-    if (this.isBytesComparableWith(rhsType)) { // directly compare the bytes
-      return compareTo(lhs, lhsOffset, lhsLength, lhsSortOrder, rhs, rhsOffset, rhsLength,
-          rhsSortOrder);
-    }
-    PDataCodec lhsCodec = this.getCodec();
-    if (lhsCodec
-        == null) { // no lhs native type representation, so convert rhsType to bytes representation of lhsType
-      byte[] rhsConverted =
-          this.toBytes(this.toObject(rhs, rhsOffset, rhsLength, rhsType, rhsSortOrder));
-      if (rhsSortOrder == SortOrder.DESC) {
-        rhsSortOrder = SortOrder.ASC;
-      }
-      if (lhsSortOrder == SortOrder.DESC) {
-        lhs = SortOrder.invert(lhs, lhsOffset, new byte[lhsLength], 0, lhsLength);
-      }
-      return Bytes.compareTo(lhs, lhsOffset, lhsLength, rhsConverted, 0, rhsConverted.length);
-    }
-    PDataCodec rhsCodec = rhsType.getCodec();
-    if (rhsCodec == null) {
-      byte[] lhsConverted =
-          rhsType.toBytes(rhsType.toObject(lhs, lhsOffset, lhsLength, this, lhsSortOrder));
-      if (lhsSortOrder == SortOrder.DESC) {
-        lhsSortOrder = SortOrder.ASC;
-      }
-      if (rhsSortOrder == SortOrder.DESC) {
-        rhs = SortOrder.invert(rhs, rhsOffset, new byte[rhsLength], 0, rhsLength);
-      }
-      return By
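The compareTo code quoted above normalizes any DESC-encoded operand back to its ASC encoding (via SortOrder.invert) before doing a plain byte comparison. A minimal, hypothetical sketch of that normalization idea, not the actual Phoenix code:

```java
public class MixedSortOrderCompareSketch {

    // Bit inversion reverses a value's unsigned byte ordering; applying it
    // twice restores the original bytes, which is how a DESC-stored value
    // can be flipped back to ASC for comparison. Hypothetical helper names.
    static byte[] invert(byte[] src) {
        byte[] dst = new byte[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = (byte) ~src[i]; // ones' complement of each byte
        }
        return dst;
    }

    // Unsigned lexicographic comparison, in the spirit of Bytes.compareTo.
    static int unsignedCompare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length; // shorter prefix sorts first
    }

    // Compare two values whose stored bytes may use different sort orders
    // by normalizing both sides to ASC first.
    static int compareNormalized(byte[] lhs, boolean lhsDesc, byte[] rhs, boolean rhsDesc) {
        if (lhsDesc) {
            lhs = invert(lhs);
        }
        if (rhsDesc) {
            rhs = invert(rhs);
        }
        return unsignedCompare(lhs, rhs);
    }

    public static void main(String[] args) {
        byte[] asc = "abc".getBytes();          // value stored ASC
        byte[] desc = invert("abd".getBytes()); // different value stored DESC
        // "abc" < "abd" regardless of how each side was stored
        System.out.println(compareNormalized(asc, false, desc, true) < 0); // true
    }
}
```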

[2/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/2620a80c/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
index 764401c..a07418c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDateArray.java
@@ -17,93 +17,78 @@
  */
 package org.apache.phoenix.schema.types;
 
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.phoenix.schema.SortOrder;
-
-import java.sql.Types;
 import java.sql.Date;
 
-public class PDateArray extends PArrayDataType {
-
-  public static final PDateArray INSTANCE = new PDateArray();
-
-  private PDateArray() {
-super("DATE ARRAY", PDataType.ARRAY_TYPE_BASE + PDate.INSTANCE.getSqlType(), PhoenixArray.class,
-null, 40);
-  }
+import org.apache.phoenix.schema.SortOrder;
 
-  @Override
-  public boolean isArrayType() {
-return true;
-  }
+public class PDateArray extends PArrayDataType {
 
-  @Override
-  public boolean isFixedWidth() {
-return false;
-  }
+public static final PDateArray INSTANCE = new PDateArray();
 
-  @Override
-  public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
-return compareTo(lhs, rhs);
-  }
+private PDateArray() {
+super("DATE ARRAY", PDataType.ARRAY_TYPE_BASE + PDate.INSTANCE.getSqlType(), PhoenixArray.class,
+null, 40);
+}
 
-  @Override
-  public Integer getByteSize() {
-return null;
-  }
+@Override
+public boolean isArrayType() {
+return true;
+}
 
-  @Override
-  public byte[] toBytes(Object object) {
-return toBytes(object, SortOrder.ASC);
-  }
+@Override
+public boolean isFixedWidth() {
+return false;
+}
 
-  @Override
-  public byte[] toBytes(Object object, SortOrder sortOrder) {
-return toBytes(object, PDate.INSTANCE, sortOrder);
-  }
+@Override
+public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
+return compareTo(lhs, rhs);
+}
 
-  @Override
-  public Object toObject(byte[] bytes, int offset, int length,
-  PDataType actualType, SortOrder sortOrder, Integer maxLength, Integer scale) {
-return toObject(bytes, offset, length, PDate.INSTANCE, sortOrder, maxLength, scale,
-PDate.INSTANCE);
-  }
+@Override
+public Integer getByteSize() {
+return null;
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType) {
-return isCoercibleTo(targetType, this);
-  }
+@Override
+public byte[] toBytes(Object object) {
+return toBytes(object, SortOrder.ASC);
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType, Object value) {
-if (value == null) {
-  return true;
+@Override
+public byte[] toBytes(Object object, SortOrder sortOrder) {
+return toBytes(object, PDate.INSTANCE, sortOrder);
 }
-PhoenixArray pArr = (PhoenixArray) value;
-Object[] dateArr = (Object[]) pArr.array;
-for (Object i : dateArr) {
-  if (!super.isCoercibleTo(PDate.INSTANCE, i)) {
-return false;
-  }
+
+@Override
+public Object toObject(byte[] bytes, int offset, int length,
+PDataType actualType, SortOrder sortOrder, Integer maxLength, Integer scale) {
+return toObject(bytes, offset, length, PDate.INSTANCE, sortOrder, maxLength, scale,
+PDate.INSTANCE);
 }
-return true;
-  }
 
-  @Override
-  public int getResultSetSqlType() {
-return Types.ARRAY;
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType) {
+return isCoercibleTo(targetType, this);
+}
 
-  @Override
-  public void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType actualType,
-  Integer maxLength, Integer scale, SortOrder actualModifer, Integer desiredMaxLength,
-  Integer desiredScale,SortOrder desiredModifier) {
-coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, desiredScale,
-this, actualModifer, desiredModifier);
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType, Object value) {
+if (value == null) {
+return true;
+}
+PhoenixArray pArr = (PhoenixArray) value;
+Object[] dateArr = (Object[]) pArr.array;
+for (Object i : dateArr) {
+if (!super.isCoercibleTo(PDate.INSTANCE, i)) {
+return false;
+}
+}
+return true;
+}
 
-  @Override
-  public Object getSampleValue(Integer maxLength, Integer arrayLength) {
-return getSampleValue(PDate.INSTANCE, arrayLength, maxLength);
-  }
+@Override
+public Object getSampleValue(Integer maxLength, Integer arrayLength) 

[7/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
PHOENIX-2067 Sort order incorrect for variable length DESC columns


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2620a80c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2620a80c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2620a80c

Branch: refs/heads/master
Commit: 2620a80c1e35c0d214f06a1b16e99da5415a1a2c
Parents: 01b4f60
Author: James Taylor 
Authored: Mon Jul 13 11:17:37 2015 -0700
Committer: James Taylor 
Committed: Tue Jul 14 10:55:27 2015 -0700

--
 dev/eclipse_prefs_phoenix.epf   |2 +-
 .../org/apache/phoenix/end2end/ArrayIT.java |   59 +
 .../org/apache/phoenix/end2end/IsNullIT.java|   52 +-
 .../apache/phoenix/end2end/LpadFunctionIT.java  |   24 +
 .../apache/phoenix/end2end/ReverseScanIT.java   |   30 +
 .../phoenix/end2end/RowValueConstructorIT.java  |7 +-
 .../apache/phoenix/end2end/SortOrderFIT.java|  563 -
 .../org/apache/phoenix/end2end/SortOrderIT.java |  572 +
 .../apache/phoenix/compile/FromCompiler.java|3 +-
 .../apache/phoenix/compile/JoinCompiler.java|8 +-
 .../apache/phoenix/compile/OrderByCompiler.java |4 +-
 .../phoenix/compile/OrderPreservingTracker.java |7 +-
 .../org/apache/phoenix/compile/ScanRanges.java  |5 +-
 .../compile/TupleProjectionCompiler.java|4 +-
 .../apache/phoenix/compile/UnionCompiler.java   |5 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   16 +-
 .../apache/phoenix/compile/WhereOptimizer.java  |   53 +-
 .../coprocessor/BaseScannerRegionObserver.java  |2 +
 .../coprocessor/MetaDataEndpointImpl.java   |   73 +-
 .../UngroupedAggregateRegionObserver.java   |  108 +-
 .../coprocessor/generated/PTableProtos.java |  105 +-
 .../phoenix/exception/SQLExceptionCode.java |1 +
 .../apache/phoenix/execute/BaseQueryPlan.java   |   14 +-
 .../DescVarLengthFastByteComparisons.java   |  219 ++
 .../expression/ArrayConstructorExpression.java  |2 +-
 .../phoenix/expression/OrderByExpression.java   |   13 +-
 .../RowValueConstructorExpression.java  |8 +-
 .../function/ArrayConcatFunction.java   |   11 +-
 .../function/ArrayModifierFunction.java |3 +-
 .../expression/function/LpadFunction.java   |8 +-
 .../expression/util/regex/JONIPattern.java  |5 +-
 .../apache/phoenix/filter/SkipScanFilter.java   |3 +-
 .../apache/phoenix/index/IndexMaintainer.java   |  127 +-
 .../phoenix/iterate/BaseResultIterators.java|  109 +-
 .../phoenix/iterate/OrderedResultIterator.java  |   52 +-
 .../apache/phoenix/jdbc/PhoenixConnection.java  |   29 +-
 .../query/ConnectionQueryServicesImpl.java  |   17 +-
 .../java/org/apache/phoenix/query/KeyRange.java |   14 -
 .../apache/phoenix/query/QueryConstants.java|3 +
 .../apache/phoenix/schema/DelegateTable.java|5 +
 .../apache/phoenix/schema/MetaDataClient.java   |   31 +-
 .../java/org/apache/phoenix/schema/PTable.java  |9 +
 .../org/apache/phoenix/schema/PTableImpl.java   |   78 +-
 .../org/apache/phoenix/schema/RowKeySchema.java |   44 +-
 .../phoenix/schema/RowKeyValueAccessor.java |   12 +-
 .../org/apache/phoenix/schema/ValueSchema.java  |   30 +-
 .../phoenix/schema/stats/StatisticsUtil.java|4 +-
 .../phoenix/schema/types/PArrayDataType.java|  682 +++---
 .../phoenix/schema/types/PBinaryArray.java  |  122 +-
 .../phoenix/schema/types/PBooleanArray.java |  112 +-
 .../apache/phoenix/schema/types/PCharArray.java |  128 +-
 .../apache/phoenix/schema/types/PDataType.java  | 2037 +-
 .../apache/phoenix/schema/types/PDateArray.java |  131 +-
 .../phoenix/schema/types/PDecimalArray.java |  126 +-
 .../phoenix/schema/types/PDoubleArray.java  |  128 +-
 .../phoenix/schema/types/PFloatArray.java   |  130 +-
 .../phoenix/schema/types/PIntegerArray.java |  130 +-
 .../apache/phoenix/schema/types/PLongArray.java |  130 +-
 .../phoenix/schema/types/PSmallintArray.java|  130 +-
 .../apache/phoenix/schema/types/PTimeArray.java |  133 +-
 .../phoenix/schema/types/PTimestampArray.java   |  132 +-
 .../phoenix/schema/types/PTinyintArray.java |  130 +-
 .../schema/types/PUnsignedDateArray.java|  128 +-
 .../schema/types/PUnsignedDoubleArray.java  |  136 +-
 .../schema/types/PUnsignedFloatArray.java   |  130 +-
 .../phoenix/schema/types/PUnsignedIntArray.java |  130 +-
 .../schema/types/PUnsignedLongArray.java|  130 +-
 .../schema/types/PUnsignedSmallintArray.java|  132 +-
 .../schema/types/PUnsignedTimeArray.java|  132 +-
 .../schema/types/PUnsignedTimestampArray.java   |  134 +-
 .../schema/types/PUnsignedTinyintArray.java |  132 +-
 .../phoenix/schema/types/PVarbinaryArray.java   |  130 +-
 .../phoenix/schema/types/PVarcharArray.java |  130 +-
 .../java/org/apache/phoenix/util/ByteUtil.ja

[5/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/2620a80c/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java b/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
index 0956753..a12f633 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
@@ -59,14 +59,10 @@ import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixStatement;
-import org.apache.phoenix.parse.AndParseNode;
-import org.apache.phoenix.parse.BaseParseNodeVisitor;
-import org.apache.phoenix.parse.BooleanParseNodeVisitor;
 import org.apache.phoenix.parse.FunctionParseNode;
 import org.apache.phoenix.parse.ParseNode;
 import org.apache.phoenix.parse.SQLParser;
 import org.apache.phoenix.parse.StatelessTraverseAllParseNodeVisitor;
-import org.apache.phoenix.parse.TraverseAllParseNodeVisitor;
 import org.apache.phoenix.parse.UDFParseNode;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.ColumnNotFoundException;
@@ -265,6 +261,7 @@ public class IndexMaintainer implements Writable, Iterable<ColumnReference> {
     private int[] dataPkPosition;
     private int maxTrailingNulls;
     private ColumnReference dataEmptyKeyValueRef;
+    private boolean rowKeyOrderOptimizable;
 
     private IndexMaintainer(RowKeySchema dataRowKeySchema, boolean isDataTableSalted) {
         this.dataRowKeySchema = dataRowKeySchema;
@@ -273,6 +270,7 @@ public class IndexMaintainer implements Writable, Iterable<ColumnReference> {
 
     private IndexMaintainer(PTable dataTable, PTable index, PhoenixConnection connection) {
         this(dataTable.getRowKeySchema(), dataTable.getBucketNum() != null);
+        this.rowKeyOrderOptimizable = index.rowKeyOrderOptimizable();
         this.isMultiTenant = dataTable.isMultiTenant();
         this.viewIndexId = index.getViewIndexId() == null ? null : MetaDataUtil.getViewIndexIdDataType().toBytes(index.getViewIndexId());
         this.isLocalIndex = index.getIndexType() == IndexType.LOCAL;
@@ -434,7 +432,7 @@ public class IndexMaintainer implements Writable, Iterable<ColumnReference> {
             dataRowKeySchema.next(ptr, dataPosOffset, maxRowKeyOffset);
             output.write(ptr.get(), ptr.getOffset(), ptr.getLength());
             if (!dataRowKeySchema.getField(dataPosOffset).getDataType().isFixedWidth()) {
-                output.writeByte(QueryConstants.SEPARATOR_BYTE);
+                output.writeByte(SchemaUtil.getSeparatorByte(rowKeyOrderOptimizable, ptr.getLength()==0, dataRowKeySchema.getField(dataPosOffset)));
             }
             dataPosOffset++;
         }
@@ -481,21 +479,22 @@ public class IndexMaintainer implements Writable, Iterable<ColumnReference> {
             }
             boolean isDataColumnInverted = dataSortOrder != SortOrder.ASC;
             PDataType indexColumnType = IndexUtil.getIndexColumnDataType(isNullable, dataColumnType);
-            boolean isBytesComparable = dataColumnType.isBytesComparableWith(indexColumnType) ;
-            if (isBytesComparable && isDataColumnInverted == descIndexColumnBitSet.get(i)) {
+            boolean isBytesComparable = dataColumnType.isBytesComparableWith(indexColumnType);
+            boolean isIndexColumnDesc = descIndexColumnBitSet.get(i);
+            if (isBytesComparable && isDataColumnInverted == isIndexColumnDesc) {
                 output.write(ptr.get(), ptr.getOffset(), ptr.getLength());
             } else {
                 if (!isBytesComparable)  {
                     indexColumnType.coerceBytes(ptr, dataColumnType, dataSortOrder, SortOrder.getDefault());
                 }
-                if (descIndexColumnBitSet.get(i) != isDataColumnInverted) {
+                if (isDataColumnInverted != isIndexColumnDesc) {
                     writeInverted(ptr.get(), ptr.getOffset(), ptr.getLength(), output);
                 } else {
                     output.write(ptr.get(), ptr.getOffset(), ptr.getLength());
                 }
             }
             if (!indexColumnType.isFixedWidth()) {
-                output.writeByte(QueryConstants.SEPARATOR_BYTE);
+                output.writeByte(SchemaUtil.getSeparatorByte(rowKeyOrderOptimizable, ptr.getLength() == 0, isIndexColumnDesc ? SortOrder.DESC : SortOrder.ASC));
             }
         }
         int length = stream.size();
@@ -545,7 +544,7 @@ public class IndexMaintainer implements Writable, Iterable<ColumnReference> {
             indexRowKeySchema.next(ptr, indexPosOffset, maxRowKeyOffset);
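The SchemaUtil.getSeparatorByte calls introduced in the IndexMaintainer hunk above exist because a variable-length DESC column cannot keep the plain 0x00 terminator: inverting the value bytes alone makes a shorter value compare before its own longer prefix, which is backwards under DESC. The following self-contained illustration is a hypothetical sketch of that failure mode and its fix (helper names are not Phoenix APIs):

```java
import java.util.Arrays;

public class SeparatorByteSketch {

    // Bit-inverting a value's bytes reverses its unsigned sort order;
    // this models how a DESC column's bytes are stored.
    static byte[] invert(byte[] src) {
        byte[] dst = new byte[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = (byte) ~src[i];
        }
        return dst;
    }

    // Unsigned lexicographic comparison, in the spirit of Bytes.compareTo.
    static int unsignedCompare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length;
    }

    // Append the row-key separator byte after the value.
    static byte[] terminate(byte[] value, byte sep) {
        byte[] out = Arrays.copyOf(value, value.length + 1);
        out[value.length] = sep;
        return out;
    }

    public static void main(String[] args) {
        byte[] ab = invert("ab".getBytes());   // "ab" stored DESC
        byte[] abc = invert("abc".getBytes()); // "abc" stored DESC
        // Under DESC, "abc" > "ab" must sort first, so compare(ab, abc) should be > 0.
        // With the ASC separator 0x00 the order is still wrong:
        System.out.println(unsignedCompare(terminate(ab, (byte) 0x00), terminate(abc, (byte) 0x00)) > 0); // false
        // With the inverted separator 0xFF the order is correct:
        System.out.println(unsignedCompare(terminate(ab, (byte) 0xFF), terminate(abc, (byte) 0xFF)) > 0); // true
    }
}
```

Picking 0xFF for DESC variable-length fields (when rowKeyOrderOptimizable is set) is exactly the kind of choice the patch centralizes in SchemaUtil.getSeparatorByte.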

[6/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
http://git-wip-us.apache.org/repos/asf/phoenix/blob/2620a80c/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index 7b39a28..e12f5a4 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -150,8 +150,10 @@ public class UpsertCompiler {
                         SQLExceptionCode.DATA_EXCEEDS_MAX_CAPACITY).setColumnName(column.getName().getString())
                         .setMessage("value=" + column.getDataType().toStringLiteral(ptr, null)).build()
                         .buildException(); }
-                column.getDataType().coerceBytes(ptr, value, column.getDataType(), precision, scale,
-                    SortOrder.getDefault(), column.getMaxLength(), column.getScale(), column.getSortOrder());
+                column.getDataType().coerceBytes(ptr, value, column.getDataType(), 
+                    precision, scale, SortOrder.getDefault(), 
+                    column.getMaxLength(), column.getScale(), column.getSortOrder(),
+                    table.rowKeyOrderOptimizable());
                 values[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
             }
             setValues(values, pkSlotIndexes, columnIndexes, table, mutation, statement);
@@ -772,6 +774,7 @@ public class UpsertCompiler {
             final SequenceManager sequenceManager = context.getSequenceManager();
             // Next evaluate all the expressions
             int nodeIndex = nodeIndexOffset;
+            PTable table = tableRef.getTable();
             Tuple tuple = sequenceManager.getSequenceCount() == 0 ? null :
                     sequenceManager.newSequenceTuple(null);
 for (Expression constantExpression : constantExpressions) {
@@ -793,9 +796,10 @@
                         .setMessage("value=" + constantExpression.toString()).build().buildException();
                 }
             }
-            column.getDataType().coerceBytes(ptr, value,
-                constantExpression.getDataType(), constantExpression.getMaxLength(), constantExpression.getScale(), constantExpression.getSortOrder(),
-                column.getMaxLength(), column.getScale(),column.getSortOrder());
+            column.getDataType().coerceBytes(ptr, value, constantExpression.getDataType(), 
+                constantExpression.getMaxLength(), constantExpression.getScale(), constantExpression.getSortOrder(),
+                column.getMaxLength(), column.getScale(),column.getSortOrder(),
+                table.rowKeyOrderOptimizable());
             if (overlapViewColumns.contains(column) && Bytes.compareTo(ptr.get(), ptr.getOffset(), ptr.getLength(), column.getViewConstant(), 0, column.getViewConstant().length-1) != 0) {
                 throw new SQLExceptionInfo.Builder(
                         SQLExceptionCode.CANNOT_UPDATE_VIEW_COLUMN)
@@ -814,7 +818,7 @@
             }
         }
         Map mutation = Maps.newHashMapWithExpectedSize(1);
-        setValues(values, pkSlotIndexes, columnIndexes, tableRef.getTable(), mutation, statement);
+        setValues(values, pkSlotIndexes, columnIndexes, table, mutation, statement);
         return new MutationState(tableRef, mutation, 0, maxSize, connection);
     }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2620a80c/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index 0cbef11..332f293 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -61,7 +61,9 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.SortOrder;
+import org.apache.phoenix.schema.ValueSchema.Field;
 import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PArrayDataType;
 import org.apache.phoenix.schema.types.PChar;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PVarbinary;
@@ -194,8 +196,9 @@ public class WhereOptimizer {
 

[1/7] phoenix git commit: PHOENIX-2067 Sort order incorrect for variable length DESC columns

2015-07-14 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 01b4f6055 -> 2620a80c1


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2620a80c/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
index 1159b5c..3407310 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestampArray.java
@@ -17,94 +17,80 @@
  */
 package org.apache.phoenix.schema.types;
 
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.phoenix.schema.SortOrder;
+import java.sql.Timestamp;
 
-import java.sql.*;
+import org.apache.phoenix.schema.SortOrder;
 
 public class PUnsignedTimestampArray extends PArrayDataType {
 
-  public static final PUnsignedTimestampArray INSTANCE = new PUnsignedTimestampArray();
-
-  private PUnsignedTimestampArray() {
-super("UNSIGNED_TIMESTAMP ARRAY",
-PDataType.ARRAY_TYPE_BASE + PUnsignedTimestamp.INSTANCE.getSqlType(), PhoenixArray.class,
-null, 37);
-  }
-
-  @Override
-  public boolean isArrayType() {
-return true;
-  }
-
-  @Override
-  public boolean isFixedWidth() {
-return false;
-  }
+public static final PUnsignedTimestampArray INSTANCE = new PUnsignedTimestampArray();
 
-  @Override
-  public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
-return compareTo(lhs, rhs);
-  }
+private PUnsignedTimestampArray() {
+super("UNSIGNED_TIMESTAMP ARRAY",
+PDataType.ARRAY_TYPE_BASE + PUnsignedTimestamp.INSTANCE.getSqlType(), PhoenixArray.class,
+null, 37);
+}
 
-  @Override
-  public Integer getByteSize() {
-return null;
-  }
+@Override
+public boolean isArrayType() {
+return true;
+}
 
-  @Override
-  public byte[] toBytes(Object object) {
-return toBytes(object, SortOrder.ASC);
-  }
+@Override
+public boolean isFixedWidth() {
+return false;
+}
 
-  @Override
-  public byte[] toBytes(Object object, SortOrder sortOrder) {
-return toBytes(object, PUnsignedTimestamp.INSTANCE, sortOrder);
-  }
+@Override
+public int compareTo(Object lhs, Object rhs, PDataType rhsType) {
+return compareTo(lhs, rhs);
+}
 
-  @Override
-  public Object toObject(byte[] bytes, int offset, int length,
-  PDataType actualType, SortOrder sortOrder, Integer maxLength,
-  Integer scale) {
-return toObject(bytes, offset, length, PUnsignedTimestamp.INSTANCE, sortOrder,
-maxLength, scale, PUnsignedTimestamp.INSTANCE);
-  }
+@Override
+public Integer getByteSize() {
+return null;
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType) {
-return isCoercibleTo(targetType, this);
-  }
+@Override
+public byte[] toBytes(Object object) {
+return toBytes(object, SortOrder.ASC);
+}
 
-  @Override
-  public boolean isCoercibleTo(PDataType targetType, Object value) {
-if (value == null) {
-  return true;
+@Override
+public byte[] toBytes(Object object, SortOrder sortOrder) {
+return toBytes(object, PUnsignedTimestamp.INSTANCE, sortOrder);
 }
-PhoenixArray pArr = (PhoenixArray) value;
-Object[] timeStampArr = (Object[]) pArr.array;
-for (Object i : timeStampArr) {
-  if (!super.isCoercibleTo(PUnsignedTimestamp.INSTANCE, i)) {
-return false;
-  }
+
+@Override
+public Object toObject(byte[] bytes, int offset, int length,
+PDataType actualType, SortOrder sortOrder, Integer maxLength,
+Integer scale) {
+return toObject(bytes, offset, length, PUnsignedTimestamp.INSTANCE, sortOrder,
+maxLength, scale, PUnsignedTimestamp.INSTANCE);
 }
-return true;
-  }
 
-  @Override
-  public void coerceBytes(ImmutableBytesWritable ptr, Object object, PDataType actualType,
-  Integer maxLength, Integer scale, SortOrder actualModifer, Integer desiredMaxLength,
-  Integer desiredScale, SortOrder desiredModifier) {
-coerceBytes(ptr, object, actualType, maxLength, scale, desiredMaxLength, desiredScale,
-this, actualModifer, desiredModifier);
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType) {
+return isCoercibleTo(targetType, this);
+}
 
-  @Override
-  public int getResultSetSqlType() {
-return Types.ARRAY;
-  }
+@Override
+public boolean isCoercibleTo(PDataType targetType, Object value) {
+if (value == null) {
+return true;
+}
+PhoenixArray pArr = (PhoenixArray) value;
+Object[] timeStampArr = (Object[]) pArr.array;
+for (Object i : timeStam