Build failed in Jenkins: Phoenix | Master #2012

2018-05-01 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4712 When creating an index on a table, meta data cache of views

--
[...truncated 109.53 KB...]
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 359.222 
s - in org.apache.phoenix.end2end.index.LocalMutableTxIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 183.062 
s - in org.apache.phoenix.end2end.join.HashJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 322.371 
s - in org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 191.591 
s - in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.688 s 
- in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.255 s 
- in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.795 s 
- in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.002 s 
- in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 s - 
in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.382 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 583.971 
s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.214 s 
- in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.291 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 227.704 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.539 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 163.494 
s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.632 
s - in org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.654 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.497 s 
- in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.565 s 
- in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 587.656 
s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
213.982 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 287.106 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 587.382 
s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues:214 Expected to find PK in data table: (0,0)
[ERROR] Errors: 
[ERROR]   

phoenix git commit: PHOENIX-4712 When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master de87cf50a -> 40ff0b95e


PHOENIX-4712 When creating an index on a table, meta data cache of views 
related to the table isn't updated
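
For anyone skimming the thread, the scenario this commit covers can be restated outside the test framework. The JDBC sketch below is a hedged, standalone rendering of the new ViewIT test in the diff; the connection URL, table, view and index names are placeholders, and only the SQL shapes shown in the diff are assumed.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ViewIndexCacheSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at your own cluster.
        String url = "jdbc:phoenix:localhost:2181";
        try (Connection conn1 = DriverManager.getConnection(url);
                Connection conn2 = DriverManager.getConnection(url);
                Statement s1 = conn1.createStatement();
                Statement s2 = conn2.createStatement()) {
            s1.execute("CREATE TABLE T1 (COL1 VARCHAR PRIMARY KEY, COL2 VARCHAR)");
            s1.execute("CREATE VIEW V1 (COL3 VARCHAR) AS SELECT * FROM T1");
            // Resolve the view once so its metadata gets cached.
            s1.executeQuery("SELECT * FROM V1").close();
            // Index created on the base table after the view was cached.
            s1.execute("CREATE INDEX IDX1 ON T1 (COL2)");
            // Before this fix, the view's cached metadata was not refreshed when
            // the index was added to its base table, so the hinted plan through
            // the view could ignore IDX1.
            try (ResultSet rs = s2.executeQuery(
                    "EXPLAIN SELECT /*+ INDEX(V1 IDX1) */ * FROM V1 WHERE COL2 = 'aaa'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}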


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/40ff0b95
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/40ff0b95
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/40ff0b95

Branch: refs/heads/master
Commit: 40ff0b95e6d994e8dcf7a49a98bdc5a1bad2ef82
Parents: de87cf5
Author: Thomas D'Silva 
Authored: Tue May 1 13:32:45 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue May 1 21:20:59 2018 -0700

--
 .../java/org/apache/phoenix/end2end/ViewIT.java | 46 
 .../apache/phoenix/schema/MetaDataClient.java   | 11 +++--
 2 files changed, 53 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/40ff0b95/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 5c0d100..279bbd7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -33,6 +33,7 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Statement;
 import java.util.List;
 import java.util.Properties;
 
@@ -894,6 +895,51 @@ public class ViewIT extends BaseViewIT {
 }
 }
 
+@Test
+public void testQueryWithSeparateConnectionForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Connection conn2 = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement();
+Statement s2 = conn2.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s2, tableName, viewName, indexName);
+}
+}
+
+@Test
+public void testQueryForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s, tableName, viewName, indexName);
+}
+}
+
+private void helpTestQueryForViewOnTableThatHasIndex(Statement s1, Statement s2, String tableName, String viewName, String indexName)
+throws SQLException {
+// Create a table
+s1.execute("create table " + tableName + " (col1 varchar primary key, col2 varchar)");
+
+// Create a view on the table
+s1.execute("create view " + viewName + " (col3 varchar) as select * from " + tableName);
+s1.executeQuery("select * from " + viewName);
+// Create a index on the table
+s1.execute("create index " + indexName + " ON " + tableName + " (col2)");
+
+try (ResultSet rs =
+s2.executeQuery("explain select /*+ INDEX(" + viewName + " " + indexName
++ ") */ * from " + viewName + " where col2 = 'aaa'")) {
+String explainPlan = QueryUtil.getExplainPlan(rs);
+
+// check if the query uses the index
+assertTrue(explainPlan.contains(indexName));
+}
+}
+
 private void validate(String viewName, Connection tenantConn, String[] 
whereClauseArray,
 long[] expectedArray) throws SQLException {
 for (int i = 0; i < whereClauseArray.length; ++i) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/40ff0b95/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 69d8a56..009289b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -682,7 +682,7 @@ public class MetaDataClient {
                        // In this case, we update the parent table which may in turn pull
                        // in indexes to add to

phoenix git commit: PHOENIX-4712 When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/5.x-HBase-2.0 87564a864 -> 6db0cb04d


PHOENIX-4712 When creating an index on a table, meta data cache of views 
related to the table isn't updated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6db0cb04
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6db0cb04
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6db0cb04

Branch: refs/heads/5.x-HBase-2.0
Commit: 6db0cb04df9633812365a6280a07a0a7a3caba6d
Parents: 87564a8
Author: Thomas D'Silva 
Authored: Tue May 1 13:32:45 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue May 1 21:20:49 2018 -0700

--
 .../java/org/apache/phoenix/end2end/ViewIT.java | 46 
 .../apache/phoenix/schema/MetaDataClient.java   | 11 +++--
 2 files changed, 53 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6db0cb04/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 4b64a09..e277c18 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -33,6 +33,7 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Statement;
 import java.util.List;
 import java.util.Properties;
 
@@ -894,6 +895,51 @@ public class ViewIT extends BaseViewIT {
 }
 }
 
+@Test
+public void testQueryWithSeparateConnectionForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Connection conn2 = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement();
+Statement s2 = conn2.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s2, tableName, viewName, indexName);
+}
+}
+
+@Test
+public void testQueryForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s, tableName, viewName, indexName);
+}
+}
+
+private void helpTestQueryForViewOnTableThatHasIndex(Statement s1, Statement s2, String tableName, String viewName, String indexName)
+throws SQLException {
+// Create a table
+s1.execute("create table " + tableName + " (col1 varchar primary key, col2 varchar)");
+
+// Create a view on the table
+s1.execute("create view " + viewName + " (col3 varchar) as select * from " + tableName);
+s1.executeQuery("select * from " + viewName);
+// Create a index on the table
+s1.execute("create index " + indexName + " ON " + tableName + " (col2)");
+
+try (ResultSet rs =
+s2.executeQuery("explain select /*+ INDEX(" + viewName + " " + indexName
++ ") */ * from " + viewName + " where col2 = 'aaa'")) {
+String explainPlan = QueryUtil.getExplainPlan(rs);
+
+// check if the query uses the index
+assertTrue(explainPlan.contains(indexName));
+}
+}
+
 private void validate(String viewName, Connection tenantConn, String[] 
whereClauseArray,
 long[] expectedArray) throws SQLException {
 for (int i = 0; i < whereClauseArray.length; ++i) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/6db0cb04/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 2333acc..4997af9 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -684,7 +684,7 @@ public class MetaDataClient {
                        // In this case, we update the parent table which may in turn pull
                        // in

phoenix git commit: PHOENIX-4712 When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 882c059eb -> 5b7b104ed


PHOENIX-4712 When creating an index on a table, meta data cache of views 
related to the table isn't updated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5b7b104e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5b7b104e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5b7b104e

Branch: refs/heads/4.x-HBase-0.98
Commit: 5b7b104ed74ae7f4ef39cafc883e1da00e431503
Parents: 882c059
Author: Thomas D'Silva 
Authored: Tue May 1 13:32:45 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue May 1 21:19:28 2018 -0700

--
 .../java/org/apache/phoenix/end2end/ViewIT.java | 46 
 .../apache/phoenix/schema/MetaDataClient.java   | 11 +++--
 2 files changed, 53 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5b7b104e/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 5c0d100..279bbd7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -33,6 +33,7 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Statement;
 import java.util.List;
 import java.util.Properties;
 
@@ -894,6 +895,51 @@ public class ViewIT extends BaseViewIT {
 }
 }
 
+@Test
+public void testQueryWithSeparateConnectionForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Connection conn2 = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement();
+Statement s2 = conn2.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s2, tableName, viewName, indexName);
+}
+}
+
+@Test
+public void testQueryForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s, tableName, viewName, indexName);
+}
+}
+
+private void helpTestQueryForViewOnTableThatHasIndex(Statement s1, Statement s2, String tableName, String viewName, String indexName)
+throws SQLException {
+// Create a table
+s1.execute("create table " + tableName + " (col1 varchar primary key, col2 varchar)");
+
+// Create a view on the table
+s1.execute("create view " + viewName + " (col3 varchar) as select * from " + tableName);
+s1.executeQuery("select * from " + viewName);
+// Create a index on the table
+s1.execute("create index " + indexName + " ON " + tableName + " (col2)");
+
+try (ResultSet rs =
+s2.executeQuery("explain select /*+ INDEX(" + viewName + " " + indexName
++ ") */ * from " + viewName + " where col2 = 'aaa'")) {
+String explainPlan = QueryUtil.getExplainPlan(rs);
+
+// check if the query uses the index
+assertTrue(explainPlan.contains(indexName));
+}
+}
+
 private void validate(String viewName, Connection tenantConn, String[] 
whereClauseArray,
 long[] expectedArray) throws SQLException {
 for (int i = 0; i < whereClauseArray.length; ++i) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5b7b104e/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 372371b..e69dac7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -677,7 +677,7 @@ public class MetaDataClient {
                        // In this case, we update the parent table which may in turn pull
                        // in

phoenix git commit: PHOENIX-4712 When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 cfac2daac -> e546e38f1


PHOENIX-4712 When creating an index on a table, meta data cache of views 
related to the table isn't updated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e546e38f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e546e38f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e546e38f

Branch: refs/heads/4.x-HBase-1.2
Commit: e546e38f13401a17ab287323cf72fba821a97683
Parents: cfac2da
Author: Thomas D'Silva 
Authored: Tue May 1 13:32:45 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue May 1 21:19:47 2018 -0700

--
 .../java/org/apache/phoenix/end2end/ViewIT.java | 46 
 .../apache/phoenix/schema/MetaDataClient.java   | 11 +++--
 2 files changed, 53 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e546e38f/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 5c0d100..279bbd7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -33,6 +33,7 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Statement;
 import java.util.List;
 import java.util.Properties;
 
@@ -894,6 +895,51 @@ public class ViewIT extends BaseViewIT {
 }
 }
 
+@Test
+public void testQueryWithSeparateConnectionForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Connection conn2 = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement();
+Statement s2 = conn2.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s2, tableName, viewName, indexName);
+}
+}
+
+@Test
+public void testQueryForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s, tableName, viewName, indexName);
+}
+}
+
+private void helpTestQueryForViewOnTableThatHasIndex(Statement s1, Statement s2, String tableName, String viewName, String indexName)
+throws SQLException {
+// Create a table
+s1.execute("create table " + tableName + " (col1 varchar primary key, col2 varchar)");
+
+// Create a view on the table
+s1.execute("create view " + viewName + " (col3 varchar) as select * from " + tableName);
+s1.executeQuery("select * from " + viewName);
+// Create a index on the table
+s1.execute("create index " + indexName + " ON " + tableName + " (col2)");
+
+try (ResultSet rs =
+s2.executeQuery("explain select /*+ INDEX(" + viewName + " " + indexName
++ ") */ * from " + viewName + " where col2 = 'aaa'")) {
+String explainPlan = QueryUtil.getExplainPlan(rs);
+
+// check if the query uses the index
+assertTrue(explainPlan.contains(indexName));
+}
+}
+
 private void validate(String viewName, Connection tenantConn, String[] 
whereClauseArray,
 long[] expectedArray) throws SQLException {
 for (int i = 0; i < whereClauseArray.length; ++i) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e546e38f/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 69d8a56..009289b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -682,7 +682,7 @@ public class MetaDataClient {
                        // In this case, we update the parent table which may in turn pull
                        // in

phoenix git commit: PHOENIX-4712 When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 5c5153557 -> ab5200afe


PHOENIX-4712 When creating an index on a table, meta data cache of views 
related to the table isn't updated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ab5200af
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ab5200af
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ab5200af

Branch: refs/heads/4.x-HBase-1.3
Commit: ab5200afee0f99cbec2f69b7cec0c2643bb41fdd
Parents: 5c51535
Author: Thomas D'Silva 
Authored: Tue May 1 13:32:45 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue May 1 21:19:06 2018 -0700

--
 .../java/org/apache/phoenix/end2end/ViewIT.java | 46 
 .../apache/phoenix/schema/MetaDataClient.java   | 11 +++--
 2 files changed, 53 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ab5200af/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 5c0d100..279bbd7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -33,6 +33,7 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Statement;
 import java.util.List;
 import java.util.Properties;
 
@@ -894,6 +895,51 @@ public class ViewIT extends BaseViewIT {
 }
 }
 
+@Test
+public void testQueryWithSeparateConnectionForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Connection conn2 = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement();
+Statement s2 = conn2.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s2, tableName, viewName, indexName);
+}
+}
+
+@Test
+public void testQueryForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s, tableName, viewName, indexName);
+}
+}
+
+private void helpTestQueryForViewOnTableThatHasIndex(Statement s1, Statement s2, String tableName, String viewName, String indexName)
+throws SQLException {
+// Create a table
+s1.execute("create table " + tableName + " (col1 varchar primary key, col2 varchar)");
+
+// Create a view on the table
+s1.execute("create view " + viewName + " (col3 varchar) as select * from " + tableName);
+s1.executeQuery("select * from " + viewName);
+// Create a index on the table
+s1.execute("create index " + indexName + " ON " + tableName + " (col2)");
+
+try (ResultSet rs =
+s2.executeQuery("explain select /*+ INDEX(" + viewName + " " + indexName
++ ") */ * from " + viewName + " where col2 = 'aaa'")) {
+String explainPlan = QueryUtil.getExplainPlan(rs);
+
+// check if the query uses the index
+assertTrue(explainPlan.contains(indexName));
+}
+}
+
 private void validate(String viewName, Connection tenantConn, String[] 
whereClauseArray,
 long[] expectedArray) throws SQLException {
 for (int i = 0; i < whereClauseArray.length; ++i) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ab5200af/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 69d8a56..009289b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -682,7 +682,7 @@ public class MetaDataClient {
                        // In this case, we update the parent table which may in turn pull
                        // in

phoenix git commit: PHOENIX-4712 When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 6e25a5f74 -> d5921d9fe


PHOENIX-4712 When creating an index on a table, meta data cache of views 
related to the table isn't updated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d5921d9f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d5921d9f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d5921d9f

Branch: refs/heads/4.x-HBase-1.1
Commit: d5921d9fe90d60a24e9d7385e2d987ec80f8a8f7
Parents: 6e25a5f
Author: Thomas D'Silva 
Authored: Tue May 1 13:32:45 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue May 1 21:19:37 2018 -0700

--
 .../java/org/apache/phoenix/end2end/ViewIT.java | 46 
 .../apache/phoenix/schema/MetaDataClient.java   | 11 +++--
 2 files changed, 53 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d5921d9f/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 5c0d100..279bbd7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -33,6 +33,7 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Statement;
 import java.util.List;
 import java.util.Properties;
 
@@ -894,6 +895,51 @@ public class ViewIT extends BaseViewIT {
 }
 }
 
+@Test
+public void testQueryWithSeparateConnectionForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Connection conn2 = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement();
+Statement s2 = conn2.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s2, tableName, viewName, indexName);
+}
+}
+
+@Test
+public void testQueryForViewOnTableThatHasIndex() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl());
+Statement s = conn.createStatement()) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String indexName = generateUniqueName();
+helpTestQueryForViewOnTableThatHasIndex(s, s, tableName, viewName, indexName);
+}
+}
+
+private void helpTestQueryForViewOnTableThatHasIndex(Statement s1, Statement s2, String tableName, String viewName, String indexName)
+throws SQLException {
+// Create a table
+s1.execute("create table " + tableName + " (col1 varchar primary key, col2 varchar)");
+
+// Create a view on the table
+s1.execute("create view " + viewName + " (col3 varchar) as select * from " + tableName);
+s1.executeQuery("select * from " + viewName);
+// Create a index on the table
+s1.execute("create index " + indexName + " ON " + tableName + " (col2)");
+
+try (ResultSet rs =
+s2.executeQuery("explain select /*+ INDEX(" + viewName + " " + indexName
++ ") */ * from " + viewName + " where col2 = 'aaa'")) {
+String explainPlan = QueryUtil.getExplainPlan(rs);
+
+// check if the query uses the index
+assertTrue(explainPlan.contains(indexName));
+}
+}
+
 private void validate(String viewName, Connection tenantConn, String[] 
whereClauseArray,
 long[] expectedArray) throws SQLException {
 for (int i = 0; i < whereClauseArray.length; ++i) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d5921d9f/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 7fecaad..c80b64a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -676,7 +676,7 @@ public class MetaDataClient {
                        // In this case, we update the parent table which may in turn pull
                        // in

Apache-Phoenix | 4.x-HBase-1.1 | Build Successful

2018-05-01 Thread Apache Jenkins Server
4.x-HBase-1.1 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.1

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/lastCompletedBuild/testReport/

Changes
[jtaylor] PHOENIX-4719 Avoid static initialization deadlock while loading regions

[jtaylor] PHOENIX-4718 Decrease overhead of tracking aggregate heap size

[jtaylor] PHOENIX-4721 Adding PK column to a table with multiple secondary indexes

[jtaylor] PHOENIX-4720 SequenceIT is flapping



Build times for last couple of runs. Latest build time is the right most | Legend: blue: normal, red: test failure, gray: timeout


Apache-Phoenix | 4.x-HBase-0.98 | Build Successful

2018-05-01 Thread Apache Jenkins Server
4.x-HBase-0.98 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-0.98

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/lastCompletedBuild/testReport/

Changes
[jtaylor] PHOENIX-4719 Avoid static initialization deadlock while loading regions

[jtaylor] PHOENIX-4718 Decrease overhead of tracking aggregate heap size

[jtaylor] PHOENIX-4721 Adding PK column to a table with multiple secondary indexes

[jtaylor] PHOENIX-4720 SequenceIT is flapping



Build times for last couple of runs. Latest build time is the right most | Legend: blue: normal, red: test failure, gray: timeout


Build failed in Jenkins: Phoenix | Master #2011

2018-05-01 Thread Apache Jenkins Server
See 


Changes:

[jtaylor] PHOENIX-4719 Avoid static initialization deadlock while loading regions

[jtaylor] PHOENIX-4718 Decrease overhead of tracking aggregate heap size

[jtaylor] PHOENIX-4721 Adding PK column to a table with multiple secondary indexes

[jtaylor] PHOENIX-4720 SequenceIT is flapping

--
[...truncated 109.26 KB...]
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 526.197 
s - in org.apache.phoenix.end2end.index.LocalMutableNonTxIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.96 s 
- in org.apache.phoenix.end2end.join.HashJoinMoreIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 425.836 
s - in org.apache.phoenix.end2end.join.HashJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 439.5 s 
- in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 484.613 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,033.246 s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 743.302 
s - in org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.18 s 
- in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.302 s 
- in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.909 s 
- in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.469 s 
- in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.465 s 
- in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.635 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.376 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.34 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.158 
s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.626 
s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 762.409 
s - in org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.225 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 340.273 
s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.032 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,004.449 s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 933.599 
s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
398.792 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, 

Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #346

2018-05-01 Thread Apache Jenkins Server
See 


Changes:

[jtaylor] PHOENIX-4719 Avoid static initialization deadlock while loading regions

[jtaylor] PHOENIX-4718 Decrease overhead of tracking aggregate heap size

[jtaylor] PHOENIX-4721 Adding PK column to a table with multiple secondary indexes

[jtaylor] PHOENIX-4720 SequenceIT is flapping

--
[...truncated 112.58 KB...]
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.742 s 
- in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.605 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.287 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.777 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.812 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.6 s - 
in org.apache.phoenix.end2end.DropSchemaIT
[WARNING] Tests run: 26, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
199.283 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tests run: 26, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
200.243 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 26, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
203.584 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 26, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
214.018 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 131.127 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 222.56 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 231.028 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.258 
s - in org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 613.25 
s - in org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableTxStatsCollectorIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #112

2018-05-01 Thread Apache Jenkins Server
See 


Changes:

[jtaylor] PHOENIX-4719 Avoid static initialization deadlock while loading regions

[jtaylor] PHOENIX-4718 Decrease overhead of tracking aggregate heap size

[jtaylor] PHOENIX-4721 Adding PK column to a table with multiple secondary indexes

[jtaylor] PHOENIX-4720 SequenceIT is flapping

--
[...truncated 106.13 KB...]
[WARNING] Tests run: 14, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
140.973 s - in org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Running org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.086 
s - in org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinCacheIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.822 s 
- in org.apache.phoenix.end2end.join.HashJoinCacheIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.095 s 
- in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinMoreIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 460.721 
s - in org.apache.phoenix.end2end.index.LocalMutableNonTxIndexIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 454.091 
s - in org.apache.phoenix.end2end.index.LocalImmutableNonTxIndexIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinNoIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.257 
s - in org.apache.phoenix.end2end.join.HashJoinMoreIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 487.99 
s - in org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 491.956 
s - in org.apache.phoenix.end2end.index.LocalMutableTxIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 393.476 
s - in org.apache.phoenix.end2end.join.HashJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 393.601 
s - in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 419.931 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 655.606 
s - in org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.049 s 
- in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 893.553 
s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.969 s 
- in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 s 
- in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.338 s 
- in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.394 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.478 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.562 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.553 s 
- in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] 

[1/5] phoenix git commit: PHOENIX-4720 SequenceIT is flapping [Forced Update!]

2018-05-01 Thread pboado
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.14 6fcdd9509 -> 2b22c2986 (forced update)


PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6b5ff317
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6b5ff317
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6b5ff317

Branch: refs/heads/4.x-cdh5.14
Commit: 6b5ff317798981ba38d2cdcc37891d11d346d838
Parents: 4e0b3fb
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6b5ff317/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " INCREMENT BY -1");

 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);

+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " MINVALUE 10");

 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);

+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " INCREMENT BY -1 MINVALUE 10 ");

 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);

+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " MAXVALUE 0");
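
The key trick in this fix is the injectable clock: MyClock pins EnvironmentEdgeManager to a fixed value so both CREATE SEQUENCE calls are forced onto the same timestamp instead of relying on scheduling to race them. A hedged, standalone sketch of that pattern follows; it reuses the Phoenix utility classes imported in the diff, while the advance() helper and the printed values are illustrative additions rather than part of the commit.

import org.apache.phoenix.util.EnvironmentEdge;
import org.apache.phoenix.util.EnvironmentEdgeManager;

public class ManualClockSketch {
    // Same shape as MyClock above, plus an advance() helper for tests that
    // need to move time forward deterministically.
    static class ManualClock extends EnvironmentEdge {
        private volatile long time;
        ManualClock(long time) { this.time = time; }
        void advance(long millis) { time += millis; }
        @Override
        public long currentTime() { return time; }
    }

    public static void main(String[] args) {
        ManualClock clock = new ManualClock(1000L);
        EnvironmentEdgeManager.injectEdge(clock);
        try {
            // Phoenix code that asks EnvironmentEdgeManager for the time now
            // sees 1000 until the clock is advanced explicitly.
            System.out.println(EnvironmentEdgeManager.currentTimeMillis()); // 1000
            clock.advance(5);
            System.out.println(EnvironmentEdgeManager.currentTimeMillis()); // 1005
        } finally {
            // Mirror the test's finally block: always restore the real clock.
            EnvironmentEdgeManager.reset();
        }
    }
}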
  

[3/5] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread pboado
PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro 
Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/540d5a94
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/540d5a94
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/540d5a94

Branch: refs/heads/4.x-cdh5.14
Commit: 540d5a94a7a419251a0eb31566b374ef4958eef0
Parents: 01df0cb
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/540d5a94/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 320c6e7..55de772 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = "ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/540d5a94/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 2fe7b14..65806ae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -433,7 +433,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY (QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" +
 HColumnDescriptor.TTL + "=" + MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();
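
The one-line swap above removes a static reference from QueryConstants to TableProperty while QueryConstants' own constants are still being built. A deadlock of that shape needs nothing Phoenix-specific to reproduce; the self-contained sketch below (hypothetical classes A and B, not the actual Phoenix types) typically hangs when two threads trigger mutually referencing class initializers at the same time.

public class StaticInitDeadlockSketch {
    static class A {
        static final String VALUE;
        static {
            pause();                      // widen the race window
            VALUE = "A sees " + B.VALUE;  // A's initializer waits for B's to finish
        }
    }

    static class B {
        static final String VALUE;
        static {
            pause();
            VALUE = "B sees " + A.VALUE;  // B's initializer waits for A's to finish
        }
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }

    public static void main(String[] args) throws Exception {
        // Each thread first touches a different class, so both class
        // initializers start, then block on each other's initialization lock.
        Thread t1 = new Thread(() -> System.out.println(A.VALUE));
        Thread t2 = new Thread(() -> System.out.println(B.VALUE));
        t1.start();
        t2.start();
        t1.join();
        t2.join(); // typically never returns: classic static-initialization deadlock
    }
}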



[5/5] phoenix git commit: Changes for CDH 5.14.x

2018-05-01 Thread pboado
Changes for CDH 5.14.x


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2b22c298
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2b22c298
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2b22c298

Branch: refs/heads/4.x-cdh5.14
Commit: 2b22c2986111b7e11edf7d89cd7707b0412772ee
Parents: 6b5ff31
Author: Pedro Boado 
Authored: Sat Mar 10 17:54:04 2018 +
Committer: Pedro Boado 
Committed: Tue May 1 22:52:28 2018 +0100

--
 phoenix-assembly/pom.xml|  2 +-
 phoenix-client/pom.xml  |  2 +-
 phoenix-core/pom.xml|  2 +-
 .../hadoop/hbase/ipc/PhoenixRpcScheduler.java   | 34 ++--
 phoenix-flume/pom.xml   |  2 +-
 phoenix-hive/pom.xml|  2 +-
 phoenix-kafka/pom.xml   |  2 +-
 phoenix-load-balancer/pom.xml   |  2 +-
 phoenix-parcel/pom.xml  |  2 +-
 phoenix-pherf/pom.xml   |  2 +-
 phoenix-pig/pom.xml |  2 +-
 phoenix-queryserver-client/pom.xml  |  2 +-
 phoenix-queryserver/pom.xml |  2 +-
 phoenix-server/pom.xml  |  2 +-
 phoenix-spark/pom.xml   |  2 +-
 phoenix-tracing-webapp/pom.xml  |  2 +-
 pom.xml |  4 +--
 17 files changed, 49 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2b22c298/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 55a9a6e..c013cf0 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.14.2-SNAPSHOT
   
   phoenix-assembly
   Phoenix Assembly

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2b22c298/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 2454de6..6de0f65 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.14.2-SNAPSHOT
   
   phoenix-client
   Phoenix Client

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2b22c298/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index e1f8e2a..d17facf 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.14.2-SNAPSHOT
   
   phoenix-core
   Phoenix Core

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2b22c298/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
index 4fdddf5..d1f05f8 100644
--- 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
+++ 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
@@ -124,6 +124,36 @@ public class PhoenixRpcScheduler extends RpcScheduler {
 public void setMetadataExecutorForTesting(RpcExecutor executor) {
 this.metadataCallExecutor = executor;
 }
-
-
+
+@Override
+public int getReadQueueLength() {
+return delegate.getReadQueueLength();
+}
+
+@Override
+public int getWriteQueueLength() {
+return delegate.getWriteQueueLength();
+}
+
+@Override
+public int getScanQueueLength() {
+return delegate.getScanQueueLength();
+}
+
+@Override
+public int getActiveReadRpcHandlerCount() {
+return delegate.getActiveReadRpcHandlerCount();
+}
+
+@Override
+public int getActiveWriteRpcHandlerCount() {
+return delegate.getActiveWriteRpcHandlerCount();
+}
+
+@Override
+public int getActiveScanRpcHandlerCount() {
+return delegate.getActiveScanRpcHandlerCount();
+}
+
+
 }
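
The additions to PhoenixRpcScheduler above are pure delegation: the queue-length and active-handler accessors, presumably required by the RpcScheduler API in this newer HBase/CDH line, are simply forwarded to the wrapped scheduler. A minimal, hypothetical sketch of that wrapper shape (invented names, not the HBase API):

interface QueueMetrics {
    int getReadQueueLength();
    int getWriteQueueLength();
}

// The scheduler actually holding the call queues.
class BaseScheduler implements QueueMetrics {
    public int getReadQueueLength()  { return 3; }
    public int getWriteQueueLength() { return 5; }
}

// A wrapper that adds its own behaviour elsewhere must still forward every
// metrics accessor, or callers would read meaningless defaults instead of
// the delegate's real queue state.
class ForwardingScheduler implements QueueMetrics {
    private final QueueMetrics delegate;
    ForwardingScheduler(QueueMetrics delegate) { this.delegate = delegate; }
    public int getReadQueueLength()  { return delegate.getReadQueueLength(); }
    public int getWriteQueueLength() { return delegate.getWriteQueueLength(); }

    public static void main(String[] args) {
        QueueMetrics metrics = new ForwardingScheduler(new BaseScheduler());
        System.out.println(metrics.getReadQueueLength());  // 3, from the delegate
        System.out.println(metrics.getWriteQueueLength()); // 5, from the delegate
    }
}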

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2b22c298/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index d61a9aa..e0532cb 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   
  

[2/5] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread pboado
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4e0b3fbc
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4e0b3fbc
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4e0b3fbc

Branch: refs/heads/4.x-cdh5.14
Commit: 4e0b3fbc08b5d4289f2b3c04496a8282602ef6ea
Parents: ea6771d
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e0b3fbc/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName 
+ " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName 
+ " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL 
PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e0b3fbc/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index a3d2baf..69d8a56 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3381,13 +3381,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 

[4/5] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread pboado
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ea6771df
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ea6771df
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ea6771df

Branch: refs/heads/4.x-cdh5.14
Commit: ea6771df9cb1110cdf8d0bf8580d454cb86d395a
Parents: 540d5a9
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--
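
Per the file list above, the ServerAggregators hierarchy is split into size-tracking and non-size-tracking variants, so per-row heap-size accounting is only paid when a memory limit actually has to be enforced. A rough, hypothetical sketch of that shape (invented names, not the real Phoenix classes):

public class SizeTrackingSketch {

    interface ServerSideAggregator {
        void aggregate(long value);
        long estimatedHeapSizeBytes();
    }

    static final class SumAggregator implements ServerSideAggregator {
        private long sum;
        public void aggregate(long value) { sum += value; }
        public long estimatedHeapSizeBytes() { return 16; } // fixed-size state
    }

    // Used when a spill/size limit applies: checks the estimate on every row.
    static final class SizeTrackingAggregators {
        private final ServerSideAggregator agg;
        private final long maxBytes;
        SizeTrackingAggregators(ServerSideAggregator agg, long maxBytes) {
            this.agg = agg;
            this.maxBytes = maxBytes;
        }
        void aggregate(long value) {
            agg.aggregate(value);
            if (agg.estimatedHeapSizeBytes() > maxBytes) {
                throw new IllegalStateException("aggregate state over " + maxBytes + " bytes");
            }
        }
    }

    // Used on the common path: no per-row size bookkeeping at all.
    static final class NonSizeTrackingAggregators {
        private final ServerSideAggregator agg;
        NonSizeTrackingAggregators(ServerSideAggregator agg) { this.agg = agg; }
        void aggregate(long value) { agg.aggregate(value); }
    }

    public static void main(String[] args) {
        NonSizeTrackingAggregators fast = new NonSizeTrackingAggregators(new SumAggregator());
        SizeTrackingAggregators limited = new SizeTrackingAggregators(new SumAggregator(), 1024);
        for (long v = 0; v < 10; v++) {
            fast.aggregate(v);     // cheap path
            limited.aggregate(v);  // size-checked path
        }
    }
}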


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ea6771df/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), 
Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ea6771df/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 201bcec..a6fa6a5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -124,53 +124,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-

[5/5] phoenix git commit: Changes for CDH 5.13.x

2018-05-01 Thread pboado
Changes for CDH 5.13.x


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d1a0df37
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d1a0df37
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d1a0df37

Branch: refs/heads/4.x-cdh5.13
Commit: d1a0df37ac7722903d4e3222e5499805f6008c93
Parents: 6b5ff31
Author: Pedro Boado 
Authored: Sat Mar 10 17:54:04 2018 +
Committer: Pedro Boado 
Committed: Tue May 1 22:51:25 2018 +0100

--
 phoenix-assembly/pom.xml|  2 +-
 phoenix-client/pom.xml  |  2 +-
 phoenix-core/pom.xml|  2 +-
 .../hadoop/hbase/ipc/PhoenixRpcScheduler.java   | 34 ++--
 phoenix-flume/pom.xml   |  2 +-
 phoenix-hive/pom.xml|  2 +-
 phoenix-kafka/pom.xml   |  2 +-
 phoenix-load-balancer/pom.xml   |  2 +-
 phoenix-parcel/pom.xml  |  2 +-
 phoenix-pherf/pom.xml   |  2 +-
 phoenix-pig/pom.xml |  2 +-
 phoenix-queryserver-client/pom.xml  |  2 +-
 phoenix-queryserver/pom.xml |  2 +-
 phoenix-server/pom.xml  |  2 +-
 phoenix-spark/pom.xml   |  2 +-
 phoenix-tracing-webapp/pom.xml  |  2 +-
 pom.xml |  4 +--
 17 files changed, 49 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d1a0df37/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 55a9a6e..f0cd238 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.13.2-SNAPSHOT
   
   phoenix-assembly
   Phoenix Assembly

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d1a0df37/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 2454de6..b4da311 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.13.2-SNAPSHOT
   
   phoenix-client
   Phoenix Client

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d1a0df37/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index e1f8e2a..19ddeb5 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.13.2-SNAPSHOT
   
   phoenix-core
   Phoenix Core

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d1a0df37/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
index 4fdddf5..d1f05f8 100644
--- 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
+++ 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
@@ -124,6 +124,36 @@ public class PhoenixRpcScheduler extends RpcScheduler {
 public void setMetadataExecutorForTesting(RpcExecutor executor) {
 this.metadataCallExecutor = executor;
 }
-
-
+
+@Override
+public int getReadQueueLength() {
+return delegate.getReadQueueLength();
+}
+
+@Override
+public int getWriteQueueLength() {
+return delegate.getWriteQueueLength();
+}
+
+@Override
+public int getScanQueueLength() {
+return delegate.getScanQueueLength();
+}
+
+@Override
+public int getActiveReadRpcHandlerCount() {
+return delegate.getActiveReadRpcHandlerCount();
+}
+
+@Override
+public int getActiveWriteRpcHandlerCount() {
+return delegate.getActiveWriteRpcHandlerCount();
+}
+
+@Override
+public int getActiveScanRpcHandlerCount() {
+return delegate.getActiveScanRpcHandlerCount();
+}
+
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d1a0df37/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index d61a9aa..bcc037c 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   
  

[1/5] phoenix git commit: PHOENIX-4720 SequenceIT is flapping [Forced Update!]

2018-05-01 Thread pboado
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.13 30443514b -> d1a0df37a (forced update)


PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6b5ff317
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6b5ff317
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6b5ff317

Branch: refs/heads/4.x-cdh5.13
Commit: 6b5ff317798981ba38d2cdcc37891d11d346d838
Parents: 4e0b3fb
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6b5ff317/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
  

[3/5] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread pboado
PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro 
Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/540d5a94
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/540d5a94
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/540d5a94

Branch: refs/heads/4.x-cdh5.13
Commit: 540d5a94a7a419251a0eb31566b374ef4958eef0
Parents: 01df0cb
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/540d5a94/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 320c6e7..55de772 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = "ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/540d5a94/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 2fe7b14..65806ae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -433,7 +433,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 HColumnDescriptor.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();



[2/5] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread pboado
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4e0b3fbc
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4e0b3fbc
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4e0b3fbc

Branch: refs/heads/4.x-cdh5.13
Commit: 4e0b3fbc08b5d4289f2b3c04496a8282602ef6ea
Parents: ea6771d
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e0b3fbc/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName 
+ " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName 
+ " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL 
PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e0b3fbc/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index a3d2baf..69d8a56 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3381,13 +3381,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 

[4/5] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread pboado
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ea6771df
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ea6771df
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ea6771df

Branch: refs/heads/4.x-cdh5.13
Commit: ea6771df9cb1110cdf8d0bf8580d454cb86d395a
Parents: 540d5a9
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ea6771df/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), 
Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ea6771df/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 201bcec..a6fa6a5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -124,53 +124,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-

[3/5] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread pboado
PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro 
Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/540d5a94
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/540d5a94
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/540d5a94

Branch: refs/heads/4.x-cdh5.12
Commit: 540d5a94a7a419251a0eb31566b374ef4958eef0
Parents: 01df0cb
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/540d5a94/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 320c6e7..55de772 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = "ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/540d5a94/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 2fe7b14..65806ae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -433,7 +433,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 HColumnDescriptor.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();



[1/5] phoenix git commit: PHOENIX-4720 SequenceIT is flapping [Forced Update!]

2018-05-01 Thread pboado
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.12 41972c049 -> 5de0ec17e (forced update)


PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6b5ff317
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6b5ff317
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6b5ff317

Branch: refs/heads/4.x-cdh5.12
Commit: 6b5ff317798981ba38d2cdcc37891d11d346d838
Parents: 4e0b3fb
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6b5ff317/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
  

[2/5] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread pboado
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4e0b3fbc
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4e0b3fbc
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4e0b3fbc

Branch: refs/heads/4.x-cdh5.12
Commit: 4e0b3fbc08b5d4289f2b3c04496a8282602ef6ea
Parents: ea6771d
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e0b3fbc/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName 
+ " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName 
+ " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL 
PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e0b3fbc/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index a3d2baf..69d8a56 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3381,13 +3381,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 

[4/5] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread pboado
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ea6771df
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ea6771df
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ea6771df

Branch: refs/heads/4.x-cdh5.12
Commit: ea6771df9cb1110cdf8d0bf8580d454cb86d395a
Parents: 540d5a9
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ea6771df/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), 
Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ea6771df/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 201bcec..a6fa6a5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -124,53 +124,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-

[5/5] phoenix git commit: Changes for CDH 5.12.x

2018-05-01 Thread pboado
Changes for CDH 5.12.x


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5de0ec17
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5de0ec17
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5de0ec17

Branch: refs/heads/4.x-cdh5.12
Commit: 5de0ec17e3dc89a4300edf0bb9d78619066b68cb
Parents: 6b5ff31
Author: Pedro Boado 
Authored: Sat Mar 10 17:54:04 2018 +
Committer: Pedro Boado 
Committed: Tue May 1 22:50:33 2018 +0100

--
 phoenix-assembly/pom.xml|  2 +-
 phoenix-client/pom.xml  |  2 +-
 phoenix-core/pom.xml|  2 +-
 .../hadoop/hbase/ipc/PhoenixRpcScheduler.java   | 34 ++--
 phoenix-flume/pom.xml   |  2 +-
 phoenix-hive/pom.xml|  2 +-
 phoenix-kafka/pom.xml   |  2 +-
 phoenix-load-balancer/pom.xml   |  2 +-
 phoenix-parcel/pom.xml  |  2 +-
 phoenix-pherf/pom.xml   |  2 +-
 phoenix-pig/pom.xml |  2 +-
 phoenix-queryserver-client/pom.xml  |  2 +-
 phoenix-queryserver/pom.xml |  2 +-
 phoenix-server/pom.xml  |  2 +-
 phoenix-spark/pom.xml   |  2 +-
 phoenix-tracing-webapp/pom.xml  |  2 +-
 pom.xml |  4 +--
 17 files changed, 49 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5de0ec17/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 55a9a6e..14225ee 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.12.2-SNAPSHOT
   
   phoenix-assembly
   Phoenix Assembly

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5de0ec17/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 2454de6..e211008 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.12.2-SNAPSHOT
   
   phoenix-client
   Phoenix Client

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5de0ec17/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index e1f8e2a..2d837a2 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   
 org.apache.phoenix
 phoenix
-4.14.0-cdh5.11.2-SNAPSHOT
+4.14.0-cdh5.12.2-SNAPSHOT
   
   phoenix-core
   Phoenix Core

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5de0ec17/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
index 4fdddf5..d1f05f8 100644
--- 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
+++ 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java
@@ -124,6 +124,36 @@ public class PhoenixRpcScheduler extends RpcScheduler {
 public void setMetadataExecutorForTesting(RpcExecutor executor) {
 this.metadataCallExecutor = executor;
 }
-
-
+
+@Override
+public int getReadQueueLength() {
+return delegate.getReadQueueLength();
+}
+
+@Override
+public int getWriteQueueLength() {
+return delegate.getWriteQueueLength();
+}
+
+@Override
+public int getScanQueueLength() {
+return delegate.getScanQueueLength();
+}
+
+@Override
+public int getActiveReadRpcHandlerCount() {
+return delegate.getActiveReadRpcHandlerCount();
+}
+
+@Override
+public int getActiveWriteRpcHandlerCount() {
+return delegate.getActiveWriteRpcHandlerCount();
+}
+
+@Override
+public int getActiveScanRpcHandlerCount() {
+return delegate.getActiveScanRpcHandlerCount();
+}
+
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5de0ec17/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index d61a9aa..8a78010 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   
  

[4/4] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread jamestaylor
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ec5929a9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ec5929a9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ec5929a9

Branch: refs/heads/5.x-HBase-2.0
Commit: ec5929a9537605caea2eedd427f26ab82cd238aa
Parents: 08ce089
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:46:13 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ec5929a9/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index 3a9f787..472331b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.Table;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName 
+ " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName 
+ " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL 
PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ec5929a9/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 60a7ca9..2333acc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3382,13 +3382,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 
IndexUtil.getIndexColumnDataType(colDef.isNull(), 

[2/4] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread jamestaylor
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/08ce089c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/08ce089c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/08ce089c

Branch: refs/heads/5.x-HBase-2.0
Commit: 08ce089c3f14d531dc2c78c8b1788b4f93508ce0
Parents: 4e2901c
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:46:13 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/08ce089c/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), 
Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/08ce089c/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 5d89e8e..8c878cb 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -132,53 +132,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver im
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-

[3/4] phoenix git commit: PHOENIX-4720 SequenceIT is flapping

2018-05-01 Thread jamestaylor
PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/87564a86
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/87564a86
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/87564a86

Branch: refs/heads/5.x-HBase-2.0
Commit: 87564a8647705d85921126ae538ba922a4c69037
Parents: ec5929a
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:46:13 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--
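
The SequenceIT change below makes the flapping test deterministic by injecting a MyClock EnvironmentEdge, so that both CREATE SEQUENCE statements observe the same timestamp instead of depending on wall-clock timing. As a standalone illustration of that injectable-clock pattern, here is a small sketch with generic names (TimeSource, Clock, FixedClock); it is not the Phoenix EnvironmentEdge/EnvironmentEdgeManager API itself.

// InjectableClockSketch.java (illustrative only; not the Phoenix EnvironmentEdge API)
public class InjectableClockSketch {

    /** Abstraction over System.currentTimeMillis() so tests can pin the time. */
    interface TimeSource {
        long currentTime();
    }

    /** Global holder that production code reads the time through. */
    static final class Clock {
        private static volatile TimeSource source = System::currentTimeMillis;
        static void inject(TimeSource s) { source = s; }
        static void reset() { source = System::currentTimeMillis; }
        static long now() { return source.currentTime(); }
    }

    /** A clock that always reports the same instant, for deterministic tests. */
    static final class FixedClock implements TimeSource {
        private final long time;
        FixedClock(long time) { this.time = time; }
        @Override public long currentTime() { return time; }
    }

    public static void main(String[] args) {
        Clock.inject(new FixedClock(1000L));
        try {
            // Two operations issued back to back now observe an identical timestamp,
            // which is exactly the situation the new SequenceIT test pins down.
            long first = Clock.now();
            long second = Clock.now();
            System.out.println(first == second); // prints: true
        } finally {
            Clock.reset(); // never leak the pinned clock into other tests
        }
    }
}

The try/finally reset mirrors what the new test does with EnvironmentEdgeManager.reset(), so a pinned clock cannot bleed into later tests.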


http://git-wip-us.apache.org/repos/asf/phoenix/blob/87564a86/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MIN_VALUE, 
Long.MIN_VALUE + 1, 

[1/4] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/5.x-HBase-2.0 8e2a64e22 -> 87564a864


PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro 
Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4e2901ca
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4e2901ca
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4e2901ca

Branch: refs/heads/5.x-HBase-2.0
Commit: 4e2901ca2ff7b2595961e73653b4d24dc6d4a30c
Parents: 8e2a64e
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:46:12 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--
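
The QueryConstants hunk further below replaces a reference to TableProperty.COLUMN_ENCODED_BYTES.toString() with a plain String constant declared in PhoenixDatabaseMetaData. A compile-time String constant is inlined by the compiler, so reading it never triggers class initialization of the declaring class; that removes the kind of initializer dependency that can deadlock when two threads initialize the involved classes in opposite orders. The sketch below illustrates the hazard and the constant-based fix in isolation; the class names are generic and the exact cycle inside Phoenix is not reconstructed here.

// StaticInitCycleSketch.java (illustrative only; class names are generic)
public class StaticInitCycleSketch {

    // A's static initializer forces B to initialize ...
    static final class A {
        static final String NAME = "A-" + B.tag(); // method call => triggers B.<clinit>
        static String tag() { return NAME; }
    }

    // ... and B's static initializer forces A to initialize: a cycle. If two threads
    // start initializing A and B concurrently, each can end up holding its own class
    // initialization lock while waiting for the other's, deadlocking both threads.
    static final class B {
        static final String NAME = "B-" + A.tag(); // method call => triggers A.<clinit>
        static String tag() { return NAME; }
    }

    // The fix pattern: a compile-time String constant is copied into the using class
    // by javac, so reading it does not initialize the declaring class at all.
    static final class Constants {
        static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
    }

    // Safe: no dependency edge on Constants' initializer is created here.
    static final String SAFE_REFERENCE = Constants.COLUMN_ENCODED_BYTES + " = 0";

    public static void main(String[] args) {
        // Only the constant-based path is exercised; touching A and B from two
        // threads at once is what can deadlock.
        System.out.println(SAFE_REFERENCE); // prints: COLUMN_ENCODED_BYTES = 0
    }
}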


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e2901ca/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 415f748..68f94c1 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -291,6 +291,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = 
"ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = 
Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = 
Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e2901ca/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 479562a..296b504 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -434,7 +434,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 ColumnFamilyDescriptorBuilder.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();



[4/4] phoenix git commit: PHOENIX-4720 SequenceIT is flapping

2018-05-01 Thread jamestaylor
PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/882c059e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/882c059e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/882c059e

Branch: refs/heads/4.x-HBase-0.98
Commit: 882c059ebc5fe3a90dabe89d7fd03995b2ba7239
Parents: 8c9f2f6
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:42:09 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/882c059e/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MIN_VALUE, 
Long.MIN_VALUE + 1, 

[2/4] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread jamestaylor
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/dcbaeac5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/dcbaeac5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/dcbaeac5

Branch: refs/heads/4.x-HBase-0.98
Commit: dcbaeac5613aa1c91adf070e51e9a3e96ccf7c0a
Parents: b85e09a
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:42:06 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 96 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++-
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/dcbaeac5/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), 
Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/dcbaeac5/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index b428663..dcd5897 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -123,54 +123,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-   

[3/4] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread jamestaylor
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8c9f2f68
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8c9f2f68
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8c9f2f68

Branch: refs/heads/4.x-HBase-0.98
Commit: 8c9f2f68d91bd11e057fcee6abae9ea60acae124
Parents: dcbaeac
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:42:08 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--
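
The new AlterTableWithViewsIT test in the diff that follows creates a table with two secondary indexes and then extends its primary key with ALTER TABLE ... ADD ... PRIMARY KEY, the statement that previously failed. Here is the same scenario condensed into a self-contained sketch; the JDBC URL and the T1/I1/I2 names are placeholders, while the DDL mirrors the test.

// AlterTableExtendPkScenario.java (condensed version of the scenario in the test below)
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AlterTableExtendPkScenario {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Base table whose primary key will be extended later.
            stmt.execute("CREATE TABLE T1 ("
                    + "ORG_ID CHAR(15) NOT NULL, "
                    + "PARTITION_KEY CHAR(3) NOT NULL, "
                    + "ACTIVITY_DATE DATE NOT NULL, "
                    + "FK1_ID CHAR(15) NOT NULL, "
                    + "FK2_ID CHAR(15) NOT NULL, "
                    + "TYPE VARCHAR NOT NULL, "
                    + "IS_OPEN BOOLEAN "
                    + "CONSTRAINT PKVIEW PRIMARY KEY "
                    + "(ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE))");
            // Two secondary indexes: the "multiple secondary indexes" part of the JIRA title.
            stmt.execute("CREATE INDEX I1 ON T1 (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)");
            stmt.execute("CREATE INDEX I2 ON T1 (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)");
            // Appending a nullable column to the primary key; this is the statement
            // that failed before PHOENIX-4721.
            stmt.execute("ALTER TABLE T1 ADD SOURCE VARCHAR(25) NULL PRIMARY KEY");
        }
    }
}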


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8c9f2f68/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName 
+ " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName 
+ " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL 
PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/8c9f2f68/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index c8f7d5e..372371b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3376,13 +3376,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 

[1/4] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 6a893aeb9 -> 882c059eb


PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro 
Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b85e09ad
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b85e09ad
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b85e09ad

Branch: refs/heads/4.x-HBase-0.98
Commit: b85e09ad6229e11143c6a88896cddce84361e548
Parents: 6a893ae
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:36:17 2018 -0700

--
 .../java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java| 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java   | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b85e09ad/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 0e7f05e..29673b5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = 
"ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = 
Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = 
Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/b85e09ad/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index eba6eb4..bb45324 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -432,8 +432,8 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 HColumnDescriptor.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
-
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
+
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();
 public static final String LAST_SCAN = "LAST_SCAN";



[3/4] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread jamestaylor
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/706f5462
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/706f5462
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/706f5462

Branch: refs/heads/4.x-HBase-1.1
Commit: 706f54620611eb1f858c4cb75e5d84d8c316cb39
Parents: d4d0343
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:31:50 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/706f5462/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName 
+ " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName 
+ " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL 
PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/706f5462/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index dcbe7e6..7fecaad 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3374,13 +3374,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 

[1/4] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 b14098b1a -> 6e25a5f74


PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d4d0343e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d4d0343e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d4d0343e

Branch: refs/heads/4.x-HBase-1.1
Commit: d4d0343ee1550ed07e9c86166e15986528ad1693
Parents: 9db45e3
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:31:49 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d4d0343e/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), 
Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d4d0343e/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 201bcec..a6fa6a5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -124,53 +124,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators 

[2/4] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread jamestaylor
PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro 
Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9db45e3e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9db45e3e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9db45e3e

Branch: refs/heads/4.x-HBase-1.1
Commit: 9db45e3e2ea9dc7c8f326d0e30a163fd5572358b
Parents: b14098b
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:31:49 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9db45e3e/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 320c6e7..55de772 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = 
"ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = 
Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = 
Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9db45e3e/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 2fe7b14..65806ae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -433,7 +433,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 HColumnDescriptor.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();



[4/4] phoenix git commit: PHOENIX-4720 SequenceIT is flapping

2018-05-01 Thread jamestaylor
PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6e25a5f7
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6e25a5f7
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6e25a5f7

Branch: refs/heads/4.x-HBase-1.1
Commit: 6e25a5f74b211783bad07a2cca5b2828e9c80a52
Parents: 706f546
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:31:50 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6e25a5f7/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MIN_VALUE, 
Long.MIN_VALUE + 1, 

[2/4] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread jamestaylor
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ea6771df
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ea6771df
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ea6771df

Branch: refs/heads/4.x-cdh5.11
Commit: ea6771df9cb1110cdf8d0bf8580d454cb86d395a
Parents: 540d5a9
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ea6771df/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), 
Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ea6771df/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 201bcec..a6fa6a5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -124,53 +124,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-

[4/4] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread jamestaylor
PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro 
Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/540d5a94
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/540d5a94
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/540d5a94

Branch: refs/heads/4.x-cdh5.11
Commit: 540d5a94a7a419251a0eb31566b374ef4958eef0
Parents: 01df0cb
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/540d5a94/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 320c6e7..55de772 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = 
"ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = 
Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = 
Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/540d5a94/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 2fe7b14..65806ae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -433,7 +433,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 HColumnDescriptor.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();



[1/4] phoenix git commit: PHOENIX-4720 SequenceIT is flapping

2018-05-01 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.11 01df0cb59 -> 6b5ff3177


PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6b5ff317
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6b5ff317
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6b5ff317

Branch: refs/heads/4.x-cdh5.11
Commit: 6b5ff317798981ba38d2cdcc37891d11d346d838
Parents: 4e0b3fb
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6b5ff317/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
 
 

[3/4] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread jamestaylor
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4e0b3fbc
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4e0b3fbc
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4e0b3fbc

Branch: refs/heads/4.x-cdh5.11
Commit: 4e0b3fbc08b5d4289f2b3c04496a8282602ef6ea
Parents: ea6771d
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:30:11 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e0b3fbc/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName 
+ " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName 
+ " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL 
PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4e0b3fbc/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index a3d2baf..69d8a56 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3381,13 +3381,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 

[2/4] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread jamestaylor
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/05f53349
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/05f53349
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/05f53349

Branch: refs/heads/4.x-HBase-1.2
Commit: 05f53349709cbb917647b3a6c9348c9068c68dd6
Parents: 1f01869
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:28:33 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/05f53349/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file
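Beyond the test change above, the diffstat for this commit shows where the savings come from: ServerAggregators is split into SizeTrackingServerAggregators and NonSizeTrackingServerAggregators, so per-row heap accounting is only done when a limit actually has to be enforced. A rough sketch of that shape, using illustrative names that are assumptions rather than the real Phoenix classes:

// Sketch only: pick a size-tracking implementation when a heap limit must be
// enforced; otherwise skip the per-row sizing work entirely.
abstract class AggregatorsSketch {
    abstract void aggregate(byte[] row);

    static AggregatorsSketch create(boolean trackSize, long maxHeapBytes) {
        return trackSize ? new SizeTracking(maxHeapBytes) : new NonSizeTracking();
    }
}

final class NonSizeTracking extends AggregatorsSketch {
    @Override
    void aggregate(byte[] row) {
        // delegate straight to the individual aggregators; no sizing overhead
    }
}

final class SizeTracking extends AggregatorsSketch {
    private final long maxHeapBytes;
    private long usedBytes;

    SizeTracking(long maxHeapBytes) {
        this.maxHeapBytes = maxHeapBytes;
    }

    @Override
    void aggregate(byte[] row) {
        // delegate, then add each aggregator's reported size estimate
        usedBytes += row.length; // stand-in for the real per-aggregator estimate
        if (usedBytes > maxHeapBytes) {
            throw new IllegalStateException("aggregation exceeded " + maxHeapBytes + " bytes");
        }
    }
}

The new entries in QueryServices and QueryServicesOptions in the diffstat suggest the choice between the two variants is driven by configuration.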

http://git-wip-us.apache.org/repos/asf/phoenix/blob/05f53349/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 201bcec..a6fa6a5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -124,53 +124,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-

[4/4] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread jamestaylor
PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1f018696
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1f018696
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1f018696

Branch: refs/heads/4.x-HBase-1.2
Commit: 1f018696d467fb149c913be34ce338b529baa299
Parents: 7f5d8d3
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:28:33 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1f018696/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 320c6e7..55de772 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = "ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1f018696/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 2fe7b14..65806ae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -433,7 +433,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 HColumnDescriptor.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();
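Some context on the one-line QueryConstants change above: a class whose static initializer touches another class that is being initialized at the same time can deadlock, because each thread holds its own class's initialization lock while waiting for the other's (JLS 12.4.2). A minimal, self-contained illustration with assumed names, not Phoenix code:

// May hang if the two classes are first initialized concurrently in opposite
// order: each thread holds one class's <clinit> lock and waits on the other's.
class ConfigA {
    static final String VALUE = ConfigB.name();   // forces ConfigB initialization
    static String value() { return VALUE; }
}

class ConfigB {
    static final String NAME = ConfigA.value();   // forces ConfigA initialization
    static String name() { return NAME; }
}

public class ClinitDeadlockDemo {
    public static void main(String[] args) {
        new Thread(() -> System.out.println(ConfigA.value())).start();
        new Thread(() -> System.out.println(ConfigB.name())).start();
    }
}

The patch breaks the analogous cycle by having QueryConstants read COLUMN_ENCODED_BYTES as a plain String constant on PhoenixDatabaseMetaData, which its initializer already references, instead of pulling TableProperty into class initialization.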



[3/4] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread jamestaylor
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/fcb2678d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/fcb2678d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/fcb2678d

Branch: refs/heads/4.x-HBase-1.2
Commit: fcb2678d56276473f8fb2a137be99335938f03c6
Parents: 05f5334
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:28:33 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/fcb2678d/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName + " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName + " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }
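One way the test above could be extended to show the end-to-end effect (an illustrative sketch, not part of this commit): after the ALTER TABLE succeeds, upsert a row that populates the new SOURCE column and read it back through one of the indexes. This reuses conn, tableName and indexName1 from the test body, assumes assertTrue/assertEquals are available as in the rest of the class, and assumes the hinted plan actually uses the index:

// Hypothetical follow-on check inside the try block of the test above.
conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES ("
    + "'org000000000001', 'p01', CURRENT_DATE(), 'fk1000000000001', "
    + "'fk2000000000001', 'login', true, 'mobile')");
conn.commit();

ResultSet rs = conn.createStatement().executeQuery(
    "SELECT /*+ INDEX(" + tableName + " " + indexName1 + ") */ IS_OPEN, SOURCE"
    + " FROM " + tableName + " WHERE FK1_ID = 'fk1000000000001'");
assertTrue(rs.next());
assertTrue(rs.getBoolean(1));
assertEquals("mobile", rs.getString(2));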

http://git-wip-us.apache.org/repos/asf/phoenix/blob/fcb2678d/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index a3d2baf..69d8a56 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3381,13 +3381,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 

[1/4] phoenix git commit: PHOENIX-4720 SequenceIT is flapping

2018-05-01 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 7f5d8d371 -> cfac2daac


PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cfac2daa
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cfac2daa
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cfac2daa

Branch: refs/heads/4.x-HBase-1.2
Commit: cfac2daac86fbb192b6bc21c9179220fcf1f1ebd
Parents: fcb2678
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:28:33 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cfac2daa/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
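The pattern above generalizes beyond Phoenix: the test flapped because sequence timestamps come from the wall clock, so the fix pins time with an injectable EnvironmentEdge and gives each sub-case its own sequence name. A generic version of that injectable-clock idea, with assumed names rather than the Phoenix utility classes:

// Sketch of an injectable time source for deterministic tests (assumed API,
// loosely modeled on the EnvironmentEdge pattern used in the diff above).
final class TestClock {
    private volatile long nowMillis;

    TestClock(long startMillis) {
        this.nowMillis = startMillis;
    }

    long currentTime() {          // production code calls this instead of System.currentTimeMillis()
        return nowMillis;
    }

    void advance(long millis) {   // tests move time forward explicitly
        nowMillis += millis;
    }
}

Pinning the clock turns "two CREATE SEQUENCE statements at the same timestamp" into a scenario the test produces on every run, rather than only when the build machine happens to be fast enough.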
 

[1/4] phoenix git commit: PHOENIX-4720 SequenceIT is flapping

2018-05-01 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master d10151e33 -> de87cf50a


PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/de87cf50
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/de87cf50
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/de87cf50

Branch: refs/heads/master
Commit: de87cf50a6dfd01113f06f5f36a9d84099459b31
Parents: c47a63d
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:26:39 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/de87cf50/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
 
 

[4/4] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread jamestaylor
PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/92e594d8
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/92e594d8
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/92e594d8

Branch: refs/heads/master
Commit: 92e594d8a85d3b4c4f44070f00f7e455d287aa21
Parents: d10151e
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:26:39 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/92e594d8/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 320c6e7..55de772 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = "ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/92e594d8/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 2fe7b14..65806ae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -433,7 +433,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 HColumnDescriptor.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();



[3/4] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread jamestaylor
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c47a63db
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c47a63db
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c47a63db

Branch: refs/heads/master
Commit: c47a63db938481cb3efe2f14e4a9544f27036568
Parents: 4096fc4
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:26:39 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c47a63db/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName + " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName + " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c47a63db/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index a3d2baf..69d8a56 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3381,13 +3381,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 
IndexUtil.getIndexColumnDataType(colDef.isNull(), 

[2/4] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread jamestaylor
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4096fc48
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4096fc48
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4096fc48

Branch: refs/heads/master
Commit: 4096fc4893adeca871e97d5d5d60d2f332b572c4
Parents: 92e594d
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 11:26:39 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4096fc48/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4096fc48/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 201bcec..a6fa6a5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -124,53 +124,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-

[2/4] phoenix git commit: PHOENIX-4718 Decrease overhead of tracking aggregate heap size

2018-05-01 Thread jamestaylor
PHOENIX-4718 Decrease overhead of tracking aggregate heap size


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/dec9f289
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/dec9f289
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/dec9f289

Branch: refs/heads/4.x-HBase-1.3
Commit: dec9f2897a62709d0b9b73670ea73c8438997b03
Parents: 25e47ea
Author: James Taylor 
Authored: Mon Apr 30 22:03:38 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 09:29:17 2018 -0700

--
 .../phoenix/end2end/SpillableGroupByIT.java | 17 ++--
 .../GroupedAggregateRegionObserver.java | 95 ++--
 .../UngroupedAggregateRegionObserver.java   | 48 +-
 .../phoenix/execute/ClientAggregatePlan.java|  2 +-
 .../expression/aggregator/Aggregator.java   |  9 +-
 .../expression/aggregator/Aggregators.java  |  3 +-
 .../expression/aggregator/BaseAggregator.java   |  4 +
 .../aggregator/ClientAggregators.java   |  3 +-
 .../DistinctValueWithCountServerAggregator.java | 15 ++--
 .../NonSizeTrackingServerAggregators.java   | 42 +
 .../aggregator/ServerAggregators.java   | 42 +
 .../SizeTrackingServerAggregators.java  | 59 
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 .../phoenix/compile/QueryCompilerTest.java  |  4 +-
 .../phoenix/query/QueryServicesTestImpl.java|  2 +
 16 files changed, 231 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/dec9f289/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3689c4c..21b2ac9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -53,9 +53,9 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 private static final int NUM_ROWS_INSERTED = 1000;
 
-// covers: COUNT, COUNT(DISTINCT) SUM, AVG, MIN, MAX 
+// covers: COUNT, SUM, AVG, MIN, MAX 
 private static String GROUPBY1 = "select "
-+ "count(*), count(distinct uri), sum(appcpu), avg(appcpu), uri, 
min(id), max(id) from %s "
++ "count(*), sum(appcpu), avg(appcpu), uri, min(id), max(id) from 
%s "
 + "group by uri";
 
 private static String GROUPBY2 = "select count(distinct uri) from %s";
@@ -135,13 +135,12 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 int count = 0;
 while (rs.next()) {
-String uri = rs.getString(5);
+String uri = rs.getString(4);
 assertEquals(2, rs.getInt(1));
-assertEquals(1, rs.getInt(2));
-assertEquals(20, rs.getInt(3));
-assertEquals(10, rs.getInt(4));
-int a = Integer.valueOf(rs.getString(6)).intValue();
-int b = Integer.valueOf(rs.getString(7)).intValue();
+assertEquals(20, rs.getInt(2));
+assertEquals(10, rs.getInt(3));
+int a = Integer.valueOf(rs.getString(5)).intValue();
+int b = Integer.valueOf(rs.getString(6)).intValue();
 assertEquals(Integer.valueOf(uri).intValue(), Math.min(a, b));
 assertEquals(NUM_ROWS_INSERTED / 2 + Integer.valueOf(uri), Math.max(a, b));
 count++;
@@ -206,4 +205,4 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 }
 
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/dec9f289/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 201bcec..a6fa6a5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -124,53 +124,56 @@ public class GroupedAggregateRegionObserver extends 
BaseScannerRegionObserver {
 }
 
 List<Expression> expressions = deserializeGroupByExpressions(expressionBytes, 0);
-ServerAggregators aggregators =
-ServerAggregators.deserialize(scan
-

[4/4] phoenix git commit: PHOENIX-4720 SequenceIT is flapping

2018-05-01 Thread jamestaylor
PHOENIX-4720 SequenceIT is flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5c515355
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5c515355
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5c515355

Branch: refs/heads/4.x-HBase-1.3
Commit: 5c51535571a411952c81918516ffca6a4af1f612
Parents: 9b4fbcd
Author: James Taylor 
Authored: Mon Apr 30 19:50:34 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 09:29:18 2018 -0700

--
 .../org/apache/phoenix/end2end/SequenceIT.java  | 42 +++-
 .../coprocessor/SequenceRegionObserver.java |  3 +-
 2 files changed, 43 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5c515355/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
index 9b870e1..4cc9628 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SequenceIT.java
@@ -41,6 +41,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceNotFoundException;
+import org.apache.phoenix.util.EnvironmentEdge;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -90,6 +91,19 @@ public class SequenceIT extends ParallelStatsDisabledIT {
assertTrue(rs.next());
}
 
+private static class MyClock extends EnvironmentEdge {
+public volatile long time;
+
+public MyClock (long time) {
+this.time = time;
+}
+
+@Override
+public long currentTime() {
+return time;
+}
+}
+
@Test
public void testDuplicateSequences() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();
@@ -105,7 +119,28 @@ public class SequenceIT extends ParallelStatsDisabledIT {
}
}
 
-   @Test
+@Test
+public void testDuplicateSequencesAtSameTimestamp() throws Exception {
+final MyClock clock = new MyClock(1000);
+EnvironmentEdgeManager.injectEdge(clock);
+try {
+String sequenceName = generateSequenceNameWithSchema();
+
+
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + 
" START WITH 2 INCREMENT BY 4\n");
+
+try {
+conn.createStatement().execute("CREATE SEQUENCE " + 
sequenceName + " START WITH 2 INCREMENT BY 4\n");
+Assert.fail("Duplicate sequences");
+} catch (SequenceAlreadyExistsException e){
+
+}
+} finally {
+EnvironmentEdgeManager.reset();
+}
+}
+
+@Test
public void testSequenceNotFound() throws Exception {
 String sequenceName = generateSequenceNameWithSchema();

@@ -753,26 +788,31 @@ public class SequenceIT extends ParallelStatsDisabledIT {
 assertSequenceValuesForSingleRow(sequenceName, 1, 2, 3);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1");
 
 assertSequenceValuesForSingleRow(sequenceName, 1, 0, -1);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MINVALUE 10");
 
 assertSequenceValuesForSingleRow(sequenceName, 10, 11, 12);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
INCREMENT BY -1 MINVALUE 10 ");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MAX_VALUE, 
Long.MAX_VALUE - 1, Long.MAX_VALUE - 2);
 conn.createStatement().execute("DROP SEQUENCE " + sequenceName);
 
+sequenceName = generateSequenceNameWithSchema();
 conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
MAXVALUE 0");
 
 assertSequenceValuesForSingleRow(sequenceName, Long.MIN_VALUE, 
Long.MIN_VALUE + 1, 

[1/4] phoenix git commit: PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)

2018-05-01 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 c08d8e6b7 -> 5c5153557


PHOENIX-4719 Avoid static initialization deadlock while loading regions (Pedro Boado)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/25e47ea3
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/25e47ea3
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/25e47ea3

Branch: refs/heads/4.x-HBase-1.3
Commit: 25e47ea3e48b6faf1da4043865d6eca35a7c8926
Parents: c08d8e6
Author: James Taylor 
Authored: Mon Apr 30 19:49:39 2018 -0700
Committer: James Taylor 
Committed: Mon Apr 30 19:49:39 2018 -0700

--
 .../main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryConstants.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/25e47ea3/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 320c6e7..55de772 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -290,6 +290,8 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final String ASYNC_REBUILD_TIMESTAMP = "ASYNC_REBUILD_TIMESTAMP";
 public static final byte[] ASYNC_REBUILD_TIMESTAMP_BYTES = Bytes.toBytes(ASYNC_REBUILD_TIMESTAMP);
 
+public static final String COLUMN_ENCODED_BYTES = "COLUMN_ENCODED_BYTES";
+
 public static final String PARENT_TENANT_ID = "PARENT_TENANT_ID";
 public static final byte[] PARENT_TENANT_ID_BYTES = Bytes.toBytes(PARENT_TENANT_ID);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/25e47ea3/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
index 2fe7b14..65806ae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
@@ -433,7 +433,7 @@ public interface QueryConstants {
 " CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY 
(QUERY_ID))\n" +
 PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE+ ",\n" 
+
 HColumnDescriptor.TTL + "=" + 
MetaDataProtocol.DEFAULT_LOG_TTL+",\n"+
-TableProperty.COLUMN_ENCODED_BYTES.toString()+" = 0";
+PhoenixDatabaseMetaData.COLUMN_ENCODED_BYTES +" = 0";
 
 public static final byte[] OFFSET_FAMILY = "f_offset".getBytes();
 public static final byte[] OFFSET_COLUMN = "c_offset".getBytes();



[3/4] phoenix git commit: PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails

2018-05-01 Thread jamestaylor
PHOENIX-4721 Adding PK column to a table with multiple secondary indexes fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9b4fbcdd
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9b4fbcdd
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9b4fbcdd

Branch: refs/heads/4.x-HBase-1.3
Commit: 9b4fbcdd50db221b15d526b3edcae26bbe2c1931
Parents: dec9f28
Author: James Taylor 
Authored: Mon Apr 30 22:57:49 2018 -0700
Committer: James Taylor 
Committed: Tue May 1 09:29:17 2018 -0700

--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 45 +++-
 .../apache/phoenix/schema/MetaDataClient.java   |  3 +-
 2 files changed, 46 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9b4fbcdd/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e1b1372..ab3a4ab 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -33,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -47,7 +48,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.StringUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -863,5 +866,45 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 assertTrue(viewTable.isAppendOnlySchema());
 }
 }
-
+
+@Test
+public void testAlterTableWithIndexesExtendPk() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String indexName1 = "I_" + generateUniqueName();
+String indexName2 = "I_" + generateUniqueName();
+
+try {
+String ddl = "CREATE TABLE " + tableName +
+" (ORG_ID CHAR(15) NOT NULL," +
+" PARTITION_KEY CHAR(3) NOT NULL, " +
+" ACTIVITY_DATE DATE NOT NULL, " +
+" FK1_ID CHAR(15) NOT NULL, " +
+" FK2_ID CHAR(15) NOT NULL, " +
+" TYPE VARCHAR NOT NULL, " +
+" IS_OPEN BOOLEAN " +
+" CONSTRAINT PKVIEW PRIMARY KEY " +
+"(" +
+"ORG_ID, PARTITION_KEY, ACTIVITY_DATE, FK1_ID, FK2_ID, TYPE" +
+"))";
+createTestTable(getUrl(), ddl);
+
+String idx1ddl = "CREATE INDEX " + indexName1 + " ON " + tableName + " (FK1_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt1 = conn.prepareStatement(idx1ddl);
+stmt1.execute();
+
+String idx2ddl = "CREATE INDEX " + indexName2 + " ON " + tableName + " (FK2_ID, ACTIVITY_DATE DESC) INCLUDE (IS_OPEN)";
+PreparedStatement stmt2 = conn.prepareStatement(idx2ddl);
+stmt2.execute();
+
+ddl = "ALTER TABLE " + tableName + " ADD SOURCE VARCHAR(25) NULL PRIMARY KEY";
+PreparedStatement stmt3 = conn.prepareStatement(ddl);
+stmt3.execute();
+} finally {
+conn.close();
+}
+}
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9b4fbcdd/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index a3d2baf..69d8a56 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -3381,13 +3381,14 @@ public class MetaDataClient {
 if (colDef.isPK()) {
 PDataType indexColDataType = 

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #623

2018-05-01 Thread Apache Jenkins Server
See 


--
[...truncated 37.15 KB...]
  symbol:   class HBaseRpcController
  location: class 
org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory
[ERROR] 
:[52,9]
 cannot find symbol
  symbol:   class HBaseRpcController
  location: class 
org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory
[ERROR] 
:[180,14]
 cannot find symbol
  symbol: class MetricRegistry
[ERROR] 
:[179,7]
 method does not override or implement a method from a supertype
[ERROR] 
:[454,78]
 cannot find symbol
  symbol: class HBaseRpcController
[ERROR] 
:[432,17]
 cannot find symbol
  symbol: class HBaseRpcController
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on project phoenix-core: Compilation failure: Compilation failure: 
[ERROR] 
:[34,39]
 cannot find symbol
[ERROR]   symbol:   class MetricRegistry
[ERROR]   location: package org.apache.hadoop.hbase.metrics
[ERROR] 
:[144,16]
 cannot find symbol
[ERROR]   symbol:   class MetricRegistry
[ERROR]   location: class 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment
[ERROR] 
:[24,35]
 cannot find symbol
[ERROR]   symbol:   class DelegatingHBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] 
:[25,35]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] 
:[37,37]
 cannot find symbol
[ERROR]   symbol: class DelegatingHBaseRpcController
[ERROR] 
:[56,38]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class 
org.apache.hadoop.hbase.ipc.controller.MetadataRpcController
[ERROR] 
:[26,35]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] 
:[40,12]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class 
org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR] 
:[46,12]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class 
org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR]
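The missing symbols above (HBaseRpcController, DelegatingHBaseRpcController, MetricRegistry under org.apache.hadoop.hbase.*) exist only in some HBase release lines, so this compile-compatibility job fails when phoenix-core is built against an HBase that predates them; the compile errors themselves can only be resolved by building against a matching HBase. At runtime, a reflective probe is one common way to check which API surface is present before wiring up optional integrations. A generic sketch, not something this build or Phoenix necessarily does:

// Generic sketch: detect optional HBase classes at runtime instead of linking
// against them at compile time. Class names are taken from the errors above.
public final class HBaseApiProbe {
    private HBaseApiProbe() {}

    public static boolean isPresent(String className) {
        try {
            Class.forName(className, false, HBaseApiProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("HBaseRpcController present: "
            + isPresent("org.apache.hadoop.hbase.ipc.HBaseRpcController"));
        System.out.println("MetricRegistry present: "
            + isPresent("org.apache.hadoop.hbase.metrics.MetricRegistry"));
    }
}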