Build failed in Jenkins: Phoenix | Master #1773

2017-09-05 Thread Apache Jenkins Server

Changes:

[samarth] PHOENIX-4151 Addendum to fix test failure

--
[...truncated 96.98 KB...]
[INFO] Running org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.682 s 
- in org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Running org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 308.78 
s - in org.apache.phoenix.end2end.index.DropColumnIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.971 s 
- in org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.344 s 
- in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.407 s 
- in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.475 s 
- in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.105 s 
- in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 s 
- in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.147 s 
- in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.373 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.398 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.886 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 414.434 
s - in org.apache.phoenix.end2end.index.IndexExpressionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.798 s 
- in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
991.848 s - in org.apache.phoenix.end2end.SortMergeJoinIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.137 s 
- in org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.022 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 447.767 
s - in org.apache.phoenix.end2end.index.MutableIndexIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
190.84 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 263.972 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 696.86 s 
- in org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 700.38 s 
- in org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,822.358 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 3057, Failures: 0, Errors: 0, Skipped: 5
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (ClientManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.phoenix.end2end.CustomEntityDataIT
[INFO] Running org.apache.phoenix.end2end.FunkyNamesIT
[INFO] Running org.apache.phoenix.end2end.DistinctCountIT
[INFO] Runn

Build failed in Jenkins: Phoenix | Master #1772

2017-09-05 Thread Apache Jenkins Server

Changes:

[samarth] PHOENIX-4151 Tests extending BaseQueryIT are flapping

--
[...truncated 105.93 KB...]
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.613 s 
- in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.336 s 
- in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.036 s 
- in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 s 
- in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 419.536 
s - in org.apache.phoenix.end2end.index.IndexExpressionIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.286 s 
- in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.643 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.71 s - 
in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
984.024 s - in org.apache.phoenix.end2end.SortMergeJoinIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.388 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.452 s 
- in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 447.742 
s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.855 s 
- in org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.13 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
191.176 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 277.774 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 658.539 
s - in org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 700.889 
s - in org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,880.638 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[ERROR]   MutableQueryIT.testPointInTimeScan:316 » SQL ERROR  (XCL11): 
Connectioin i...
[INFO] 
[ERROR] Tests run: 3057, Failures: 0, Errors: 14, Skipped: 5
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (ClientManagedTime

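The fourteen identical XCL11 errors above all point at the same failure mode: a JDBC connection being used after it was closed. A minimal, hypothetical Java sketch of that failure mode follows; the `MockConnection` class is a made-up stand-in, not the Phoenix driver (the real tests hit a `SQLException` with Phoenix error code XCL11):

```java
public class ClosedConnectionDemo {
    // Hypothetical stand-in for a JDBC connection; real Phoenix connections
    // reject use after close() with SQL error code XCL11.
    static class MockConnection implements AutoCloseable {
        private boolean closed = false;
        void execute(String sql) {
            if (closed) {
                throw new IllegalStateException("ERROR (XCL11): connection is closed");
            }
            // ... a real driver would run the statement here ...
        }
        @Override
        public void close() {
            closed = true;
        }
    }

    // Returns true when use-after-close is rejected, mirroring what the
    // failing MutableQueryIT runs observed.
    static boolean failsAfterClose() {
        MockConnection conn = new MockConnection();
        conn.execute("UPSERT INTO T VALUES (1)"); // fine while the connection is open
        conn.close();
        try {
            conn.execute("UPSERT INTO T VALUES (2)"); // same bug class as the test
            return false;
        } catch (IllegalStateException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(failsAfterClose());
    }
}
```

The addendum commits below fix exactly this ordering in `MutableQueryIT`.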
phoenix git commit: PHOENIX-4151 Addendum to fix test failure

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 1bcccb40c -> 3ae9f552a


PHOENIX-4151 Addendum to fix test failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3ae9f552
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3ae9f552
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3ae9f552

Branch: refs/heads/4.x-HBase-1.2
Commit: 3ae9f552ac0da53d4da01a5fbb3cab7e9d36b7f9
Parents: 1bcccb4
Author: Samarth Jain 
Authored: Tue Sep 5 19:41:43 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 19:41:43 2017 -0700

--
 .../it/java/org/apache/phoenix/end2end/AggregateQueryIT.java| 1 -
 .../src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java  | 5 +
 2 files changed, 1 insertion(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3ae9f552/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
index 01c6e37..3dc0184 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
@@ -39,7 +39,6 @@ import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.util.ByteUtil;
-import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3ae9f552/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
index 770c015..746d274 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
@@ -311,21 +311,18 @@ public class MutableQueryIT extends BaseQueryIT {
 "A_INTEGER) " +
 "VALUES (?, ?, ?)";
 upsertConn.setAutoCommit(true); // Test auto commit
-upsertConn.close();
-
 PreparedStatement stmt = upsertConn.prepareStatement(upsertStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);
 stmt.setInt(3, 5);
 stmt.execute(); // should commit too
+upsertConn.close();
 long upsert1Time = System.currentTimeMillis();
 long timeDelta = 100;
 Thread.sleep(timeDelta);
 
-// Override value again, but should be ignored since it's past the SCN
 upsertConn = DriverManager.getConnection(url, props);
 upsertConn.setAutoCommit(true); // Test auto commit
-// Insert all rows at ts
 stmt = upsertConn.prepareStatement(upsertStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);



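The five-line change above reorders the teardown: previously `upsertConn.close()` ran before the `PreparedStatement` was created and executed, so everything after it went through a closed connection. A hedged sketch of the corrected ordering, using a made-up `FakeConnection` rather than the Phoenix driver:

```java
import java.util.ArrayList;
import java.util.List;

public class UpsertOrderingSketch {
    // Hypothetical stand-in: records executed statements, rejects use after close().
    static class FakeConnection implements AutoCloseable {
        private boolean closed = false;
        final List<String> executed = new ArrayList<>();
        void execute(String sql) {
            if (closed) throw new IllegalStateException("connection is closed");
            executed.add(sql);
        }
        @Override
        public void close() {
            closed = true;
        }
    }

    // Corrected ordering from the addendum: prepare and execute first, close last.
    static int upsertThenClose() {
        FakeConnection upsertConn = new FakeConnection();
        upsertConn.execute("UPSERT INTO T (PK, A_INTEGER) VALUES ('ROW4', 5)");
        upsertConn.close(); // moved below execute(), as in the patch
        return upsertConn.executed.size();
    }

    public static void main(String[] args) {
        System.out.println(upsertThenClose());
    }
}
```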
phoenix git commit: PHOENIX-4151 Addendum to fix test failure

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 12dbe15fa -> 7151d610e


PHOENIX-4151 Addendum to fix test failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7151d610
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7151d610
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7151d610

Branch: refs/heads/4.x-HBase-1.1
Commit: 7151d610e31235336f3fb24d4a79736c52a6c0a0
Parents: 12dbe15
Author: Samarth Jain 
Authored: Tue Sep 5 19:41:27 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 19:41:27 2017 -0700

--
 .../it/java/org/apache/phoenix/end2end/AggregateQueryIT.java| 1 -
 .../src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java  | 5 +
 2 files changed, 1 insertion(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7151d610/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
index 01c6e37..3dc0184 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
@@ -39,7 +39,6 @@ import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.util.ByteUtil;
-import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/7151d610/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
index 770c015..746d274 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
@@ -311,21 +311,18 @@ public class MutableQueryIT extends BaseQueryIT {
 "A_INTEGER) " +
 "VALUES (?, ?, ?)";
 upsertConn.setAutoCommit(true); // Test auto commit
-upsertConn.close();
-
 PreparedStatement stmt = upsertConn.prepareStatement(upsertStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);
 stmt.setInt(3, 5);
 stmt.execute(); // should commit too
+upsertConn.close();
 long upsert1Time = System.currentTimeMillis();
 long timeDelta = 100;
 Thread.sleep(timeDelta);
 
-// Override value again, but should be ignored since it's past the SCN
 upsertConn = DriverManager.getConnection(url, props);
 upsertConn.setAutoCommit(true); // Test auto commit
-// Insert all rows at ts
 stmt = upsertConn.prepareStatement(upsertStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);



phoenix git commit: PHOENIX-4151 Addendum to fix test failure

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 611c86063 -> c5fe17624


PHOENIX-4151 Addendum to fix test failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c5fe1762
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c5fe1762
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c5fe1762

Branch: refs/heads/4.x-HBase-0.98
Commit: c5fe176249f15d414a1c164ce779b1d0d5615a17
Parents: 611c860
Author: Samarth Jain 
Authored: Tue Sep 5 19:40:36 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 19:40:36 2017 -0700

--
 .../it/java/org/apache/phoenix/end2end/AggregateQueryIT.java| 1 -
 .../src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java  | 5 +
 2 files changed, 1 insertion(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c5fe1762/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
index 01c6e37..3dc0184 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
@@ -39,7 +39,6 @@ import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.util.ByteUtil;
-import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c5fe1762/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
index 770c015..746d274 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
@@ -311,21 +311,18 @@ public class MutableQueryIT extends BaseQueryIT {
 "A_INTEGER) " +
 "VALUES (?, ?, ?)";
 upsertConn.setAutoCommit(true); // Test auto commit
-upsertConn.close();
-
 PreparedStatement stmt = upsertConn.prepareStatement(upsertStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);
 stmt.setInt(3, 5);
 stmt.execute(); // should commit too
+upsertConn.close();
 long upsert1Time = System.currentTimeMillis();
 long timeDelta = 100;
 Thread.sleep(timeDelta);
 
-// Override value again, but should be ignored since it's past the SCN
 upsertConn = DriverManager.getConnection(url, props);
 upsertConn.setAutoCommit(true); // Test auto commit
-// Insert all rows at ts
 stmt = upsertConn.prepareStatement(upsertStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);



phoenix git commit: PHOENIX-4151 Addendum to fix test failure

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master a3bb174bc -> cdfb08bd8


PHOENIX-4151 Addendum to fix test failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cdfb08bd
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cdfb08bd
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cdfb08bd

Branch: refs/heads/master
Commit: cdfb08bd8862693b4826c4714259a63737dd2c3f
Parents: a3bb174
Author: Samarth Jain 
Authored: Tue Sep 5 19:40:09 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 19:40:09 2017 -0700

--
 .../it/java/org/apache/phoenix/end2end/AggregateQueryIT.java| 1 -
 .../src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java  | 5 +
 2 files changed, 1 insertion(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cdfb08bd/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
index 01c6e37..3dc0184 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
@@ -39,7 +39,6 @@ import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.util.ByteUtil;
-import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/cdfb08bd/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
index 770c015..746d274 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableQueryIT.java
@@ -311,21 +311,18 @@ public class MutableQueryIT extends BaseQueryIT {
 "A_INTEGER) " +
 "VALUES (?, ?, ?)";
 upsertConn.setAutoCommit(true); // Test auto commit
-upsertConn.close();
-
 PreparedStatement stmt = upsertConn.prepareStatement(upsertStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);
 stmt.setInt(3, 5);
 stmt.execute(); // should commit too
+upsertConn.close();
 long upsert1Time = System.currentTimeMillis();
 long timeDelta = 100;
 Thread.sleep(timeDelta);
 
-// Override value again, but should be ignored since it's past the SCN
 upsertConn = DriverManager.getConnection(url, props);
 upsertConn.setAutoCommit(true); // Test auto commit
-// Insert all rows at ts
 stmt = upsertConn.prepareStatement(upsertStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);



[1/2] phoenix git commit: PHOENIX-4151 Tests extending BaseQueryIT are flapping

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 bf4262a99 -> 611c86063


http://git-wip-us.apache.org/repos/asf/phoenix/blob/611c8606/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
index f72caf9..c5283cd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
@@ -50,7 +50,6 @@ import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.schema.types.PTimestamp;
-import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
@@ -76,25 +75,16 @@ public class QueryIT extends BaseQueryIT {
 "A_INTEGER) " +
 "VALUES (?, ?, ?)";
 // Override value that was set at creation time
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" 
+ (ts + 1);
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection upsertConn = DriverManager.getConnection(url, props);
-upsertConn.setAutoCommit(true); // Test auto commit
-PreparedStatement stmt = upsertConn.prepareStatement(updateStmt);
+Connection conn = DriverManager.getConnection(url, props);
+PreparedStatement stmt = conn.prepareStatement(updateStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);
 stmt.setInt(3, -10);
 stmt.execute();
-upsertConn.close();
-url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" + (ts + 
6);
-props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-upsertConn = DriverManager.getConnection(url, props);
-analyzeTable(upsertConn, tableName);
-upsertConn.close();
-
+conn.commit();
+
 String query = "SELECT entity_id FROM " + tableName + " WHERE 
organization_id=? and a_integer >= ?";
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts 
+ 2)); // Execute at timestamp 2
-Connection conn = DriverManager.getConnection(getUrl(), props);
 PreparedStatement statement = conn.prepareStatement(query);
 statement.setString(1, tenantId);
 statement.setInt(2, 7);
@@ -127,7 +117,6 @@ public class QueryIT extends BaseQueryIT {
 public void testToDateOnString() throws Exception { // TODO: test more 
conversion combinations
 String query = "SELECT a_string FROM " + tableName + " WHERE 
organization_id=? and a_integer = 5";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts 
+ 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
 PreparedStatement statement = conn.prepareStatement(query);
@@ -143,12 +132,10 @@ public class QueryIT extends BaseQueryIT {
 }
 }
 
-
 @Test
 public void testColumnOnBothSides() throws Exception {
 String query = "SELECT entity_id FROM " + tableName + " WHERE 
organization_id=? and a_string = b_string";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts 
+ 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
 PreparedStatement statement = conn.prepareStatement(query);
@@ -165,7 +152,6 @@ public class QueryIT extends BaseQueryIT {
 @Test
 public void testDateInList() throws Exception {
 String query = "SELECT entity_id FROM " + tableName + " WHERE a_date 
IN (?,?) AND a_integer < 4";
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" 
+ (ts + 5); // Run query at timestamp 5
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(url, props);
 try {
@@ -191,7 +177,6 @@ public class QueryIT extends BaseQueryIT {
 "A_TIMESTAMP) " +
 "VALUES (?, ?, ?)";
 // Override value that was set at creation time
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" 
+ (ts + 10);
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection upsertConn = DriverManager.getConnection(url, props);
 upsertConn.setAutoCommit(true); // Test auto commit
@@ -203,7 +188,6 @@ public class QueryIT extends BaseQueryIT {
 stmt.setTimestamp(3, tsValue1);
 stmt.execute();
 
-url = getUrl() + ";" + PhoenixRu

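The lines removed in this patch all built point-in-time connections by appending Phoenix's CurrentSCN attribute to the JDBC URL, and tests pinned to such timestamps are a common source of flapping. A sketch of the URL pattern the tests dropped; the attribute name is assumed to match `PhoenixRuntime.CURRENT_SCN_ATTRIB` and the base URL is made up:

```java
public class ScnUrl {
    // Old BaseQueryIT-style pattern: pin a connection to a point-in-time view
    // by appending the CurrentSCN attribute to the JDBC URL.
    static String withScn(String baseUrl, long ts) {
        return baseUrl + ";CurrentSCN=" + ts;
    }

    public static void main(String[] args) {
        System.out.println(withScn("jdbc:phoenix:localhost", 1000L));
    }
}
```

The patch replaces these pinned connections with ordinary connections plus explicit `commit()` calls, as visible in the diff above.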
[1/2] phoenix git commit: PHOENIX-4151 Tests extending BaseQueryIT are flapping

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 131abf8ff -> 12dbe15fa


http://git-wip-us.apache.org/repos/asf/phoenix/blob/12dbe15f/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
index f72caf9..c5283cd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
@@ -50,7 +50,6 @@ import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.schema.types.PTimestamp;
-import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
@@ -76,25 +75,16 @@ public class QueryIT extends BaseQueryIT {
 "A_INTEGER) " +
 "VALUES (?, ?, ?)";
 // Override value that was set at creation time
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" 
+ (ts + 1);
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection upsertConn = DriverManager.getConnection(url, props);
-upsertConn.setAutoCommit(true); // Test auto commit
-PreparedStatement stmt = upsertConn.prepareStatement(updateStmt);
+Connection conn = DriverManager.getConnection(url, props);
+PreparedStatement stmt = conn.prepareStatement(updateStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);
 stmt.setInt(3, -10);
 stmt.execute();
-upsertConn.close();
-url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" + (ts + 
6);
-props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-upsertConn = DriverManager.getConnection(url, props);
-analyzeTable(upsertConn, tableName);
-upsertConn.close();
-
+conn.commit();
+
 String query = "SELECT entity_id FROM " + tableName + " WHERE 
organization_id=? and a_integer >= ?";
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts 
+ 2)); // Execute at timestamp 2
-Connection conn = DriverManager.getConnection(getUrl(), props);
 PreparedStatement statement = conn.prepareStatement(query);
 statement.setString(1, tenantId);
 statement.setInt(2, 7);
@@ -127,7 +117,6 @@ public class QueryIT extends BaseQueryIT {
 public void testToDateOnString() throws Exception { // TODO: test more 
conversion combinations
 String query = "SELECT a_string FROM " + tableName + " WHERE 
organization_id=? and a_integer = 5";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts 
+ 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
 PreparedStatement statement = conn.prepareStatement(query);
@@ -143,12 +132,10 @@ public class QueryIT extends BaseQueryIT {
 }
 }
 
-
 @Test
 public void testColumnOnBothSides() throws Exception {
 String query = "SELECT entity_id FROM " + tableName + " WHERE 
organization_id=? and a_string = b_string";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts 
+ 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
 PreparedStatement statement = conn.prepareStatement(query);
@@ -165,7 +152,6 @@ public class QueryIT extends BaseQueryIT {
 @Test
 public void testDateInList() throws Exception {
 String query = "SELECT entity_id FROM " + tableName + " WHERE a_date 
IN (?,?) AND a_integer < 4";
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" 
+ (ts + 5); // Run query at timestamp 5
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(url, props);
 try {
@@ -191,7 +177,6 @@ public class QueryIT extends BaseQueryIT {
 "A_TIMESTAMP) " +
 "VALUES (?, ?, ?)";
 // Override value that was set at creation time
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" 
+ (ts + 10);
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection upsertConn = DriverManager.getConnection(url, props);
 upsertConn.setAutoCommit(true); // Test auto commit
@@ -203,7 +188,6 @@ public class QueryIT extends BaseQueryIT {
 stmt.setTimestamp(3, tsValue1);
 stmt.execute();
 
-url = getUrl() + ";" + PhoenixRun

[1/2] phoenix git commit: PHOENIX-4151 Tests extending BaseQueryIT are flapping

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master a1c75a9ec -> a3bb174bc


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a3bb174b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
index f72caf9..c5283cd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java
@@ -50,7 +50,6 @@ import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.schema.types.PTimestamp;
-import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
@@ -76,25 +75,16 @@ public class QueryIT extends BaseQueryIT {
 "A_INTEGER) " +
 "VALUES (?, ?, ?)";
 // Override value that was set at creation time
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" + (ts + 1);
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection upsertConn = DriverManager.getConnection(url, props);
-upsertConn.setAutoCommit(true); // Test auto commit
-PreparedStatement stmt = upsertConn.prepareStatement(updateStmt);
+Connection conn = DriverManager.getConnection(url, props);
+PreparedStatement stmt = conn.prepareStatement(updateStmt);
 stmt.setString(1, tenantId);
 stmt.setString(2, ROW4);
 stmt.setInt(3, -10);
 stmt.execute();
-upsertConn.close();
-url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" + (ts + 6);
-props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-upsertConn = DriverManager.getConnection(url, props);
-analyzeTable(upsertConn, tableName);
-upsertConn.close();
-
+conn.commit();
+
 String query = "SELECT entity_id FROM " + tableName + " WHERE organization_id=? and a_integer >= ?";
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
-Connection conn = DriverManager.getConnection(getUrl(), props);
 PreparedStatement statement = conn.prepareStatement(query);
 statement.setString(1, tenantId);
 statement.setInt(2, 7);
@@ -127,7 +117,6 @@ public class QueryIT extends BaseQueryIT {
 public void testToDateOnString() throws Exception { // TODO: test more conversion combinations
 String query = "SELECT a_string FROM " + tableName + " WHERE organization_id=? and a_integer = 5";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
 PreparedStatement statement = conn.prepareStatement(query);
@@ -143,12 +132,10 @@ public class QueryIT extends BaseQueryIT {
 }
 }
 
-
 @Test
 public void testColumnOnBothSides() throws Exception {
 String query = "SELECT entity_id FROM " + tableName + " WHERE organization_id=? and a_string = b_string";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
 PreparedStatement statement = conn.prepareStatement(query);
@@ -165,7 +152,6 @@ public class QueryIT extends BaseQueryIT {
 @Test
 public void testDateInList() throws Exception {
 String query = "SELECT entity_id FROM " + tableName + " WHERE a_date IN (?,?) AND a_integer < 4";
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" + (ts + 5); // Run query at timestamp 5
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(url, props);
 try {
@@ -191,7 +177,6 @@ public class QueryIT extends BaseQueryIT {
 "A_TIMESTAMP) " +
 "VALUES (?, ?, ?)";
 // Override value that was set at creation time
-String url = getUrl() + ";" + PhoenixRuntime.CURRENT_SCN_ATTRIB + "=" + (ts + 10);
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection upsertConn = DriverManager.getConnection(url, props);
 upsertConn.setAutoCommit(true); // Test auto commit
@@ -203,7 +188,6 @@ public class QueryIT extends BaseQueryIT {
 stmt.setTimestamp(3, tsValue1);
 stmt.execute();
 
-url = getUrl() + ";" + PhoenixRuntime.CU
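
All of the lines this patch deletes follow the same client-managed timestamp idiom: each test pinned its connection to a point in time by appending `PhoenixRuntime.CURRENT_SCN_ATTRIB` (the `CurrentSCN` JDBC property) to the URL. A minimal, self-contained sketch of that URL construction — the base URL is a placeholder and the constant is inlined here rather than taken from the Phoenix test harness:

```java
public class ScnUrlExample {
    // Inlined stand-in for PhoenixRuntime.CURRENT_SCN_ATTRIB ("CurrentSCN"),
    // the property the deleted test code appended to the JDBC URL.
    static final String CURRENT_SCN_ATTRIB = "CurrentSCN";

    // Builds a point-in-time JDBC URL the way the removed tests did:
    // base URL + ";CurrentSCN=<timestamp>".
    static String atTimestamp(String baseUrl, long ts) {
        return baseUrl + ";" + CURRENT_SCN_ATTRIB + "=" + ts;
    }

    public static void main(String[] args) {
        // Example with a hypothetical base URL; prints the pinned URL.
        System.out.println(atTimestamp("jdbc:phoenix:localhost", 5L));
    }
}
```

Removing this pinning is why the various `ts + N` offsets disappear from the hunks above: the rewritten tests read and write at the current time on ordinary connections.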

[1/2] phoenix git commit: PHOENIX-4151 Tests extending BaseQueryIT are flapping

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 2859a0595 -> 1bcccb40c


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1bcccb40/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryIT.java

[2/2] phoenix git commit: PHOENIX-4151 Tests extending BaseQueryIT are flapping

2017-09-05 Thread samarth
PHOENIX-4151 Tests extending BaseQueryIT are flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/611c8606
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/611c8606
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/611c8606

Branch: refs/heads/4.x-HBase-0.98
Commit: 611c860636c62966ce851809d66b254696c094a7
Parents: bf4262a
Author: Samarth Jain 
Authored: Tue Sep 5 19:28:22 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 19:28:22 2017 -0700

--
 .../phoenix/end2end/AggregateQueryIT.java   |  11 +-
 .../org/apache/phoenix/end2end/BaseQueryIT.java |   7 +-
 .../apache/phoenix/end2end/CaseStatementIT.java |  16 +-
 .../apache/phoenix/end2end/CastAndCoerceIT.java |  13 +-
 .../end2end/ClientTimeArithmeticQueryIT.java|  35 +--
 .../org/apache/phoenix/end2end/GroupByIT.java   | 300 +--
 .../apache/phoenix/end2end/MutableQueryIT.java  |  59 ++--
 .../org/apache/phoenix/end2end/NotQueryIT.java  |  11 -
 .../phoenix/end2end/PointInTimeQueryIT.java | 105 ---
 .../org/apache/phoenix/end2end/QueryIT.java |  38 +--
 .../org/apache/phoenix/end2end/ScanQueryIT.java |  17 --
 .../org/apache/phoenix/end2end/SequenceIT.java  |  65 
 12 files changed, 258 insertions(+), 419 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/611c8606/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
index cec8a1f..01c6e37 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
@@ -53,7 +53,6 @@ public class AggregateQueryIT extends BaseQueryIT {
 public void testGroupByPlusOne() throws Exception {
 String query = "SELECT a_integer+1 FROM " + tableName + " WHERE organization_id=? and a_integer = 5 GROUP BY a_integer+1";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
 PreparedStatement statement = conn.prepareStatement(query);
@@ -72,7 +71,6 @@ public class AggregateQueryIT extends BaseQueryIT {
 // Tests that you don't get an ambiguous column exception when using the same alias as the column name
 String query = "SELECT a_string, b_string, count(1) FROM " + tableName + " WHERE organization_id=? and entity_id<=? GROUP BY a_string,b_string";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 HBaseAdmin admin = null;
 try {
@@ -99,7 +97,7 @@ public class AggregateQueryIT extends BaseQueryIT {
 HTable htable = (HTable) conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(tableNameBytes);
 htable.clearRegionCache();
 int nRegions = htable.getRegionLocations().size();
-admin.split(tableNameBytes, ByteUtil.concat(Bytes.toBytes(tenantId), Bytes.toBytes("00A" + Character.valueOf((char) ('3' + nextRunCount())) + ts))); // vary split point with test run
+admin.split(tableNameBytes, ByteUtil.concat(Bytes.toBytes(tenantId), Bytes.toBytes("00A" + Character.valueOf((char) ('3' + nextRunCount()))))); // vary split point with test run
 int retryCount = 0;
 do {
 Thread.sleep(2000);
@@ -135,7 +133,6 @@ public class AggregateQueryIT extends BaseQueryIT {
 public void testCountIsNull() throws Exception {
 String query = "SELECT count(1) FROM " + tableName + " WHERE X_DECIMAL is null";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
 PreparedStatement statement = conn.prepareStatement(query);
@@ -153,8 +150,7 @@ public class AggregateQueryIT extends BaseQueryIT {
 public void testCountWithNoScanRanges() throws Exception {
 String query = "SELECT count(1) FROM " + tableName + " WHERE organization_id = 'not_existing_organization_id'";
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-props.setProperty(PhoenixRuntime.CURRENT_SC
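
Besides dropping the SCN pinning, the AggregateQueryIT hunk above also varies the split point passed to `admin.split()` from run to run. A standalone sketch of that idea — `nextRunCount()` here is a hypothetical stand-in for the harness's counter, and the key layout (`tenantId` + "00A" + a per-run character) is illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicInteger;

public class SplitPointExample {
    // Hypothetical stand-in for the test harness's nextRunCount().
    private static final AtomicInteger RUN_COUNT = new AtomicInteger();

    static int nextRunCount() {
        return RUN_COUNT.getAndIncrement();
    }

    // Builds a split key that changes on each run, mirroring the diff above:
    // tenant id, then "00A", then a character derived from the run counter,
    // so repeated runs never split the region at an identical key.
    static byte[] splitPoint(String tenantId) {
        String key = tenantId + "00A" + (char) ('3' + nextRunCount());
        return key.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(new String(splitPoint("tenant1"), StandardCharsets.UTF_8));
    }
}
```

Varying the split point per run avoids re-splitting an already-split region across test iterations, which is one source of the flapping the commit message describes.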

[2/2] phoenix git commit: PHOENIX-4151 Tests extending BaseQueryIT are flapping

2017-09-05 Thread samarth
PHOENIX-4151 Tests extending BaseQueryIT are flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/12dbe15f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/12dbe15f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/12dbe15f

Branch: refs/heads/4.x-HBase-1.1
Commit: 12dbe15fa2d4471d505ca427e77aaa6c67190e50
Parents: 131abf8
Author: Samarth Jain 
Authored: Tue Sep 5 19:27:57 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 19:27:57 2017 -0700


[2/2] phoenix git commit: PHOENIX-4151 Tests extending BaseQueryIT are flapping

2017-09-05 Thread samarth
PHOENIX-4151 Tests extending BaseQueryIT are flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1bcccb40
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1bcccb40
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1bcccb40

Branch: refs/heads/4.x-HBase-1.2
Commit: 1bcccb40c908a7af023cd58d3127b3698cde4090
Parents: 2859a05
Author: Samarth Jain 
Authored: Tue Sep 5 19:27:30 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 19:27:30 2017 -0700


[2/2] phoenix git commit: PHOENIX-4151 Tests extending BaseQueryIT are flapping

2017-09-05 Thread samarth
PHOENIX-4151 Tests extending BaseQueryIT are flapping


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a3bb174b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a3bb174b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a3bb174b

Branch: refs/heads/master
Commit: a3bb174bc82743aa2ac7b59b9d1978a788a4d19f
Parents: a1c75a9
Author: Samarth Jain 
Authored: Tue Sep 5 19:27:01 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 19:27:01 2017 -0700


Build failed in Jenkins: Phoenix | Master #1771

2017-09-05 Thread Apache Jenkins Server
See 


Changes:

[ssa] PHOENIX-4068 Atomic Upsert salted table with

[samarth] PHOENIX-4156 Fix flapping MutableIndexFailureIT

[samarth] PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT

[jtaylor] PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on

--
[...truncated 98.10 KB...]
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 320.116 s - in org.apache.phoenix.end2end.index.DropColumnIT
[INFO] Running org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.748 s - in org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.376 s - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.326 s - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.352 s - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.079 s - in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.104 s - in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.427 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.17 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.443 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.967 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 416.163 s - in org.apache.phoenix.end2end.index.IndexExpressionIT
[INFO] Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,001.802 s - in org.apache.phoenix.end2end.SortMergeJoinIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 443.844 s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.607 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.535 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.139 s - in org.apache.phoenix.util.IndexScrutinyIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 202.575 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 623.55 s - in org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 275.959 s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 698.993 s - in org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,861.107 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 3064, Failures: 0, Errors: 0, Skipped: 5
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (ClientManagedTimeTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.phoenix.end2end.DistinctCountIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.DerivedTableIT
[INFO] Running org.apache.phoenix.end2end.ExtendedQueryExecIT
[INFO] Running org.apache.phoenix.end2end.FunkyNamesIT
[INFO] Running org.apache.phoenix

Build failed in Jenkins: Phoenix | Master #1770

2017-09-05 Thread Apache Jenkins Server
See 


Changes:

[ssa] PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP

--
[...truncated 100.62 KB...]
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.381 s - in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.395 s - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.686 s - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.604 s - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.133 s - in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.725 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.332 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,033.344 s - in org.apache.phoenix.end2end.SortMergeJoinIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.417 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 418.043 s - in org.apache.phoenix.end2end.index.IndexExpressionIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.304 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.426 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 448.417 s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.464 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.299 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
205.361 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 277.134 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 701.658 
s - in org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 699.119 
s - in org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,891.03 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   TableSnapshotReadsMapReduceIT.testMapReduceSnapshotWithLimit:117->configureJob:130
[ERROR]   TableSnapshotReadsMapReduceIT.testMapReduceSnapshots:78->configureJob:130
[ERROR]   TableSnapshotReadsMapReduceIT.testMapReduceSnapshotsWithCondition:96->configureJob:130
[ERROR] Errors: 
[ERROR]   MutableQueryIT.<init>:66->BaseQueryIT.<init>:139 » SQLTimeout Operation timed ...
[INFO] 
[ERROR] Tests run: 3064, Failures: 3, Errors: 1, Skipped: 5
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (ClientManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.phoenix.end2end.CustomEntityDataIT
[INFO] Running org.apache.phoenix.end2end.CreateSchemaIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.DistinctCountIT
[INFO] Running org.apache.phoenix.end2end.FunkyNamesIT
[INFO] Running org.apache.phoenix.end2end.DerivedTableIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.957 s 
- in org.apach

phoenix git commit: PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum)

2017-09-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 df3cc354c -> 2859a0595


PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction 
(addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2859a059
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2859a059
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2859a059

Branch: refs/heads/4.x-HBase-1.2
Commit: 2859a05954ef3f0fe606e1d98b943edd7227cc6e
Parents: df3cc35
Author: James Taylor 
Authored: Tue Sep 5 13:05:17 2017 -0700
Committer: James Taylor 
Committed: Tue Sep 5 14:24:32 2017 -0700

--
 .../end2end/index/PartialIndexRebuilderIT.java  | 21 +++--
 .../UngroupedAggregateRegionObserver.java   |  2 +-
 .../stats/DefaultStatisticsCollector.java   | 83 ++--
 .../schema/stats/NoOpStatisticsCollector.java   |  2 +-
 .../schema/stats/StatisticsCollector.java   |  2 +-
 5 files changed, 75 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2859a059/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
index 9483e87..139725f 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
@@ -248,29 +248,34 @@ public class PartialIndexRebuilderIT extends 
BaseUniqueNamesOwnClusterIT {
 public void testCompactionDuringRebuild() throws Throwable {
 String schemaName = generateUniqueName();
 String tableName = generateUniqueName();
-String indexName = generateUniqueName();
+String indexName1 = generateUniqueName();
+String indexName2 = generateUniqueName();
 final String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
-String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+String fullIndexName1 = SchemaUtil.getTableName(schemaName, 
indexName1);
+String fullIndexName2 = SchemaUtil.getTableName(schemaName, 
indexName2);
 final MyClock clock = new MyClock(1000);
 // Use our own clock to prevent race between partial rebuilder and 
compaction
 EnvironmentEdgeManager.injectEdge(clock);
 try (Connection conn = DriverManager.getConnection(getUrl())) {
 conn.createStatement().execute("CREATE TABLE " + fullTableName + 
"(k INTEGER PRIMARY KEY, v1 INTEGER, v2 INTEGER) COLUMN_ENCODED_BYTES = 0, 
STORE_NULLS=true, GUIDE_POSTS_WIDTH=1000");
 clock.time += 1000;
-conn.createStatement().execute("CREATE INDEX " + indexName + " ON 
" + fullTableName + " (v1) INCLUDE (v2)");
+conn.createStatement().execute("CREATE INDEX " + indexName1 + " ON 
" + fullTableName + " (v1) INCLUDE (v2)");
+clock.time += 1000;
+conn.createStatement().execute("CREATE INDEX " + indexName2 + " ON 
" + fullTableName + " (v2) INCLUDE (v1)");
 clock.time += 1000;
 conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES(1, 2, 3)");
 conn.commit();
 clock.time += 1000;
 long disableTS = EnvironmentEdgeManager.currentTimeMillis();
 HTableInterface metaTable = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
-IndexUtil.updateIndexState(fullIndexName, disableTS, metaTable, 
PIndexState.DISABLE);
-TestUtil.doMajorCompaction(conn, fullIndexName);
-assertFalse(TestUtil.checkIndexState(conn, fullIndexName, 
PIndexState.DISABLE, 0L));
+IndexUtil.updateIndexState(fullIndexName1, disableTS, metaTable, 
PIndexState.DISABLE);
+IndexUtil.updateIndexState(fullIndexName2, disableTS, metaTable, 
PIndexState.DISABLE);
+TestUtil.doMajorCompaction(conn, fullIndexName1);
+assertTrue(TestUtil.checkIndexState(conn, fullIndexName1, 
PIndexState.DISABLE, 0L));
 TestUtil.analyzeTable(conn, fullTableName);
-assertFalse(TestUtil.checkIndexState(conn, fullIndexName, 
PIndexState.DISABLE, 0L));
+assertFalse(TestUtil.checkIndexState(conn, fullIndexName2, 
PIndexState.DISABLE, 0L));
 TestUtil.doMajorCompaction(conn, fullTableName);
-assertTrue(TestUtil.checkIndexState(conn, fullIndexName, 
PIndexState.DISABLE, 0L));
+assertTrue(TestUtil.checkIndexState(conn, fullIndexName2, 
PIndex
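The test in the diff above avoids a race between the partial rebuilder and compaction by injecting its own clock (the test-local `MyClock`) through `EnvironmentEdgeManager.injectEdge`, then advancing time explicitly between DDL, upserts, and compaction. A minimal, self-contained sketch of that injectable-clock pattern follows; the `EdgeManager` and `ManualClock` names here are illustrative stand-ins, not Phoenix classes:

```java
// Sketch of an injectable clock ("environment edge"), as used by the test above.
// EdgeManager/ManualClock are illustrative; Phoenix's real classes are
// EnvironmentEdgeManager and the test-local MyClock.
public class ManualClockDemo {
    interface Edge { long currentTimeMillis(); }

    static final class EdgeManager {
        // Defaults to the wall clock; tests swap in a deterministic edge.
        private static volatile Edge edge = System::currentTimeMillis;
        static void inject(Edge e) { edge = e; }
        static long now() { return edge.currentTimeMillis(); }
    }

    static final class ManualClock implements Edge {
        long time;
        ManualClock(long start) { this.time = start; }
        @Override public long currentTimeMillis() { return time; }
    }

    public static void main(String[] args) {
        ManualClock clock = new ManualClock(1000);
        EdgeManager.inject(clock);      // all timestamp reads now deterministic
        long t1 = EdgeManager.now();
        clock.time += 1000;             // advance time explicitly, as the test does
        long t2 = EdgeManager.now();
        System.out.println(t2 - t1);    // prints 1000
    }
}
```

Because every timestamp read goes through the injected edge, the test controls exactly which operations appear "later" than others, so the rebuilder and compaction cannot interleave nondeterministically.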

phoenix git commit: PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum)

2017-09-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 d82e55168 -> 131abf8ff


PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction 
(addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/131abf8f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/131abf8f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/131abf8f

Branch: refs/heads/4.x-HBase-1.1
Commit: 131abf8ff786acd8a889303b99aafbff1205
Parents: d82e551
Author: James Taylor 
Authored: Tue Sep 5 13:05:17 2017 -0700
Committer: James Taylor 
Committed: Tue Sep 5 14:23:18 2017 -0700

--
 .../end2end/index/PartialIndexRebuilderIT.java  | 21 +++--
 .../UngroupedAggregateRegionObserver.java   |  2 +-
 .../stats/DefaultStatisticsCollector.java   | 83 ++--
 .../schema/stats/NoOpStatisticsCollector.java   |  2 +-
 .../schema/stats/StatisticsCollector.java   |  2 +-
 5 files changed, 75 insertions(+), 35 deletions(-)
--



phoenix git commit: PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum)

2017-09-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 0ae4765ec -> bf4262a99


PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction 
(addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bf4262a9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bf4262a9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bf4262a9

Branch: refs/heads/4.x-HBase-0.98
Commit: bf4262a991cfa8d369b3669d1086c13d5276dfbc
Parents: 0ae4765
Author: James Taylor 
Authored: Tue Sep 5 13:05:17 2017 -0700
Committer: James Taylor 
Committed: Tue Sep 5 14:21:59 2017 -0700

--
 .../end2end/index/PartialIndexRebuilderIT.java  | 21 +++--
 .../UngroupedAggregateRegionObserver.java   |  2 +-
 .../stats/DefaultStatisticsCollector.java   | 83 ++--
 .../schema/stats/NoOpStatisticsCollector.java   |  2 +-
 .../schema/stats/StatisticsCollector.java   |  2 +-
 5 files changed, 75 insertions(+), 35 deletions(-)
--



phoenix git commit: PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum)

2017-09-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 27ef19f8c -> a1c75a9ec


PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction 
(addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a1c75a9e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a1c75a9e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a1c75a9e

Branch: refs/heads/master
Commit: a1c75a9ec31eb646d7f4e0eb2363e8dc6d465103
Parents: 27ef19f
Author: James Taylor 
Authored: Tue Sep 5 13:05:17 2017 -0700
Committer: James Taylor 
Committed: Tue Sep 5 14:18:40 2017 -0700

--
 .../end2end/index/PartialIndexRebuilderIT.java  | 21 +++--
 .../UngroupedAggregateRegionObserver.java   |  2 +-
 .../stats/DefaultStatisticsCollector.java   | 83 ++--
 .../schema/stats/NoOpStatisticsCollector.java   |  2 +-
 .../schema/stats/StatisticsCollector.java   |  2 +-
 5 files changed, 75 insertions(+), 35 deletions(-)
--



phoenix git commit: PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 2c38afffd -> d82e55168


PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d82e5516
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d82e5516
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d82e5516

Branch: refs/heads/4.x-HBase-1.1
Commit: d82e551685a42ebab1b3f7744a42f90c417b70bf
Parents: 2c38aff
Author: Samarth Jain 
Authored: Tue Sep 5 13:58:48 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 13:58:48 2017 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 58 
 1 file changed, 35 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d82e5516/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index 6fe863c..4b2cdad 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -18,11 +18,26 @@
 
 package org.apache.phoenix.end2end;
 
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.UUID;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
 import org.apache.hadoop.io.NullWritable;
@@ -32,20 +47,14 @@ import 
org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexDBWritable;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
-import org.junit.*;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
 
-import java.io.IOException;
-import java.sql.*;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Properties;
-import java.util.UUID;
+import com.google.common.collect.Maps;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
+public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT 
{
   private final static String SNAPSHOT_NAME = "FOO";
   private static final String FIELD1 = "FIELD1";
   private static final String FIELD2 = "FIELD2";
@@ -58,6 +67,11 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
   private long timestamp;
   private String tableName;
 
+  @BeforeClass
+  public static void doSetup() throws Exception {
+  Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
+  setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+  }
 
   @Test
   public void testMapReduceSnapshots() throws Exception {
@@ -155,7 +169,7 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
 
   assertFalse("Should only have stored" + result.size() + "rows in the 
table for the timestamp!", rs.next());
 } finally {
-  deleteSnapshotAndTable(tableName);
+  deleteSnapshot(tableName);
 }
   }
 
@@ -195,15 +209,13 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
 conn.commit();
   }
 
-  public void deleteSnapshotAndTable(String tableName) throws Exception {
-Connection conn = DriverManager.getConnection(getUrl());
-HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
-admin.deleteSnapshot(SNAPSHOT_NAME);
-
-conn.createStatement().execute("DROP TABLE " + tableName);
-conn.close();
-
-  }
+public void deleteSnapshot(String tableName) throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+HBaseAdmin admin =
+
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
+admin.d
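The refactor in the diff above replaces manual `conn.close()` with try-with-resources, so the connection and `HBaseAdmin` handle are released even when snapshot deletion throws. A minimal sketch of the pattern, using a stand-in `Admin` class instead of the real HBase client:

```java
// Sketch of the try-with-resources cleanup used by deleteSnapshot above.
// Admin here is a stand-in for HBaseAdmin; the real method also closes a
// JDBC Connection in the same try header.
public class TryWithResourcesDemo {
    static final class Admin implements AutoCloseable {
        boolean closed = false;
        void deleteSnapshot(String name) {
            System.out.println("deleted " + name); // real client may throw here
        }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        Admin admin = new Admin();
        try (Admin a = admin) {
            a.deleteSnapshot("FOO");       // cleanup work happens inside the try
        }                                  // close() runs here, success or failure
        System.out.println("closed=" + admin.closed); // prints closed=true
    }
}
```

Resources listed in the try header are closed in reverse order of declaration, which is why the diff declares the connection before the admin handle obtained from it.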

phoenix git commit: PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 811bd8b0d -> 0ae4765ec


PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0ae4765e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0ae4765e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0ae4765e

Branch: refs/heads/4.x-HBase-0.98
Commit: 0ae4765ec4b384f1beeff66b41a3ce836d81a0b6
Parents: 811bd8b
Author: Samarth Jain 
Authored: Tue Sep 5 13:58:28 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 13:58:28 2017 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 58 
 1 file changed, 35 insertions(+), 23 deletions(-)
--



phoenix git commit: PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master 6e9ce8742 -> 27ef19f8c


PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/27ef19f8
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/27ef19f8
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/27ef19f8

Branch: refs/heads/master
Commit: 27ef19f8ce5b956deb795a45ba04ca865fef7ad9
Parents: 6e9ce87
Author: Samarth Jain 
Authored: Tue Sep 5 13:58:06 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 13:58:06 2017 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 58 
 1 file changed, 35 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/27ef19f8/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index 4cc2a20..cae91a3 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -18,11 +18,26 @@
 
 package org.apache.phoenix.end2end;
 
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.UUID;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
 import org.apache.hadoop.io.NullWritable;
@@ -32,20 +47,14 @@ import 
org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexDBWritable;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
-import org.junit.*;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
 
-import java.io.IOException;
-import java.sql.*;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Properties;
-import java.util.UUID;
+import com.google.common.collect.Maps;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
+public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT {
   private final static String SNAPSHOT_NAME = "FOO";
   private static final String FIELD1 = "FIELD1";
   private static final String FIELD2 = "FIELD2";
@@ -58,6 +67,11 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
   private long timestamp;
   private String tableName;
 
+  @BeforeClass
+  public static void doSetup() throws Exception {
+  Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
+  setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+  }
 
   @Test
   public void testMapReduceSnapshots() throws Exception {
@@ -155,7 +169,7 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
 
   assertFalse("Should only have stored " + result.size() + " rows in the table for the timestamp!", rs.next());
 } finally {
-  deleteSnapshotAndTable(tableName);
+  deleteSnapshot(tableName);
 }
   }
 
@@ -195,15 +209,13 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
 conn.commit();
   }
 
-  public void deleteSnapshotAndTable(String tableName) throws Exception {
-Connection conn = DriverManager.getConnection(getUrl());
-HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
-admin.deleteSnapshot(SNAPSHOT_NAME);
-
-conn.createStatement().execute("DROP TABLE " + tableName);
-conn.close();
-
-  }
+public void deleteSnapshot(String tableName) throws Exception {
+    try (Connection conn = DriverManager.getConnection(getUrl());
+        HBaseAdmin admin =
+            conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
+        admin.deleteSnapshot(SNAPSHOT_NAME);
+    }
+}
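This email's diff is truncated by the archive, but the visible refactor swaps manual Connection/HBaseAdmin cleanup for try-with-resources, which guarantees both handles are released even when deleteSnapshot throws. A minimal standalone sketch of the mechanism (hypothetical Resource class, not Phoenix or HBase API):

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    // Records close order so the behavior is observable.
    static final List<String> closed = new ArrayList<>();

    // Hypothetical stand-in for Connection/HBaseAdmin.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Resource conn = new Resource("conn");
             Resource admin = new Resource("admin")) {
            // Body runs with both resources open; even if it threw,
            // both close() calls would still happen.
        }
        // Resources close in reverse declaration order.
        System.out.println(closed); // prints [admin, conn]
    }
}
```

The reverse close order matters here: the admin handle obtained from the connection is closed before the connection itself.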

phoenix git commit: PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 4461bc54d -> df3cc354c


PHOENIX-4141 Fix flapping TableSnapshotReadsMapReduceIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/df3cc354
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/df3cc354
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/df3cc354

Branch: refs/heads/4.x-HBase-1.2
Commit: df3cc354ca3512a7c2385e59ee5052039243442b
Parents: 4461bc5
Author: Samarth Jain 
Authored: Tue Sep 5 13:59:14 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 13:59:14 2017 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 58 
 1 file changed, 35 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/df3cc354/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index 4cc2a20..cae91a3 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -18,11 +18,26 @@
 
 package org.apache.phoenix.end2end;
 
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.UUID;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
 import org.apache.hadoop.io.NullWritable;
@@ -32,20 +47,14 @@ import 
org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexDBWritable;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
-import org.junit.*;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
 
-import java.io.IOException;
-import java.sql.*;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Properties;
-import java.util.UUID;
+import com.google.common.collect.Maps;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
+public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT {
   private final static String SNAPSHOT_NAME = "FOO";
   private static final String FIELD1 = "FIELD1";
   private static final String FIELD2 = "FIELD2";
@@ -58,6 +67,11 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
   private long timestamp;
   private String tableName;
 
+  @BeforeClass
+  public static void doSetup() throws Exception {
+  Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
+  setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+  }
 
   @Test
   public void testMapReduceSnapshots() throws Exception {
@@ -155,7 +169,7 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
 
   assertFalse("Should only have stored " + result.size() + " rows in the table for the timestamp!", rs.next());
 } finally {
-  deleteSnapshotAndTable(tableName);
+  deleteSnapshot(tableName);
 }
   }
 
@@ -195,15 +209,13 @@ public class TableSnapshotReadsMapReduceIT extends 
ParallelStatsDisabledIT {
 conn.commit();
   }
 
-  public void deleteSnapshotAndTable(String tableName) throws Exception {
-Connection conn = DriverManager.getConnection(getUrl());
-HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
-admin.deleteSnapshot(SNAPSHOT_NAME);
-
-conn.createStatement().execute("DROP TABLE " + tableName);
-conn.close();
-
-  }
+public void deleteSnapshot(String tableName) throws Exception {
+    try (Connection conn = DriverManager.getConnection(getUrl());
+        HBaseAdmin admin =
+            conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
+        admin.deleteSnapshot(SNAPSHOT_NAME);
+    }
+}

phoenix git commit: PHOENIX-4156 Fix flapping MutableIndexFailureIT

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 9f282eff4 -> 811bd8b0d


PHOENIX-4156 Fix flapping MutableIndexFailureIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/811bd8b0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/811bd8b0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/811bd8b0

Branch: refs/heads/4.x-HBase-0.98
Commit: 811bd8b0d85bd6e083c5aa8600a5e2451a138fe3
Parents: 9f282ef
Author: Samarth Jain 
Authored: Tue Sep 5 13:51:52 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 13:51:52 2017 -0700

--
 .../end2end/index/MutableIndexFailureIT.java| 79 +---
 .../coprocessor/MetaDataRegionObserver.java | 12 ++-
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 4 files changed, 82 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/811bd8b0/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index 8bab163..462916e 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -44,15 +44,21 @@ import 
org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.coprocessor.MetaDataRegionObserver;
+import 
org.apache.phoenix.coprocessor.MetaDataRegionObserver.BuildIndexScheduleTask;
 import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.execute.CommitException;
 import org.apache.phoenix.hbase.index.write.IndexWriterUtils;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.util.MetaDataUtil;
@@ -100,6 +106,10 @@ public class MutableIndexFailureIT extends BaseTest {
 private final boolean throwIndexWriteFailure;
 private String schema = generateUniqueName();
 private List exceptions = Lists.newArrayList();
+private static RegionCoprocessorEnvironment 
indexRebuildTaskRegionEnvironment;
+private static final int forwardOverlapMs = 1000;
+private static final int disableTimestampThresholdMs = 1;
+private static final int numRpcRetries = 2;
 
 public MutableIndexFailureIT(boolean transactional, boolean localIndex, 
boolean isNamespaceMapped, Boolean disableIndexOnWriteFailure, Boolean 
rebuildIndexOnWriteFailure, boolean failRebuildTask, Boolean 
throwIndexWriteFailure) {
 this.transactional = transactional;
@@ -128,15 +138,27 @@ public class MutableIndexFailureIT extends BaseTest {
 serverProps.put(IndexWriterUtils.INDEX_WRITER_RPC_PAUSE, "5000");
 serverProps.put("data.tx.snapshot.dir", "/tmp");
 serverProps.put("hbase.balancer.period", 
String.valueOf(Integer.MAX_VALUE));
-serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_ATTRIB, 
Boolean.TRUE.toString());
-
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, 
"4000");
-
serverProps.put(QueryServices.INDEX_REBUILD_DISABLE_TIMESTAMP_THRESHOLD, 
"3"); // give up rebuilding after 30 seconds
 // need to override rpc retries otherwise test doesn't pass
-serverProps.put(QueryServices.INDEX_REBUILD_RPC_RETRIES_COUNTER, 
Long.toString(1));
-
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(1000));
+serverProps.put(QueryServices.INDEX_REBUILD_RPC_RETRIES_COUNTER, 
Long.toString(numRpcRetries));
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(forwardOverlapMs));
+/*
+ * Effectively disable running the index rebuild task by having an 
infinite delay
+ * because we want to
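The setup in this commit (truncated here by the archive) funnels all test-specific tuning through a serverProps map handed to the mini-cluster before it starts. A sketch of the same pattern with placeholder keys; the real test reads its keys from QueryServices constants such as INDEX_REBUILD_RPC_RETRIES_COUNTER, and the values below are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

public class ServerPropsDemo {
    // Collect configuration overrides for a test cluster into a string map.
    // Keys here are placeholders (except hbase.balancer.period, which appears
    // in the diff above); real Phoenix tests use QueryServices constants.
    static Map<String, String> buildServerProps(int rpcRetries, long overlapForwardMs) {
        Map<String, String> serverProps = new HashMap<>();
        // Keep the balancer from interfering with region placement mid-test.
        serverProps.put("hbase.balancer.period", String.valueOf(Integer.MAX_VALUE));
        // Fewer RPC retries make index-write failures surface quickly.
        serverProps.put("index.rebuild.rpc.retries", Long.toString(rpcRetries));
        // Replay a little extra history when rebuilding after a failure.
        serverProps.put("index.rebuild.overlap.forward.time.ms", Long.toString(overlapForwardMs));
        return serverProps;
    }

    public static void main(String[] args) {
        Map<String, String> props = buildServerProps(2, 1000L);
        System.out.println(props.size()); // 3
    }
}
```

Centralizing the overrides in named constants (numRpcRetries, forwardOverlapMs) as the diff does keeps the test assertions and the cluster configuration from drifting apart.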

phoenix git commit: PHOENIX-4156 Fix flapping MutableIndexFailureIT

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 63779600d -> 2c38afffd


PHOENIX-4156 Fix flapping MutableIndexFailureIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2c38afff
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2c38afff
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2c38afff

Branch: refs/heads/4.x-HBase-1.1
Commit: 2c38afffd1a9363bc7892706eb69bc476a634e08
Parents: 6377960
Author: Samarth Jain 
Authored: Tue Sep 5 13:51:26 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 13:51:26 2017 -0700

--
 .../end2end/index/MutableIndexFailureIT.java| 79 +---
 .../coprocessor/MetaDataRegionObserver.java | 12 ++-
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 4 files changed, 82 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2c38afff/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index 0abd5ae..5797819 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -44,15 +44,21 @@ import 
org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.coprocessor.MetaDataRegionObserver;
+import 
org.apache.phoenix.coprocessor.MetaDataRegionObserver.BuildIndexScheduleTask;
 import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.execute.CommitException;
 import org.apache.phoenix.hbase.index.write.IndexWriterUtils;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.util.MetaDataUtil;
@@ -102,6 +108,10 @@ public class MutableIndexFailureIT extends BaseTest {
 private final boolean throwIndexWriteFailure;
 private String schema = generateUniqueName();
 private List exceptions = Lists.newArrayList();
+private static RegionCoprocessorEnvironment 
indexRebuildTaskRegionEnvironment;
+private static final int forwardOverlapMs = 1000;
+private static final int disableTimestampThresholdMs = 1;
+private static final int numRpcRetries = 2;
 
 public MutableIndexFailureIT(boolean transactional, boolean localIndex, 
boolean isNamespaceMapped, Boolean disableIndexOnWriteFailure, Boolean 
rebuildIndexOnWriteFailure, boolean failRebuildTask, Boolean 
throwIndexWriteFailure) {
 this.transactional = transactional;
@@ -130,15 +140,27 @@ public class MutableIndexFailureIT extends BaseTest {
 serverProps.put(IndexWriterUtils.INDEX_WRITER_RPC_PAUSE, "5000");
 serverProps.put("data.tx.snapshot.dir", "/tmp");
 serverProps.put("hbase.balancer.period", 
String.valueOf(Integer.MAX_VALUE));
-serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_ATTRIB, 
Boolean.TRUE.toString());
-
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, 
"4000");
-
serverProps.put(QueryServices.INDEX_REBUILD_DISABLE_TIMESTAMP_THRESHOLD, 
"3"); // give up rebuilding after 30 seconds
 // need to override rpc retries otherwise test doesn't pass
-serverProps.put(QueryServices.INDEX_REBUILD_RPC_RETRIES_COUNTER, 
Long.toString(1));
-
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(1000));
+serverProps.put(QueryServices.INDEX_REBUILD_RPC_RETRIES_COUNTER, 
Long.toString(numRpcRetries));
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(forwardOverlapMs));
+/*
+ * Effectively disable running the index rebuild task by having an 
infinite delay
+ * because we want to c

phoenix git commit: PHOENIX-4156 Fix flapping MutableIndexFailureIT

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 f1d2b6f03 -> 4461bc54d


PHOENIX-4156 Fix flapping MutableIndexFailureIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4461bc54
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4461bc54
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4461bc54

Branch: refs/heads/4.x-HBase-1.2
Commit: 4461bc54dba84352ab47e1a72672fc6bac31a8dd
Parents: f1d2b6f
Author: Samarth Jain 
Authored: Tue Sep 5 13:51:01 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 13:51:01 2017 -0700

--
 .../end2end/index/MutableIndexFailureIT.java| 79 +---
 .../coprocessor/MetaDataRegionObserver.java | 12 ++-
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 4 files changed, 82 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4461bc54/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index f8697b1..ee6f6e5 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -44,15 +44,21 @@ import 
org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.coprocessor.MetaDataRegionObserver;
+import 
org.apache.phoenix.coprocessor.MetaDataRegionObserver.BuildIndexScheduleTask;
 import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.execute.CommitException;
 import org.apache.phoenix.hbase.index.write.IndexWriterUtils;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.util.MetaDataUtil;
@@ -100,6 +106,10 @@ public class MutableIndexFailureIT extends BaseTest {
 private final boolean throwIndexWriteFailure;
 private String schema = generateUniqueName();
 private List exceptions = Lists.newArrayList();
+private static RegionCoprocessorEnvironment 
indexRebuildTaskRegionEnvironment;
+private static final int forwardOverlapMs = 1000;
+private static final int disableTimestampThresholdMs = 1;
+private static final int numRpcRetries = 2;
 
 public MutableIndexFailureIT(boolean transactional, boolean localIndex, 
boolean isNamespaceMapped, Boolean disableIndexOnWriteFailure, Boolean 
rebuildIndexOnWriteFailure, boolean failRebuildTask, Boolean 
throwIndexWriteFailure) {
 this.transactional = transactional;
@@ -128,15 +138,27 @@ public class MutableIndexFailureIT extends BaseTest {
 serverProps.put(IndexWriterUtils.INDEX_WRITER_RPC_PAUSE, "5000");
 serverProps.put("data.tx.snapshot.dir", "/tmp");
 serverProps.put("hbase.balancer.period", 
String.valueOf(Integer.MAX_VALUE));
-serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_ATTRIB, 
Boolean.TRUE.toString());
-
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, 
"4000");
-
serverProps.put(QueryServices.INDEX_REBUILD_DISABLE_TIMESTAMP_THRESHOLD, 
"3"); // give up rebuilding after 30 seconds
 // need to override rpc retries otherwise test doesn't pass
-serverProps.put(QueryServices.INDEX_REBUILD_RPC_RETRIES_COUNTER, 
Long.toString(1));
-
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(1000));
+serverProps.put(QueryServices.INDEX_REBUILD_RPC_RETRIES_COUNTER, 
Long.toString(numRpcRetries));
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(forwardOverlapMs));
+/*
+ * Effectively disable running the index rebuild task by having an 
infinite delay
+ * because we want to c

phoenix git commit: PHOENIX-4156 Fix flapping MutableIndexFailureIT

2017-09-05 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master 839be97e9 -> 6e9ce8742


PHOENIX-4156 Fix flapping MutableIndexFailureIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6e9ce874
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6e9ce874
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6e9ce874

Branch: refs/heads/master
Commit: 6e9ce8742e73eccb0d7678ae8f09be9e76b9be98
Parents: 839be97
Author: Samarth Jain 
Authored: Tue Sep 5 13:50:28 2017 -0700
Committer: Samarth Jain 
Committed: Tue Sep 5 13:50:36 2017 -0700

--
 .../end2end/index/MutableIndexFailureIT.java| 79 +---
 .../coprocessor/MetaDataRegionObserver.java | 12 ++-
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../phoenix/query/QueryServicesOptions.java |  1 +
 4 files changed, 82 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6e9ce874/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index a1e2b9e..1f425cf 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -44,15 +44,21 @@ import 
org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.coprocessor.MetaDataRegionObserver;
+import 
org.apache.phoenix.coprocessor.MetaDataRegionObserver.BuildIndexScheduleTask;
 import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.execute.CommitException;
 import org.apache.phoenix.hbase.index.write.IndexWriterUtils;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.util.MetaDataUtil;
@@ -100,6 +106,10 @@ public class MutableIndexFailureIT extends BaseTest {
 private final boolean throwIndexWriteFailure;
 private String schema = generateUniqueName();
 private List exceptions = Lists.newArrayList();
+private static RegionCoprocessorEnvironment 
indexRebuildTaskRegionEnvironment;
+private static final int forwardOverlapMs = 1000;
+private static final int disableTimestampThresholdMs = 1;
+private static final int numRpcRetries = 2;
 
 public MutableIndexFailureIT(boolean transactional, boolean localIndex, 
boolean isNamespaceMapped, Boolean disableIndexOnWriteFailure, Boolean 
rebuildIndexOnWriteFailure, boolean failRebuildTask, Boolean 
throwIndexWriteFailure) {
 this.transactional = transactional;
@@ -128,15 +138,27 @@ public class MutableIndexFailureIT extends BaseTest {
 serverProps.put(IndexWriterUtils.INDEX_WRITER_RPC_PAUSE, "5000");
 serverProps.put("data.tx.snapshot.dir", "/tmp");
 serverProps.put("hbase.balancer.period", 
String.valueOf(Integer.MAX_VALUE));
-serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_ATTRIB, 
Boolean.TRUE.toString());
-
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, 
"4000");
-
serverProps.put(QueryServices.INDEX_REBUILD_DISABLE_TIMESTAMP_THRESHOLD, 
"3"); // give up rebuilding after 30 seconds
 // need to override rpc retries otherwise test doesn't pass
-serverProps.put(QueryServices.INDEX_REBUILD_RPC_RETRIES_COUNTER, 
Long.toString(1));
-
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(1000));
+serverProps.put(QueryServices.INDEX_REBUILD_RPC_RETRIES_COUNTER, 
Long.toString(numRpcRetries));
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_OVERLAP_FORWARD_TIME_ATTRIB,
 Long.toString(forwardOverlapMs));
+/*
+ * Effectively disable running the index rebuild task by having an 
infinite delay
+ * because we want to control its ex

[4/4] phoenix git commit: PHOENIX-4068 Atomic Upsert salted table with error(java.lang.NullPointerException)

2017-09-05 Thread ssa
PHOENIX-4068 Atomic Upsert salted table with 
error(java.lang.NullPointerException)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f1d2b6f0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f1d2b6f0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f1d2b6f0

Branch: refs/heads/4.x-HBase-1.2
Commit: f1d2b6f03123eee2f49a02cd442f58a2ad0a3694
Parents: 41c0521
Author: Sergey Soldatov 
Authored: Thu Aug 10 22:06:49 2017 -0700
Committer: Sergey Soldatov 
Committed: Tue Sep 5 13:39:49 2017 -0700

--
 .../phoenix/end2end/OnDuplicateKeyIT.java   | 33 +++-
 .../apache/phoenix/compile/UpsertCompiler.java  |  9 +++---
 .../phoenix/index/PhoenixIndexBuilder.java  |  3 +-
 3 files changed, 39 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f1d2b6f0/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
index 2477f56..f1ee0e7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
@@ -21,6 +21,7 @@ import static 
org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -549,6 +550,36 @@ public class OnDuplicateKeyIT extends 
ParallelStatsDisabledIT {
 
 conn.close();
 }
-
+@Test
+public void testDuplicateUpdateWithSaltedTable() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+final Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+try {
+String ddl = "create table " + tableName + " (id varchar not null,id1 varchar not null, counter1 bigint, counter2 bigint CONSTRAINT pk PRIMARY KEY (id,id1)) SALT_BUCKETS=6";
+conn.createStatement().execute(ddl);
+createIndex(conn, tableName);
+String dml = "UPSERT INTO " + tableName + " (id,id1, counter1, counter2) VALUES ('abc','123', 0, 0) ON DUPLICATE KEY UPDATE counter1 = counter1 + 1, counter2 = counter2 + 1";
+conn.createStatement().execute(dml);
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName);
+assertTrue(rs.next());
+assertEquals("0",rs.getString(3));
+assertEquals("0",rs.getString(4));
+conn.createStatement().execute(dml);
+conn.commit();
+rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName);
+assertTrue(rs.next());
+assertEquals("1",rs.getString(3));
+assertEquals("1",rs.getString(4));
+
+} catch (Exception e) {
+fail();
+} finally {
+conn.close();
+}
+}
+
+
 }
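For context on why the compiler change in this commit special-cases SALT_BUCKETS: a salted Phoenix table prepends a hash-derived salt byte to every row key, so the first physical key column is the salt rather than the first declared PK column, and PK positions shift by one. A toy illustration of the idea (not Phoenix's actual hash or API; the real logic lives in SaltingUtil):

```java
public class SaltByteDemo {
    // Toy salt computation: hash the row key bytes and bucket the result.
    // Only illustrates the "salt byte occupies position 0" idea; Phoenix's
    // real hash differs.
    static byte saltByte(byte[] rowKey, int saltBuckets) {
        int hash = 0;
        for (byte b : rowKey) {
            hash = 31 * hash + b;  // simple deterministic hash
        }
        return (byte) Math.abs(hash % saltBuckets);
    }

    public static void main(String[] args) {
        byte[] key = "abc123".getBytes();
        byte salt = saltByte(key, 6);  // SALT_BUCKETS=6, as in the test DDL
        // The physical key is salt + declared PK columns, so declared PK
        // columns start at position 1 when the table is salted - which is
        // why the upsert compiler offsets `position` for salted tables.
        System.out.println(salt >= 0 && salt < 6); // true
    }
}
```

The same row key always lands in the same bucket, so reads can recompute the salt; the NPE arose when the ON DUPLICATE KEY path assumed the declared first PK column sat at physical position 0.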
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/f1d2b6f0/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index 0d09e9d..c384292 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -916,15 +916,16 @@ public class UpsertCompiler {
 }
 if (onDupKeyPairs.isEmpty()) { // ON DUPLICATE KEY IGNORE
 onDupKeyBytesToBe = 
PhoenixIndexBuilder.serializeOnDupKeyIgnore();
-} else {   // ON DUPLICATE KEY UPDATE
-int position = 1;
+} else {   // ON DUPLICATE KEY UPDATE;
+int position = table.getBucketNum() == null ? 0 : 1;
 UpdateColumnCompiler compiler = new 
UpdateColumnCompiler(context);
 int nColumns = onDupKeyPairs.size();
 List updateExpressions = 
Lists.newArrayListWithExpectedSize(nColumns);
 LinkedHashSet updateColumns = 
Sets.newLinkedHashSetWithExpectedSize(nColumns + 1);
 updateColumns.add(new PColumnImpl(
-table.getPKColumns().get(0).getName(), // Use first PK 
column name as we know it 

[2/4] phoenix git commit: PHOENIX-4068 Atomic Upsert salted table with error(java.lang.NullPointerException)

2017-09-05 Thread ssa
PHOENIX-4068 Atomic Upsert salted table with error(java.lang.NullPointerException)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9f282eff
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9f282eff
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9f282eff

Branch: refs/heads/4.x-HBase-0.98
Commit: 9f282eff45931d5fab3a59a2c81f1c6ea3e8bb96
Parents: 7513d66
Author: Sergey Soldatov 
Authored: Thu Aug 10 22:06:49 2017 -0700
Committer: Sergey Soldatov 
Committed: Tue Sep 5 13:39:31 2017 -0700

--
 .../phoenix/end2end/OnDuplicateKeyIT.java   | 33 +++-
 .../apache/phoenix/compile/UpsertCompiler.java  |  9 +++---
 .../phoenix/index/PhoenixIndexBuilder.java  |  3 +-
 3 files changed, 39 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f282eff/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
index 2477f56..f1ee0e7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
@@ -21,6 +21,7 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -549,6 +550,36 @@ public class OnDuplicateKeyIT extends ParallelStatsDisabledIT {
 
 conn.close();
 }
-
+@Test
+public void testDuplicateUpdateWithSaltedTable() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+final Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+try {
+String ddl = "create table " + tableName + " (id varchar not 
null,id1 varchar not null, counter1 bigint, counter2 bigint CONSTRAINT pk 
PRIMARY KEY (id,id1)) SALT_BUCKETS=6";
+conn.createStatement().execute(ddl);
+createIndex(conn, tableName);
+String dml = "UPSERT INTO " + tableName + " (id,id1, counter1, 
counter2) VALUES ('abc','123', 0, 0) ON DUPLICATE KEY UPDATE counter1 = 
counter1 + 1, counter2 = counter2 + 1";
+conn.createStatement().execute(dml);
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM 
" + tableName);
+assertTrue(rs.next());
+assertEquals("0",rs.getString(3));
+assertEquals("0",rs.getString(4));
+conn.createStatement().execute(dml);
+conn.commit();
+rs = conn.createStatement().executeQuery("SELECT * FROM " + 
tableName);
+assertTrue(rs.next());
+assertEquals("1",rs.getString(3));
+assertEquals("1",rs.getString(4));
+
+} catch (Exception e) {
+fail();
+} finally {
+conn.close();
+}
+}
+
+
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f282eff/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index 1669ab9..763c81a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -916,15 +916,16 @@ public class UpsertCompiler {
 }
 if (onDupKeyPairs.isEmpty()) { // ON DUPLICATE KEY IGNORE
 onDupKeyBytesToBe = PhoenixIndexBuilder.serializeOnDupKeyIgnore();
-} else {   // ON DUPLICATE KEY UPDATE
-int position = 1;
+} else {   // ON DUPLICATE KEY UPDATE;
+int position = table.getBucketNum() == null ? 0 : 1;
 UpdateColumnCompiler compiler = new UpdateColumnCompiler(context);
 int nColumns = onDupKeyPairs.size();
 List updateExpressions = Lists.newArrayListWithExpectedSize(nColumns);
 LinkedHashSet updateColumns = Sets.newLinkedHashSetWithExpectedSize(nColumns + 1);
 updateColumns.add(new PColumnImpl(
-table.getPKColumns().get(0).getName(), // Use first PK column name as we know it

[3/4] phoenix git commit: PHOENIX-4068 Atomic Upsert salted table with error(java.lang.NullPointerException)

2017-09-05 Thread ssa
PHOENIX-4068 Atomic Upsert salted table with error(java.lang.NullPointerException)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/63779600
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/63779600
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/63779600

Branch: refs/heads/4.x-HBase-1.1
Commit: 63779600dd0d2df3df5d443de631fd6f00dd0304
Parents: cb12016
Author: Sergey Soldatov 
Authored: Thu Aug 10 22:06:49 2017 -0700
Committer: Sergey Soldatov 
Committed: Tue Sep 5 13:39:42 2017 -0700

--
 .../phoenix/end2end/OnDuplicateKeyIT.java   | 33 +++-
 .../apache/phoenix/compile/UpsertCompiler.java  |  9 +++---
 .../phoenix/index/PhoenixIndexBuilder.java  |  3 +-
 3 files changed, 39 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/63779600/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
index 2477f56..f1ee0e7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
@@ -21,6 +21,7 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -549,6 +550,36 @@ public class OnDuplicateKeyIT extends ParallelStatsDisabledIT {
 
 conn.close();
 }
-
+@Test
+public void testDuplicateUpdateWithSaltedTable() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+final Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+try {
+String ddl = "create table " + tableName + " (id varchar not 
null,id1 varchar not null, counter1 bigint, counter2 bigint CONSTRAINT pk 
PRIMARY KEY (id,id1)) SALT_BUCKETS=6";
+conn.createStatement().execute(ddl);
+createIndex(conn, tableName);
+String dml = "UPSERT INTO " + tableName + " (id,id1, counter1, 
counter2) VALUES ('abc','123', 0, 0) ON DUPLICATE KEY UPDATE counter1 = 
counter1 + 1, counter2 = counter2 + 1";
+conn.createStatement().execute(dml);
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM 
" + tableName);
+assertTrue(rs.next());
+assertEquals("0",rs.getString(3));
+assertEquals("0",rs.getString(4));
+conn.createStatement().execute(dml);
+conn.commit();
+rs = conn.createStatement().executeQuery("SELECT * FROM " + 
tableName);
+assertTrue(rs.next());
+assertEquals("1",rs.getString(3));
+assertEquals("1",rs.getString(4));
+
+} catch (Exception e) {
+fail();
+} finally {
+conn.close();
+}
+}
+
+
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/63779600/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index 0d09e9d..c384292 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -916,15 +916,16 @@ public class UpsertCompiler {
 }
 if (onDupKeyPairs.isEmpty()) { // ON DUPLICATE KEY IGNORE
 onDupKeyBytesToBe = PhoenixIndexBuilder.serializeOnDupKeyIgnore();
-} else {   // ON DUPLICATE KEY UPDATE
-int position = 1;
+} else {   // ON DUPLICATE KEY UPDATE;
+int position = table.getBucketNum() == null ? 0 : 1;
 UpdateColumnCompiler compiler = new UpdateColumnCompiler(context);
 int nColumns = onDupKeyPairs.size();
 List updateExpressions = Lists.newArrayListWithExpectedSize(nColumns);
 LinkedHashSet updateColumns = Sets.newLinkedHashSetWithExpectedSize(nColumns + 1);
 updateColumns.add(new PColumnImpl(
-table.getPKColumns().get(0).getName(), // Use first PK column name as we know it

[1/4] phoenix git commit: PHOENIX-4068 Atomic Upsert salted table with error(java.lang.NullPointerException)

2017-09-05 Thread ssa
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 7513d663f -> 9f282eff4
  refs/heads/4.x-HBase-1.1 cb1201620 -> 63779600d
  refs/heads/4.x-HBase-1.2 41c05215c -> f1d2b6f03
  refs/heads/master cec7e1cf8 -> 839be97e9


PHOENIX-4068 Atomic Upsert salted table with error(java.lang.NullPointerException)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/839be97e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/839be97e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/839be97e

Branch: refs/heads/master
Commit: 839be97e9ef2d23d3e9713313d4a93521bc74028
Parents: cec7e1c
Author: Sergey Soldatov 
Authored: Thu Aug 10 22:06:49 2017 -0700
Committer: Sergey Soldatov 
Committed: Tue Sep 5 13:31:30 2017 -0700

--
 .../phoenix/end2end/OnDuplicateKeyIT.java   | 33 +++-
 .../apache/phoenix/compile/UpsertCompiler.java  |  9 +++---
 .../phoenix/index/PhoenixIndexBuilder.java  |  3 +-
 3 files changed, 39 insertions(+), 6 deletions(-)
--
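For background on the salting involved in this fix: a salted Phoenix table prepends a single byte to every row key, derived from a hash of the key modulo SALT_BUCKETS, to spread sequential writes across regions. A rough illustration only; the hash below is a stand-in, not Phoenix's actual salting function:

```java
import java.nio.charset.StandardCharsets;

public class SaltByteSketch {
    // Illustrative only: derive a bucket in [0, buckets) from the key bytes.
    static byte saltByte(byte[] rowKey, int buckets) {
        int hash = 1;
        for (byte b : rowKey) {
            hash = 31 * hash + b;
        }
        // floorMod keeps the result non-negative for negative hashes.
        return (byte) Math.floorMod(hash, buckets);
    }

    public static void main(String[] args) {
        // The test in this commit uses SALT_BUCKETS=6 and PK ('abc','123').
        byte salt = saltByte("abc123".getBytes(StandardCharsets.UTF_8), 6);
        System.out.println(salt >= 0 && salt < 6);
    }
}
```

That extra leading byte is exactly why the UpsertCompiler change below offsets the ON DUPLICATE KEY column position by one on salted tables.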


http://git-wip-us.apache.org/repos/asf/phoenix/blob/839be97e/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
index 2477f56..f1ee0e7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
@@ -21,6 +21,7 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -549,6 +550,36 @@ public class OnDuplicateKeyIT extends ParallelStatsDisabledIT {
 
 conn.close();
 }
-
+@Test
+public void testDuplicateUpdateWithSaltedTable() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+final Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+try {
+String ddl = "create table " + tableName + " (id varchar not 
null,id1 varchar not null, counter1 bigint, counter2 bigint CONSTRAINT pk 
PRIMARY KEY (id,id1)) SALT_BUCKETS=6";
+conn.createStatement().execute(ddl);
+createIndex(conn, tableName);
+String dml = "UPSERT INTO " + tableName + " (id,id1, counter1, 
counter2) VALUES ('abc','123', 0, 0) ON DUPLICATE KEY UPDATE counter1 = 
counter1 + 1, counter2 = counter2 + 1";
+conn.createStatement().execute(dml);
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM 
" + tableName);
+assertTrue(rs.next());
+assertEquals("0",rs.getString(3));
+assertEquals("0",rs.getString(4));
+conn.createStatement().execute(dml);
+conn.commit();
+rs = conn.createStatement().executeQuery("SELECT * FROM " + 
tableName);
+assertTrue(rs.next());
+assertEquals("1",rs.getString(3));
+assertEquals("1",rs.getString(4));
+
+} catch (Exception e) {
+fail();
+} finally {
+conn.close();
+}
+}
+
+
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/839be97e/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index 0d09e9d..c384292 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -916,15 +916,16 @@ public class UpsertCompiler {
 }
 if (onDupKeyPairs.isEmpty()) { // ON DUPLICATE KEY IGNORE
 onDupKeyBytesToBe = PhoenixIndexBuilder.serializeOnDupKeyIgnore();
-} else {   // ON DUPLICATE KEY UPDATE
-int position = 1;
+} else {   // ON DUPLICATE KEY UPDATE;
+int position = table.getBucketNum() == null ? 0 : 1;
 UpdateColumnCompiler compiler = new UpdateColumnCompiler(context);
 int nColumns = onDupKeyPairs.size();
 List updateExpressions = Lists.newArrayListWithExpectedSize(nColumns);
 LinkedHashSet updateColumns = Sets.newLinkedHashSetWithExpectedSize(nColumns + 1);

[4/4] phoenix git commit: PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP

2017-09-05 Thread ssa
PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cec7e1cf
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cec7e1cf
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cec7e1cf

Branch: refs/heads/master
Commit: cec7e1cf8794e7ec0ee5c8be9a32e33cd211ec3b
Parents: c8cbb5e
Author: Sergey Soldatov 
Authored: Tue Oct 25 14:09:54 2016 -0700
Committer: Sergey Soldatov 
Committed: Tue Sep 5 12:46:47 2017 -0700

--
 .../phoenix/end2end/CsvBulkLoadToolIT.java  | 38 
 .../mapreduce/FormatToBytesWritableMapper.java  |  1 +
 .../mapreduce/FormatToKeyValueReducer.java  |  7 ++--
 3 files changed, 43 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cec7e1cf/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 5a186a0..40fe900 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -92,6 +92,44 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 rs.close();
 stmt.close();
 }
+@Test
+public void testImportWithRowTimestamp() throws Exception {
+
+Statement stmt = conn.createStatement();
+stmt.execute("CREATE TABLE S.TABLE9 (ID INTEGER NOT NULL , NAME 
VARCHAR, T DATE NOT NULL," +
+" " +
+"CONSTRAINT PK PRIMARY KEY (ID, T ROW_TIMESTAMP))");
+
+FileSystem fs = FileSystem.get(getUtility().getConfiguration());
+FSDataOutputStream outputStream = fs.create(new 
Path("/tmp/input1.csv"));
+PrintWriter printWriter = new PrintWriter(outputStream);
+printWriter.println("1,Name 1,1970/01/01");
+printWriter.println("2,Name 2,1971/01/01");
+printWriter.println("3,Name 2,1972/01/01");
+printWriter.close();
+
+CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
+csvBulkLoadTool.setConf(new 
Configuration(getUtility().getConfiguration()));
+csvBulkLoadTool.getConf().set(DATE_FORMAT_ATTRIB,"/MM/dd");
+int exitCode = csvBulkLoadTool.run(new String[] {
+"--input", "/tmp/input1.csv",
+"--table", "table9",
+"--schema", "s",
+"--zookeeper", zkQuorum});
+assertEquals(0, exitCode);
+
+ResultSet rs = stmt.executeQuery("SELECT id, name, t FROM s.table9 
WHERE T < to_date" +
+"('1972-01-01') AND T > to_date('1970-01-01') ORDER BY id");
+assertTrue(rs.next());
+assertEquals(2, rs.getInt(1));
+assertEquals("Name 2", rs.getString(2));
+assertEquals(DateUtil.parseDate("1971-01-01"), rs.getDate(3));
+assertFalse(rs.next());
+
+rs.close();
+stmt.close();
+}
+
 
 @Test
 public void testImportWithTabs() throws Exception {
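The testImportWithRowTimestamp diff above combines two pieces: ROW_TIMESTAMP, which maps the date PK column onto the HBase cell timestamp, and DATE_FORMAT_ATTRIB, which tells the bulk-load tool how to parse the CSV dates (four-digit-year slash dates in the input versus yyyy-MM-dd in the verification query). A minimal sketch of that parsing assumption; the helper name is ours, not the tool's API:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class RowTimestampSketch {
    // Parse a CSV date with an explicit pattern, as the bulk-load tool
    // does when DATE_FORMAT_ATTRIB overrides its default date format.
    static long toEpochMillis(String value, String pattern) throws ParseException {
        return new SimpleDateFormat(pattern).parse(value).getTime();
    }

    public static void main(String[] args) throws ParseException {
        // "1971/01/01" is the second CSV row written by the test above.
        long ts = toEpochMillis("1971/01/01", "yyyy/MM/dd");
        // In any timezone this lands strictly after the 1970 epoch.
        System.out.println(ts > 0L);
    }
}
```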

http://git-wip-us.apache.org/repos/asf/phoenix/blob/cec7e1cf/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
index 1dae981..360859e 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
@@ -314,6 +314,7 @@ public abstract class FormatToBytesWritableMapper extends Mapper

http://git-wip-us.apache.org/repos/asf/phoenix/blob/cec7e1cf/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
index 07cf285..72af1a7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
@@ -144,6 +144,7 @@ public class FormatToKeyValueReducer
 DataInputStream input = new DataInputStream(new ByteArrayInputStream(aggregatedArray.get()));
 while (input.available() != 0) {
 byte type = input.readByte();

[3/4] phoenix git commit: PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP

2017-09-05 Thread ssa
PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/41c05215
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/41c05215
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/41c05215

Branch: refs/heads/4.x-HBase-1.2
Commit: 41c05215ca6c4d06db352398065479d6a228b2d8
Parents: 3b8468e
Author: Sergey Soldatov 
Authored: Tue Oct 25 14:09:54 2016 -0700
Committer: Sergey Soldatov 
Committed: Tue Sep 5 12:45:59 2017 -0700

--
 .../phoenix/end2end/CsvBulkLoadToolIT.java  | 38 
 .../mapreduce/FormatToBytesWritableMapper.java  |  1 +
 .../mapreduce/FormatToKeyValueReducer.java  |  7 ++--
 3 files changed, 43 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/41c05215/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 5a186a0..40fe900 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -92,6 +92,44 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 rs.close();
 stmt.close();
 }
+@Test
+public void testImportWithRowTimestamp() throws Exception {
+
+Statement stmt = conn.createStatement();
+stmt.execute("CREATE TABLE S.TABLE9 (ID INTEGER NOT NULL , NAME 
VARCHAR, T DATE NOT NULL," +
+" " +
+"CONSTRAINT PK PRIMARY KEY (ID, T ROW_TIMESTAMP))");
+
+FileSystem fs = FileSystem.get(getUtility().getConfiguration());
+FSDataOutputStream outputStream = fs.create(new 
Path("/tmp/input1.csv"));
+PrintWriter printWriter = new PrintWriter(outputStream);
+printWriter.println("1,Name 1,1970/01/01");
+printWriter.println("2,Name 2,1971/01/01");
+printWriter.println("3,Name 2,1972/01/01");
+printWriter.close();
+
+CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
+csvBulkLoadTool.setConf(new 
Configuration(getUtility().getConfiguration()));
+csvBulkLoadTool.getConf().set(DATE_FORMAT_ATTRIB,"/MM/dd");
+int exitCode = csvBulkLoadTool.run(new String[] {
+"--input", "/tmp/input1.csv",
+"--table", "table9",
+"--schema", "s",
+"--zookeeper", zkQuorum});
+assertEquals(0, exitCode);
+
+ResultSet rs = stmt.executeQuery("SELECT id, name, t FROM s.table9 
WHERE T < to_date" +
+"('1972-01-01') AND T > to_date('1970-01-01') ORDER BY id");
+assertTrue(rs.next());
+assertEquals(2, rs.getInt(1));
+assertEquals("Name 2", rs.getString(2));
+assertEquals(DateUtil.parseDate("1971-01-01"), rs.getDate(3));
+assertFalse(rs.next());
+
+rs.close();
+stmt.close();
+}
+
 
 @Test
 public void testImportWithTabs() throws Exception {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/41c05215/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
index 1dae981..360859e 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
@@ -314,6 +314,7 @@ public abstract class FormatToBytesWritableMapper extends Mapper

http://git-wip-us.apache.org/repos/asf/phoenix/blob/41c05215/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
index 07cf285..72af1a7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
@@ -144,6 +144,7 @@ public class FormatToKeyValueReducer
 DataInputStream input = new DataInputStream(new ByteArrayInputStream(aggregatedArray.get()));
 while (input.available() != 0) {
 byte type = input.readByte();

[1/4] phoenix git commit: PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP

2017-09-05 Thread ssa
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 d81ad6a5f -> 7513d663f
  refs/heads/4.x-HBase-1.1 6fcf5bb2e -> cb1201620
  refs/heads/4.x-HBase-1.2 3b8468e27 -> 41c05215c
  refs/heads/master c8cbb5e5e -> cec7e1cf8


PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7513d663
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7513d663
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7513d663

Branch: refs/heads/4.x-HBase-0.98
Commit: 7513d663f74ae90c4e1f65066bfd2ffcb326e1e7
Parents: d81ad6a
Author: Sergey Soldatov 
Authored: Tue Oct 25 14:09:54 2016 -0700
Committer: Sergey Soldatov 
Committed: Tue Sep 5 12:45:23 2017 -0700

--
 .../phoenix/end2end/CsvBulkLoadToolIT.java  | 38 
 .../mapreduce/FormatToBytesWritableMapper.java  |  1 +
 .../mapreduce/FormatToKeyValueReducer.java  |  7 ++--
 3 files changed, 43 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7513d663/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 5a186a0..40fe900 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -92,6 +92,44 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 rs.close();
 stmt.close();
 }
+@Test
+public void testImportWithRowTimestamp() throws Exception {
+
+Statement stmt = conn.createStatement();
+stmt.execute("CREATE TABLE S.TABLE9 (ID INTEGER NOT NULL , NAME 
VARCHAR, T DATE NOT NULL," +
+" " +
+"CONSTRAINT PK PRIMARY KEY (ID, T ROW_TIMESTAMP))");
+
+FileSystem fs = FileSystem.get(getUtility().getConfiguration());
+FSDataOutputStream outputStream = fs.create(new 
Path("/tmp/input1.csv"));
+PrintWriter printWriter = new PrintWriter(outputStream);
+printWriter.println("1,Name 1,1970/01/01");
+printWriter.println("2,Name 2,1971/01/01");
+printWriter.println("3,Name 2,1972/01/01");
+printWriter.close();
+
+CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
+csvBulkLoadTool.setConf(new 
Configuration(getUtility().getConfiguration()));
+csvBulkLoadTool.getConf().set(DATE_FORMAT_ATTRIB,"/MM/dd");
+int exitCode = csvBulkLoadTool.run(new String[] {
+"--input", "/tmp/input1.csv",
+"--table", "table9",
+"--schema", "s",
+"--zookeeper", zkQuorum});
+assertEquals(0, exitCode);
+
+ResultSet rs = stmt.executeQuery("SELECT id, name, t FROM s.table9 
WHERE T < to_date" +
+"('1972-01-01') AND T > to_date('1970-01-01') ORDER BY id");
+assertTrue(rs.next());
+assertEquals(2, rs.getInt(1));
+assertEquals("Name 2", rs.getString(2));
+assertEquals(DateUtil.parseDate("1971-01-01"), rs.getDate(3));
+assertFalse(rs.next());
+
+rs.close();
+stmt.close();
+}
+
 
 @Test
 public void testImportWithTabs() throws Exception {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/7513d663/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
index 1dae981..360859e 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
@@ -314,6 +314,7 @@ public abstract class FormatToBytesWritableMapper extends Mapper

http://git-wip-us.apache.org/repos/asf/phoenix/blob/7513d663/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
index 07cf285..72af1a7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
@@ -144,6 +144,7 @@ public class FormatToKeyValueReducer

[2/4] phoenix git commit: PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP

2017-09-05 Thread ssa
PHOENIX-3406 CSV BulkLoad MR job incorrectly handle ROW_TIMESTAMP


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cb120162
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cb120162
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cb120162

Branch: refs/heads/4.x-HBase-1.1
Commit: cb12016206c0b589b9781d5cb06555ab276d7d9a
Parents: 6fcf5bb
Author: Sergey Soldatov 
Authored: Tue Oct 25 14:09:54 2016 -0700
Committer: Sergey Soldatov 
Committed: Tue Sep 5 12:45:46 2017 -0700

--
 .../phoenix/end2end/CsvBulkLoadToolIT.java  | 38 
 .../mapreduce/FormatToBytesWritableMapper.java  |  1 +
 .../mapreduce/FormatToKeyValueReducer.java  |  7 ++--
 3 files changed, 43 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cb120162/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 5a186a0..40fe900 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -92,6 +92,44 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 rs.close();
 stmt.close();
 }
+@Test
+public void testImportWithRowTimestamp() throws Exception {
+
+Statement stmt = conn.createStatement();
+stmt.execute("CREATE TABLE S.TABLE9 (ID INTEGER NOT NULL , NAME 
VARCHAR, T DATE NOT NULL," +
+" " +
+"CONSTRAINT PK PRIMARY KEY (ID, T ROW_TIMESTAMP))");
+
+FileSystem fs = FileSystem.get(getUtility().getConfiguration());
+FSDataOutputStream outputStream = fs.create(new 
Path("/tmp/input1.csv"));
+PrintWriter printWriter = new PrintWriter(outputStream);
+printWriter.println("1,Name 1,1970/01/01");
+printWriter.println("2,Name 2,1971/01/01");
+printWriter.println("3,Name 2,1972/01/01");
+printWriter.close();
+
+CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
+csvBulkLoadTool.setConf(new 
Configuration(getUtility().getConfiguration()));
+csvBulkLoadTool.getConf().set(DATE_FORMAT_ATTRIB,"/MM/dd");
+int exitCode = csvBulkLoadTool.run(new String[] {
+"--input", "/tmp/input1.csv",
+"--table", "table9",
+"--schema", "s",
+"--zookeeper", zkQuorum});
+assertEquals(0, exitCode);
+
+ResultSet rs = stmt.executeQuery("SELECT id, name, t FROM s.table9 
WHERE T < to_date" +
+"('1972-01-01') AND T > to_date('1970-01-01') ORDER BY id");
+assertTrue(rs.next());
+assertEquals(2, rs.getInt(1));
+assertEquals("Name 2", rs.getString(2));
+assertEquals(DateUtil.parseDate("1971-01-01"), rs.getDate(3));
+assertFalse(rs.next());
+
+rs.close();
+stmt.close();
+}
+
 
 @Test
 public void testImportWithTabs() throws Exception {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/cb120162/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
index 1dae981..360859e 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
@@ -314,6 +314,7 @@ public abstract class FormatToBytesWritableMapper extends Mapper

http://git-wip-us.apache.org/repos/asf/phoenix/blob/cb120162/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
index 07cf285..72af1a7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
@@ -144,6 +144,7 @@ public class FormatToKeyValueReducer
 DataInputStream input = new DataInputStream(new ByteArrayInputStream(aggregatedArray.get()));
 while (input.available() != 0) {
 byte type = input.readByte();

Build failed in Jenkins: Phoenix | Master #1769

2017-09-05 Thread Apache Jenkins Server
See 


Changes:

[rajeshbabu] PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is

--
[...truncated 102.08 KB...]
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.193 s - in org.apache.phoenix.end2end.index.SaltedIndexIT
[INFO] Running org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.774 s - in org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 308.07 s - in org.apache.phoenix.end2end.index.DropColumnIT
[INFO] Running org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Running org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.91 s - in org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.588 s - in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.328 s - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.661 s - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 973.247 s - in org.apache.phoenix.end2end.SortMergeJoinIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.814 s - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 s - in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.158 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.468 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.422 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.231 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 420.756 s - in org.apache.phoenix.end2end.index.IndexExpressionIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.122 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.228 s - in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 441.081 s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.118 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 201.237 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 265.727 s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 645.079 s - in org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 700.346 s - in org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,888.214 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   TableSnapshotReadsMapReduceIT.testMapReduceSnapshotWithLimit:117->configureJob:130
[ERROR]   TableSnapshotReadsMapReduceIT.testMapReduceSnapshots:78->configureJob:130
[ERROR]   TableSnapshotReadsMapReduceIT.testMapReduceSnapshotsWithCondition:96->configureJob:130
[ERROR] Errors: 
[ERROR]   MutableQueryIT.<init>:66->BaseQueryIT.<init>:139 » SQLTimeout Operation timed ...
[ERROR]   TenantSpecificViewIndexS

phoenix git commit: PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping(Rajeshbabu)

2017-09-05 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 82a4c0e02 -> d81ad6a5f


PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d81ad6a5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d81ad6a5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d81ad6a5

Branch: refs/heads/4.x-HBase-0.98
Commit: d81ad6a5fbaa68418dc67d24c6003e18b8543f69
Parents: 82a4c0e
Author: Rajeshbabu Chintaguntla 
Authored: Wed Sep 6 00:09:15 2017 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Wed Sep 6 00:09:15 2017 +0530

--
 .../phoenix/end2end/FlappingLocalIndexIT.java   | 79 +++-
 .../phoenix/end2end/index/BaseLocalIndexIT.java |  6 +-
 .../phoenix/end2end/index/LocalIndexIT.java |  3 +-
 .../UngroupedAggregateRegionObserver.java   | 42 ++-
 .../phoenix/iterate/BaseResultIterators.java| 60 ---
 5 files changed, 170 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d81ad6a5/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
index 7509997..e2f3970 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
@@ -21,22 +21,31 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.util.Properties;
+import java.util.concurrent.CountDownLatch;
 
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.end2end.index.BaseLocalIndexIT;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
@@ -297,4 +306,72 @@ public class FlappingLocalIndexIT extends BaseLocalIndexIT {
 indexTable.close();
 }
 
-}
+@Test
+public void testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException() throws Exception {
+testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(false);
+}
+
+@Test
+public void testBuildingLocalCoveredIndexShouldHandleNoSuchColumnFamilyException() throws Exception {
+testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(true);
+}
+
+private void testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(boolean coveredIndex) throws Exception {
+String tableName = schemaName + "." + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String indexTableName = schemaName + "." + indexName;
+TableName physicalTableName = SchemaUtil.getPhysicalTableName(tableName.getBytes(), isNamespaceMapped);
+
+createBaseTable(tableName, null, null, coveredIndex ? "cf" : null);
+Connection conn1 = DriverManager.getConnection(getUrl());
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('b',1,2,4,'z')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('f',1,2,3,'z')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('j',2,4,2,'a')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('q',3,1,1,'c')");
+conn1.commit();
+HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+HTableDescriptor tableDescriptor = admin.getTableDescriptor(physicalTableName);
+tableDescriptor.addCoprocessor(DeleyOpenRegion
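The truncated line above attaches a coprocessor whose name is cut off in the archive, and the new imports in this diff (BaseRegionObserver, CountDownLatch, DoNotRetryIOException) suggest it holds region opening behind a latch so the index build races with the region state. A stdlib-only sketch of that latch-based delay pattern; every name here (LatchDelaySketch, openRegion, OPEN_LATCH) is invented for illustration and is not Phoenix or HBase API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDelaySketch {
    // Latch that the simulated "region open" waits on, mirroring how a
    // delaying observer could stall the open until the test releases it.
    static final CountDownLatch OPEN_LATCH = new CountDownLatch(1);

    static String openRegion() throws InterruptedException {
        // Block until released; bounded so a broken test cannot hang forever.
        if (!OPEN_LATCH.await(5, TimeUnit.SECONDS)) {
            return "timed out";
        }
        return "opened";
    }

    public static void main(String[] args) throws Exception {
        Thread opener = new Thread(() -> {
            try {
                System.out.println(openRegion()); // prints "opened"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        opener.start();
        // ...work that must happen while the region is still "opening"
        // (in the real test, presumably issuing the local-index build)...
        OPEN_LATCH.countDown(); // release the artificially delayed open
        opener.join();
    }
}
```

The countdown from the main thread plays the role the test thread would play once the racing operation has been issued; the bounded await keeps a failure from turning into an indefinite hang.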

phoenix git commit: PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping(Rajeshbabu)

2017-09-05 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 f2c8785b2 -> 6fcf5bb2e


PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6fcf5bb2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6fcf5bb2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6fcf5bb2

Branch: refs/heads/4.x-HBase-1.1
Commit: 6fcf5bb2ed71033ec4490701a1a8ee1488aaf64b
Parents: f2c8785
Author: Rajeshbabu Chintaguntla 
Authored: Tue Sep 5 23:46:49 2017 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Tue Sep 5 23:46:49 2017 +0530

--
 .../phoenix/end2end/FlappingLocalIndexIT.java   | 79 +++-
 .../phoenix/end2end/index/BaseLocalIndexIT.java |  6 +-
 .../phoenix/end2end/index/LocalIndexIT.java |  3 +-
 .../UngroupedAggregateRegionObserver.java   | 42 ++-
 .../phoenix/iterate/BaseResultIterators.java| 50 +
 5 files changed, 159 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6fcf5bb2/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
index 7509997..e2f3970 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
@@ -21,22 +21,31 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.util.Properties;
+import java.util.concurrent.CountDownLatch;
 
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.end2end.index.BaseLocalIndexIT;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
@@ -297,4 +306,72 @@ public class FlappingLocalIndexIT extends BaseLocalIndexIT {
 indexTable.close();
 }
 
-}
+@Test
+public void testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException() throws Exception {
+testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(false);
+}
+
+@Test
+public void testBuildingLocalCoveredIndexShouldHandleNoSuchColumnFamilyException() throws Exception {
+testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(true);
+}
+
+private void testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(boolean coveredIndex) throws Exception {
+String tableName = schemaName + "." + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String indexTableName = schemaName + "." + indexName;
+TableName physicalTableName = SchemaUtil.getPhysicalTableName(tableName.getBytes(), isNamespaceMapped);
+
+createBaseTable(tableName, null, null, coveredIndex ? "cf" : null);
+Connection conn1 = DriverManager.getConnection(getUrl());
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('b',1,2,4,'z')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('f',1,2,3,'z')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('j',2,4,2,'a')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('q',3,1,1,'c')");
+conn1.commit();
+HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+HTableDescriptor tableDescriptor = admin.getTableDescriptor(physicalTableName);
+tableDescriptor.addCoprocessor(DeleyOpenRegionObse

phoenix git commit: PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping(Rajeshbabu)

2017-09-05 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 98d67a7ee -> 3b8468e27


PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3b8468e2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3b8468e2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3b8468e2

Branch: refs/heads/4.x-HBase-1.2
Commit: 3b8468e27157247714731532732522bb285e1763
Parents: 98d67a7
Author: Rajeshbabu Chintaguntla 
Authored: Tue Sep 5 23:41:01 2017 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Tue Sep 5 23:41:01 2017 +0530

--
 .../phoenix/end2end/FlappingLocalIndexIT.java   | 79 +++-
 .../phoenix/end2end/index/BaseLocalIndexIT.java |  6 +-
 .../phoenix/end2end/index/LocalIndexIT.java |  3 +-
 .../UngroupedAggregateRegionObserver.java   | 42 ++-
 .../phoenix/iterate/BaseResultIterators.java| 50 +
 5 files changed, 159 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3b8468e2/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
index 7509997..e2f3970 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
@@ -21,22 +21,31 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.util.Properties;
+import java.util.concurrent.CountDownLatch;
 
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.end2end.index.BaseLocalIndexIT;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
@@ -297,4 +306,72 @@ public class FlappingLocalIndexIT extends BaseLocalIndexIT {
 indexTable.close();
 }
 
-}
+@Test
+public void testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException() throws Exception {
+testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(false);
+}
+
+@Test
+public void testBuildingLocalCoveredIndexShouldHandleNoSuchColumnFamilyException() throws Exception {
+testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(true);
+}
+
+private void testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(boolean coveredIndex) throws Exception {
+String tableName = schemaName + "." + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String indexTableName = schemaName + "." + indexName;
+TableName physicalTableName = SchemaUtil.getPhysicalTableName(tableName.getBytes(), isNamespaceMapped);
+
+createBaseTable(tableName, null, null, coveredIndex ? "cf" : null);
+Connection conn1 = DriverManager.getConnection(getUrl());
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('b',1,2,4,'z')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('f',1,2,3,'z')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('j',2,4,2,'a')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('q',3,1,1,'c')");
+conn1.commit();
+HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+HTableDescriptor tableDescriptor = admin.getTableDescriptor(physicalTableName);
+tableDescriptor.addCoprocessor(DeleyOpenRegionObse

phoenix git commit: PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping(Rajeshbabu)

2017-09-05 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/master 0a3ef6c1b -> c8cbb5e5e


PHOENIX-3496 Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c8cbb5e5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c8cbb5e5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c8cbb5e5

Branch: refs/heads/master
Commit: c8cbb5e5e196299d5cc50385bd5ebb3791170d2f
Parents: 0a3ef6c
Author: Rajeshbabu Chintaguntla 
Authored: Tue Sep 5 23:34:57 2017 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Tue Sep 5 23:34:57 2017 +0530

--
 .../phoenix/end2end/FlappingLocalIndexIT.java   | 79 +++-
 .../phoenix/end2end/index/BaseLocalIndexIT.java |  6 +-
 .../phoenix/end2end/index/LocalIndexIT.java |  3 +-
 .../UngroupedAggregateRegionObserver.java   | 42 ++-
 .../phoenix/iterate/BaseResultIterators.java| 50 +
 5 files changed, 159 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c8cbb5e5/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
index 7509997..e2f3970 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/FlappingLocalIndexIT.java
@@ -21,22 +21,31 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.util.Properties;
+import java.util.concurrent.CountDownLatch;
 
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.end2end.index.BaseLocalIndexIT;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
@@ -297,4 +306,72 @@ public class FlappingLocalIndexIT extends BaseLocalIndexIT {
 indexTable.close();
 }
 
-}
+@Test
+public void testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException() throws Exception {
+testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(false);
+}
+
+@Test
+public void testBuildingLocalCoveredIndexShouldHandleNoSuchColumnFamilyException() throws Exception {
+testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(true);
+}
+
+private void testBuildingLocalIndexShouldHandleNoSuchColumnFamilyException(boolean coveredIndex) throws Exception {
+String tableName = schemaName + "." + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String indexTableName = schemaName + "." + indexName;
+TableName physicalTableName = SchemaUtil.getPhysicalTableName(tableName.getBytes(), isNamespaceMapped);
+
+createBaseTable(tableName, null, null, coveredIndex ? "cf" : null);
+Connection conn1 = DriverManager.getConnection(getUrl());
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('b',1,2,4,'z')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('f',1,2,3,'z')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('j',2,4,2,'a')");
+conn1.createStatement().execute("UPSERT INTO "+tableName+" values('q',3,1,1,'c')");
+conn1.commit();
+HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+HTableDescriptor tableDescriptor = admin.getTableDescriptor(physicalTableName);
+tableDescriptor.addCoprocessor(DeleyOpenRegionObserver.class.get

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #397

2017-09-05 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on qnode3 (ubuntu) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins3926538928913652330.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 128341
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
core id : 6
core id : 7
physical id : 0
MemTotal:   32865152 kB
MemFree:12340424 kB
Filesystem  Size  Used Avail Use% Mounted on
none 16G 0   16G   0% /dev
tmpfs   3.2G  351M  2.8G  11% /run
/dev/nbd046G   36G  7.8G  83% /
tmpfs16G 0   16G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs16G 0   16G   0% /sys/fs/cgroup
/dev/sda1   235G  124G  100G  56% /home
tmpfs   3.2G 0  3.2G   0% /run/user/9997
tmpfs   3.2G 0  3.2G   0% /run/user/999
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.

main:
 [exec] ~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common ~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common
 [exec] ~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common

main:
[mkdir] Created dir: 

 [exec] tar: hadoop-snappy-nativelibs.tar: Cannot open: No such file or 
directory
 [exec] tar: Error is not recoverable: exiting now
 [exec] Result: 2

main:
[mkdir] Created dir: 

 [copy] Copying 20 files to 

[mkdir] Created dir: 

[mkdir] Created dir: 


main:
[mkdir] Created dir: 

 [copy] Copying 17 files to 

[mkdir] Created dir: 


main:
[mkdir] Created dir: 

 [copy] Copying 1 file to 

[mkdir] Created dir: 


HBase pom.xml:

Got HBase version as 0.98.25-SNAPSHOT
Cloning into 'phoenix'...
Switched to a new branch '4.x-HBase-0.98'
Branch 4.x-HBase-0.98 set up to track remote branch 4.x-HBase-0.98 from origin.
ANTLR Parser Generator  Version 3.5.2
Output file  does not exist: must build  PhoenixSQL.g


===
Verify