[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 8a580fd  PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)
8a580fd is described below

commit 8a580fdc6b32f2da1dbeed1aa4c24dfa22bd8d40
Author: Kadir 
AuthorDate: Fri Nov 15 23:00:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)
---
 .../src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 9ecf876..6acdfbc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
@@ -326,7 +326,10 @@ public class GlobalIndexChecker extends BaseRegionObserver {
                 // Delete the unverified row from index if it is old enough
                 deleteRowIfAgedEnough(indexRowKey, row, ts, false);
                 // Open a new scanner starting from the row after the current row
-                indexScan.setStartRow(indexRowKey);
+                byte[] nextIndexRowKey = new byte[indexRowKey.length + 1];
+                System.arraycopy(indexRowKey, 0, nextIndexRowKey, 0, indexRowKey.length);
+                nextIndexRowKey[indexRowKey.length] = 0;
+                indexScan.setStartRow(nextIndexRowKey);
                 scanner = region.getScanner(indexScan);
                 // Skip this unverified row (i.e., do not return it to the client). Just retuning empty row is
                 // sufficient to do that
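The addendum matters because Scan.setStartRow() in HBase 1.x treats the start row as inclusive: restarting the scan at the unverified row's own key would return the row that was just deleted or repaired. Appending a single zero byte produces the smallest key that sorts strictly after the current one in HBase's unsigned lexicographic row order, so the reopened scanner resumes at the next index row. A minimal standalone sketch of that successor-key construction (the helper name and class are illustrative, not from the commit):

import java.util.Arrays;

// Illustrative helper: compute the immediate successor of a row key in the
// unsigned lexicographic byte order that HBase uses for rows.
public final class RowKeyUtil {
    private RowKeyUtil() {
    }

    public static byte[] nextRowKey(byte[] rowKey) {
        // Arrays.copyOf zero-fills the extra byte, so next[rowKey.length] == 0,
        // which is exactly the 0x00 suffix the commit appends by hand.
        return Arrays.copyOf(rowKey, rowKey.length + 1);
    }
}

With such a helper the repaired scan would be reopened as indexScan.setStartRow(nextRowKey(indexRowKey)), which is equivalent to the four lines added in the diff above.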



[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 42ee41b  PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)
42ee41b is described below

commit 42ee41b3d77dc1a6d49cb93e3c5ef0ebe72dba02
Author: Kadir 
AuthorDate: Fri Nov 15 23:00:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)
---
 .../src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 9ecf876..6acdfbc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
@@ -326,7 +326,10 @@ public class GlobalIndexChecker extends BaseRegionObserver {
                 // Delete the unverified row from index if it is old enough
                 deleteRowIfAgedEnough(indexRowKey, row, ts, false);
                 // Open a new scanner starting from the row after the current row
-                indexScan.setStartRow(indexRowKey);
+                byte[] nextIndexRowKey = new byte[indexRowKey.length + 1];
+                System.arraycopy(indexRowKey, 0, nextIndexRowKey, 0, indexRowKey.length);
+                nextIndexRowKey[indexRowKey.length] = 0;
+                indexScan.setStartRow(nextIndexRowKey);
                 scanner = region.getScanner(indexScan);
                 // Skip this unverified row (i.e., do not return it to the client). Just retuning empty row is
                 // sufficient to do that



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new a9491b4  PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)
a9491b4 is described below

commit a9491b4339ceef1c09922c752147fd97068039cd
Author: Kadir 
AuthorDate: Fri Nov 15 23:00:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness (addendum)
---
 .../src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 9ecf876..6acdfbc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
@@ -326,7 +326,10 @@ public class GlobalIndexChecker extends BaseRegionObserver {
                 // Delete the unverified row from index if it is old enough
                 deleteRowIfAgedEnough(indexRowKey, row, ts, false);
                 // Open a new scanner starting from the row after the current row
-                indexScan.setStartRow(indexRowKey);
+                byte[] nextIndexRowKey = new byte[indexRowKey.length + 1];
+                System.arraycopy(indexRowKey, 0, nextIndexRowKey, 0, indexRowKey.length);
+                nextIndexRowKey[indexRowKey.length] = 0;
+                indexScan.setStartRow(nextIndexRowKey);
                 scanner = region.getScanner(indexScan);
                 // Skip this unverified row (i.e., do not return it to the client). Just retuning empty row is
                 // sufficient to do that



Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-11-15 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[kadir] PHOENIX-5565 Unify index update structures in IndexRegionObserver and



Build times for last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout.



Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-master/2573/

2019-11-15 Thread Apache Jenkins Server
[...truncated 28 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-master/2573/


Affected test class(es):
Set(['as SYSTEM'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

Apache-Phoenix | Master | Build Successful

2019-11-15 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[kadir] PHOENIX-5565 Unify index update structures in IndexRegionObserver and



Build times for last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout.


Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #599

2019-11-15 Thread Apache Jenkins Server
See 


Changes:

[kadir] PHOENIX-5565 Unify index update structures in IndexRegionObserver and


--
[...truncated 104.74 KB...]
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.003 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.286 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.05 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.631 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 s 
<<< FAILURE! - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[ERROR] org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT  
Time elapsed: 0.007 s  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
Caused by: java.io.IOException: Shutting down
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds

[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.666 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
199.573 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
201.252 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
204.763 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.254 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.472 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.685 s 
- in org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 101.698 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.995 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 193.438 
s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 210.281 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 101.858 
s - in org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.781 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Tests run: 3, 

Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-11-15 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[kadir] PHOENIX-5564 Restructure read repair to improve readability and



Build times for last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout.


Jenkins build is back to normal : Phoenix-4.x-HBase-1.4 #312

2019-11-15 Thread Apache Jenkins Server
See 




[phoenix] 01/03: PHOENIX-5556 Avoid repeatedly loading IndexMetaData For IndexRegionObserver

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit da9f90559da5b3a980797aa90856b3f57ccb080e
Author: chenglei 
AuthorDate: Thu Nov 7 10:29:05 2019 +0800

PHOENIX-5556 Avoid repeatedly loading IndexMetaData For IndexRegionObserver
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 39 ++
 .../hbase/index/builder/IndexBuildManager.java |  4 +--
 2 files changed, 28 insertions(+), 15 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index 83a54f6..e8d9a05 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -518,16 +518,15 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
-  private void 
prepareIndexMutations(ObserverContext c,
- MiniBatchOperationInProgress 
miniBatchOp, BatchMutateContext context,
- Collection mutations, 
long now) throws Throwable {
-  IndexMetaData indexMetaData = this.builder.getIndexMetaData(miniBatchOp);
-  if (!(indexMetaData instanceof PhoenixIndexMetaData)) {
-  throw new DoNotRetryIOException(
-  "preBatchMutateWithExceptions: indexMetaData is not an 
instance of PhoenixIndexMetaData " +
-  
c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString());
-  }
-  List maintainers = 
((PhoenixIndexMetaData)indexMetaData).getIndexMaintainers();
+  private void prepareIndexMutations(
+  ObserverContext c,
+  MiniBatchOperationInProgress miniBatchOp,
+  BatchMutateContext context,
+  Collection mutations,
+  long now,
+  PhoenixIndexMetaData indexMetaData) throws Throwable {
+
+  List maintainers = indexMetaData.getIndexMaintainers();
 
   // get the current span, or just use a null-span to avoid a bunch of if 
statements
   try (TraceScope scope = Trace.startSpan("Starting to build index 
updates")) {
@@ -538,7 +537,7 @@ public class IndexRegionObserver extends BaseRegionObserver 
{
 
   // get the index updates for all elements in this batch
   Collection, byte[]>> indexUpdates =
-  this.builder.getIndexUpdates(miniBatchOp, mutations);
+  this.builder.getIndexUpdates(miniBatchOp, mutations, 
indexMetaData);
 
   current.addTimelineAnnotation("Built index updates, doing preStep");
   TracingUtils.addAnnotation(current, "index update count", 
indexUpdates.size());
@@ -607,10 +606,24 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
+  protected PhoenixIndexMetaData getPhoenixIndexMetaData(
+  ObserverContext observerContext,
+  MiniBatchOperationInProgress miniBatchOp) throws 
IOException {
+  IndexMetaData indexMetaData = this.builder.getIndexMetaData(miniBatchOp);
+  if (!(indexMetaData instanceof PhoenixIndexMetaData)) {
+  throw new DoNotRetryIOException(
+  "preBatchMutateWithExceptions: indexMetaData is not an 
instance of "+PhoenixIndexMetaData.class.getName() +
+  ", current table is:" +
+  
observerContext.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString());
+  }
+  return (PhoenixIndexMetaData)indexMetaData;
+  }
+
   public void 
preBatchMutateWithExceptions(ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp) throws Throwable 
{
   ignoreAtomicOperations(miniBatchOp);
-  BatchMutateContext context = new 
BatchMutateContext(this.builder.getIndexMetaData(miniBatchOp).getClientVersion());
+  PhoenixIndexMetaData indexMetaData = getPhoenixIndexMetaData(c, 
miniBatchOp);
+  BatchMutateContext context = new 
BatchMutateContext(indexMetaData.getClientVersion());
   setBatchMutateContext(c, context);
   Mutation firstMutation = miniBatchOp.getOperation(0);
   ReplayWrite replayWrite = this.builder.getReplayWrite(firstMutation);
@@ -636,7 +649,7 @@ public class IndexRegionObserver extends BaseRegionObserver 
{
   }
 
   long start = EnvironmentEdgeManager.currentTimeMillis();
-  prepareIndexMutations(c, miniBatchOp, context, mutations, now);
+  prepareIndexMutations(c, miniBatchOp, context, mutations, now, 
indexMetaData);
   
metricSource.updateIndexPrepareTime(EnvironmentEdgeManager.currentTimeMillis() 
- start);
 
   // Sleep for one millisecond if we have prepared the index updates in 
less than 1 ms. The sleep is necessary to
diff --git 
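A compact sketch of the flow this commit introduces, with the generic type parameters written out; treat the exact signatures as an approximation of the quoted diff rather than authoritative source:

protected PhoenixIndexMetaData getPhoenixIndexMetaData(
        ObserverContext<RegionCoprocessorEnvironment> observerContext,
        MiniBatchOperationInProgress<Mutation> miniBatchOp) throws IOException {
    // Resolve the index metadata once per batch and fail fast if it is not the Phoenix flavor.
    IndexMetaData indexMetaData = this.builder.getIndexMetaData(miniBatchOp);
    if (!(indexMetaData instanceof PhoenixIndexMetaData)) {
        throw new DoNotRetryIOException(
                "preBatchMutateWithExceptions: indexMetaData is not an instance of "
                        + PhoenixIndexMetaData.class.getName() + ", current table is:"
                        + observerContext.getEnvironment().getRegion().getRegionInfo()
                                .getTable().getNameAsString());
    }
    return (PhoenixIndexMetaData) indexMetaData;
}

// preBatchMutateWithExceptions now loads the metadata a single time and threads the
// typed value through, instead of re-reading it inside prepareIndexMutations:
//     PhoenixIndexMetaData indexMetaData = getPhoenixIndexMetaData(c, miniBatchOp);
//     BatchMutateContext context = new BatchMutateContext(indexMetaData.getClientVersion());
//     ...
//     prepareIndexMutations(c, miniBatchOp, context, mutations, now, indexMetaData);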

[phoenix] 02/03: PHOENIX-5562 Simplify detection of concurrent updates on data tables with indexes

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit da68fc893d18872d16b4f30ac97dd6a642eac1fc
Author: Kadir 
AuthorDate: Wed Nov 6 22:04:20 2019 -0800

PHOENIX-5562 Simplify detection of concurrent updates on data tables with indexes
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 51 --
 1 file changed, 17 insertions(+), 34 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index e8d9a05..b058b33 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -104,19 +104,12 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
* Class to represent pending data table rows
*/
   private static class PendingRow {
-  private long latestTimestamp;
-  private long count;
+  private boolean concurrent = false;
+  private long count = 1;
 
-  PendingRow(long latestTimestamp) {
-  count = 1;
-  this.latestTimestamp = latestTimestamp;
-  }
-
-  public void add(long timestamp) {
+  public void add() {
   count++;
-  if (latestTimestamp < timestamp) {
-  latestTimestamp = timestamp;
-  }
+  concurrent = true;
   }
 
   public void remove() {
@@ -127,8 +120,8 @@ public class IndexRegionObserver extends BaseRegionObserver 
{
   return count;
   }
 
-  public long getLatestTimestamp() {
-  return latestTimestamp;
+  public boolean isConcurrent() {
+  return concurrent;
   }
   }
 
@@ -159,10 +152,6 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   // The collection of candidate index mutations that will be applied 
after the data table mutations
   private Collection, byte[]>> 
intermediatePostIndexUpdates;
   private List rowLocks = 
Lists.newArrayListWithExpectedSize(QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);
-  // The set of row keys for the data table rows of this batch such that 
for each of these rows there exists another
-  // batch with a timestamp earlier than the timestamp of this batch and 
the earlier batch has a mutation on the
-  // row (i.e., concurrent updates).
-  private HashSet pendingRows = new HashSet<>();
   private HashSet rowsToLock = new HashSet<>();
   long dataWriteStartTime;
 
@@ -401,16 +390,15 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
-  private void populatePendingRows(BatchMutateContext context, long now) {
+  private void populatePendingRows(BatchMutateContext context) {
   for (RowLock rowLock : context.rowLocks) {
   ImmutableBytesPtr rowKey = rowLock.getRowKey();
   PendingRow pendingRow = pendingRows.get(rowKey);
   if (pendingRow == null) {
-  pendingRows.put(rowKey, new PendingRow(now));
+  pendingRows.put(rowKey, new PendingRow());
   } else {
   // m is a mutation on a row that has already a pending mutation 
in progress from another batch
-  pendingRow.add(now);
-  context.pendingRows.add(rowKey);
+  pendingRow.add();
   }
   }
   }
@@ -579,17 +567,12 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   Put unverifiedPut = new Put(m.getRow());
   unverifiedPut.addColumn(emptyCF, emptyCQ, now - 1, 
UNVERIFIED_BYTES);
   context.preIndexUpdates.add(new Pair (unverifiedPut, next.getFirst().getSecond()));
-  // Ignore post index updates (i.e., the third write 
phase updates) for this row if it is
-  // going through concurrent updates
-  ImmutableBytesPtr rowKey = new 
ImmutableBytesPtr(next.getSecond());
-  if (!context.pendingRows.contains(rowKey)) {
-  if (m instanceof Put) {
-  // Remove the empty column prepared by Index 
codec as we need to change its value
-  removeEmptyColumn(m, emptyCF, emptyCQ);
-  ((Put) m).addColumn(emptyCF, emptyCQ, now, 
VERIFIED_BYTES);
-  }
-  context.intermediatePostIndexUpdates.add(next);
+  if (m instanceof Put) {
+  // Remove the empty column prepared by Index codec 
as we need to change its value
+  removeEmptyColumn(m, emptyCF, emptyCQ);
+  ((Put) m).addColumn(emptyCF, emptyCQ, now, 
VERIFIED_BYTES);
   }
+
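The simplification in this commit replaces the per-row timestamp bookkeeping with a reference count plus a boolean: a row is treated as concurrent as soon as a second in-flight batch touches it while the first batch still holds a pending entry. A self-contained sketch of that structure, with the map key type simplified to String (not the exact Phoenix classes):

import java.util.concurrent.ConcurrentHashMap;

final class PendingRow {
    private long count = 1;              // in-flight batches currently touching this row
    private boolean concurrent = false;  // set once a second batch arrives before the first finishes

    void add() { count++; concurrent = true; }
    void remove() { count--; }
    long getCount() { return count; }
    boolean isConcurrent() { return concurrent; }
}

final class PendingRows {
    private final ConcurrentHashMap<String, PendingRow> pendingRows = new ConcurrentHashMap<>();

    // Called under the row lock for each row of a batch.
    void populate(String rowKey) {
        PendingRow pendingRow = pendingRows.get(rowKey);
        if (pendingRow == null) {
            pendingRows.put(rowKey, new PendingRow());
        } else {
            pendingRow.add();   // another batch already has a pending mutation on this row
        }
    }
}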

[phoenix] 03/03: PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 5688c5fabde4c6622512d2439d8e075bb16aae25
Author: Kadir 
AuthorDate: Thu Nov 7 15:50:40 2019 -0800

PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 155 +++--
 .../hbase/index/builder/IndexBuildManager.java |  15 +-
 2 files changed, 84 insertions(+), 86 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index b058b33..340832f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -32,6 +32,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ListMultimap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -67,6 +69,7 @@ import org.apache.phoenix.hbase.index.builder.IndexBuilder;
 import org.apache.phoenix.hbase.index.covered.IndexMetaData;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSource;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory;
+import org.apache.phoenix.hbase.index.table.HTableInterfaceReference;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.IndexManagementUtil;
 import org.apache.phoenix.hbase.index.write.IndexWriter;
@@ -145,16 +148,16 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   private final int clientVersion;
   // The collection of index mutations that will be applied before the 
data table mutations. The empty column (i.e.,
   // the verified column) will have the value false ("unverified") on 
these mutations
-  private Collection> preIndexUpdates = 
Collections.emptyList();
+  private ListMultimap preIndexUpdates;
   // The collection of index mutations that will be applied after the data 
table mutations. The empty column (i.e.,
   // the verified column) will have the value true ("verified") on the put 
mutations
-  private Collection> postIndexUpdates = 
Collections.emptyList();
+  private ListMultimap 
postIndexUpdates;
   // The collection of candidate index mutations that will be applied 
after the data table mutations
-  private Collection, byte[]>> 
intermediatePostIndexUpdates;
+  private ListMultimap> 
intermediatePostIndexUpdates;
   private List rowLocks = 
Lists.newArrayListWithExpectedSize(QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);
   private HashSet rowsToLock = new HashSet<>();
-  long dataWriteStartTime;
-
+  private long dataWriteStartTime;
+  private boolean rebuild;
   private BatchMutateContext(int clientVersion) {
   this.clientVersion = clientVersion;
   }
@@ -506,6 +509,27 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
+  private void 
handleLocalIndexUpdates(ObserverContext c,
+   MiniBatchOperationInProgress 
miniBatchOp,
+   ListMultimap> indexUpdates) {
+  byte[] tableName = 
c.getEnvironment().getRegion().getTableDesc().getTableName().getName();
+  HTableInterfaceReference hTableInterfaceReference =
+  new HTableInterfaceReference(new 
ImmutableBytesPtr(tableName));
+  List> localIndexUpdates = 
indexUpdates.removeAll(hTableInterfaceReference);
+  if (localIndexUpdates == null || localIndexUpdates.isEmpty()) {
+  return;
+  }
+  List localUpdates = new ArrayList();
+  Iterator> indexUpdatesItr = 
localIndexUpdates.iterator();
+  while (indexUpdatesItr.hasNext()) {
+  Pair next = indexUpdatesItr.next();
+  localUpdates.add(next.getFirst());
+  }
+  if (!localUpdates.isEmpty()) {
+  miniBatchOp.addOperationsFromCP(0, localUpdates.toArray(new 
Mutation[localUpdates.size()]));
+  }
+  }
+
   private void prepareIndexMutations(
   ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp,
@@ -513,79 +537,56 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   Collection mutations,
   long now,
   PhoenixIndexMetaData indexMetaData) throws Throwable {
-
   List maintainers = indexMetaData.getIndexMaintainers();
-
   // get the current span, or just use a null-span to avoid a bunch of if 
statements
   try (TraceScope scope = Trace.startSpan("Starting to build index 
updates")) {
   

[phoenix] branch 4.14-HBase-1.4 updated (22e0238 -> 5688c5f)

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a change to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 22e0238  PHOENIX-5564 Restructure read repair to improve readability and correctness
 new da9f905  PHOENIX-5556 Avoid repeatedly loading IndexMetaData For IndexRegionObserver
 new da68fc8  PHOENIX-5562 Simplify detection of concurrent updates on data tables with indexes
 new 5688c5f  PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../phoenix/hbase/index/IndexRegionObserver.java   | 229 ++---
 .../hbase/index/builder/IndexBuildManager.java |  19 +-
 2 files changed, 121 insertions(+), 127 deletions(-)



[phoenix] 02/02: PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 2e7ac9944df652d59f21fa26c11c4d88789cf1c5
Author: Kadir 
AuthorDate: Thu Nov 7 15:50:40 2019 -0800

PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 155 +++--
 .../hbase/index/builder/IndexBuildManager.java |  15 +-
 2 files changed, 84 insertions(+), 86 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index b058b33..340832f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -32,6 +32,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ListMultimap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -67,6 +69,7 @@ import org.apache.phoenix.hbase.index.builder.IndexBuilder;
 import org.apache.phoenix.hbase.index.covered.IndexMetaData;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSource;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory;
+import org.apache.phoenix.hbase.index.table.HTableInterfaceReference;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.IndexManagementUtil;
 import org.apache.phoenix.hbase.index.write.IndexWriter;
@@ -145,16 +148,16 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   private final int clientVersion;
   // The collection of index mutations that will be applied before the 
data table mutations. The empty column (i.e.,
   // the verified column) will have the value false ("unverified") on 
these mutations
-  private Collection> preIndexUpdates = 
Collections.emptyList();
+  private ListMultimap preIndexUpdates;
   // The collection of index mutations that will be applied after the data 
table mutations. The empty column (i.e.,
   // the verified column) will have the value true ("verified") on the put 
mutations
-  private Collection> postIndexUpdates = 
Collections.emptyList();
+  private ListMultimap 
postIndexUpdates;
   // The collection of candidate index mutations that will be applied 
after the data table mutations
-  private Collection, byte[]>> 
intermediatePostIndexUpdates;
+  private ListMultimap> 
intermediatePostIndexUpdates;
   private List rowLocks = 
Lists.newArrayListWithExpectedSize(QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);
   private HashSet rowsToLock = new HashSet<>();
-  long dataWriteStartTime;
-
+  private long dataWriteStartTime;
+  private boolean rebuild;
   private BatchMutateContext(int clientVersion) {
   this.clientVersion = clientVersion;
   }
@@ -506,6 +509,27 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
+  private void 
handleLocalIndexUpdates(ObserverContext c,
+   MiniBatchOperationInProgress 
miniBatchOp,
+   ListMultimap> indexUpdates) {
+  byte[] tableName = 
c.getEnvironment().getRegion().getTableDesc().getTableName().getName();
+  HTableInterfaceReference hTableInterfaceReference =
+  new HTableInterfaceReference(new 
ImmutableBytesPtr(tableName));
+  List> localIndexUpdates = 
indexUpdates.removeAll(hTableInterfaceReference);
+  if (localIndexUpdates == null || localIndexUpdates.isEmpty()) {
+  return;
+  }
+  List localUpdates = new ArrayList();
+  Iterator> indexUpdatesItr = 
localIndexUpdates.iterator();
+  while (indexUpdatesItr.hasNext()) {
+  Pair next = indexUpdatesItr.next();
+  localUpdates.add(next.getFirst());
+  }
+  if (!localUpdates.isEmpty()) {
+  miniBatchOp.addOperationsFromCP(0, localUpdates.toArray(new 
Mutation[localUpdates.size()]));
+  }
+  }
+
   private void prepareIndexMutations(
   ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp,
@@ -513,79 +537,56 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   Collection mutations,
   long now,
   PhoenixIndexMetaData indexMetaData) throws Throwable {
-
   List maintainers = indexMetaData.getIndexMaintainers();
-
   // get the current span, or just use a null-span to avoid a bunch of if 
statements
   try (TraceScope scope = Trace.startSpan("Starting to build index 
updates")) {
   

[phoenix] 01/02: PHOENIX-5556 Avoid repeatedly loading IndexMetaData For IndexRegionObserver

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 62382d7d085ba976a44facd0a98da02172e04d5c
Author: chenglei 
AuthorDate: Thu Nov 7 10:29:05 2019 +0800

PHOENIX-5556 Avoid repeatedly loading IndexMetaData For IndexRegionObserver
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 39 ++
 .../hbase/index/builder/IndexBuildManager.java |  4 +--
 2 files changed, 28 insertions(+), 15 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index 27eb647..b058b33 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -506,16 +506,15 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
-  private void 
prepareIndexMutations(ObserverContext c,
- MiniBatchOperationInProgress 
miniBatchOp, BatchMutateContext context,
- Collection mutations, 
long now) throws Throwable {
-  IndexMetaData indexMetaData = this.builder.getIndexMetaData(miniBatchOp);
-  if (!(indexMetaData instanceof PhoenixIndexMetaData)) {
-  throw new DoNotRetryIOException(
-  "preBatchMutateWithExceptions: indexMetaData is not an 
instance of PhoenixIndexMetaData " +
-  
c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString());
-  }
-  List maintainers = 
((PhoenixIndexMetaData)indexMetaData).getIndexMaintainers();
+  private void prepareIndexMutations(
+  ObserverContext c,
+  MiniBatchOperationInProgress miniBatchOp,
+  BatchMutateContext context,
+  Collection mutations,
+  long now,
+  PhoenixIndexMetaData indexMetaData) throws Throwable {
+
+  List maintainers = indexMetaData.getIndexMaintainers();
 
   // get the current span, or just use a null-span to avoid a bunch of if 
statements
   try (TraceScope scope = Trace.startSpan("Starting to build index 
updates")) {
@@ -526,7 +525,7 @@ public class IndexRegionObserver extends BaseRegionObserver 
{
 
   // get the index updates for all elements in this batch
   Collection, byte[]>> indexUpdates =
-  this.builder.getIndexUpdates(miniBatchOp, mutations);
+  this.builder.getIndexUpdates(miniBatchOp, mutations, 
indexMetaData);
 
   current.addTimelineAnnotation("Built index updates, doing preStep");
   TracingUtils.addAnnotation(current, "index update count", 
indexUpdates.size());
@@ -590,10 +589,24 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
+  protected PhoenixIndexMetaData getPhoenixIndexMetaData(
+  ObserverContext observerContext,
+  MiniBatchOperationInProgress miniBatchOp) throws 
IOException {
+  IndexMetaData indexMetaData = this.builder.getIndexMetaData(miniBatchOp);
+  if (!(indexMetaData instanceof PhoenixIndexMetaData)) {
+  throw new DoNotRetryIOException(
+  "preBatchMutateWithExceptions: indexMetaData is not an 
instance of "+PhoenixIndexMetaData.class.getName() +
+  ", current table is:" +
+  
observerContext.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString());
+  }
+  return (PhoenixIndexMetaData)indexMetaData;
+  }
+
   public void 
preBatchMutateWithExceptions(ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp) throws Throwable 
{
   ignoreAtomicOperations(miniBatchOp);
-  BatchMutateContext context = new 
BatchMutateContext(this.builder.getIndexMetaData(miniBatchOp).getClientVersion());
+  PhoenixIndexMetaData indexMetaData = getPhoenixIndexMetaData(c, 
miniBatchOp);
+  BatchMutateContext context = new 
BatchMutateContext(indexMetaData.getClientVersion());
   setBatchMutateContext(c, context);
   Mutation firstMutation = miniBatchOp.getOperation(0);
   ReplayWrite replayWrite = this.builder.getReplayWrite(firstMutation);
@@ -619,7 +632,7 @@ public class IndexRegionObserver extends BaseRegionObserver 
{
   }
 
   long start = EnvironmentEdgeManager.currentTimeMillis();
-  prepareIndexMutations(c, miniBatchOp, context, mutations, now);
+  prepareIndexMutations(c, miniBatchOp, context, mutations, now, 
indexMetaData);
   
metricSource.updateIndexPrepareTime(EnvironmentEdgeManager.currentTimeMillis() 
- start);
 
   // Sleep for one millisecond if we have prepared the index updates in 
less than 1 ms. The sleep is necessary to
diff --git 

[phoenix] branch 4.14-HBase-1.3 updated (ce28cb0 -> 2e7ac99)

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a change to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from ce28cb0  PHOENIX-5564 Restructure read repair to improve readability and correctness
 new 62382d7  PHOENIX-5556 Avoid repeatedly loading IndexMetaData For IndexRegionObserver
 new 2e7ac99  PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../phoenix/hbase/index/IndexRegionObserver.java   | 186 +++--
 .../hbase/index/builder/IndexBuildManager.java |  19 +--
 2 files changed, 108 insertions(+), 97 deletions(-)



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 241a328  PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter
241a328 is described below

commit 241a3284cf9128a08d0db49d07c2cfecceeada06
Author: Kadir 
AuthorDate: Thu Nov 7 15:50:40 2019 -0800

PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 155 +++--
 .../hbase/index/builder/IndexBuildManager.java |  15 +-
 2 files changed, 84 insertions(+), 86 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index b058b33..340832f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -32,6 +32,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ListMultimap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -67,6 +69,7 @@ import org.apache.phoenix.hbase.index.builder.IndexBuilder;
 import org.apache.phoenix.hbase.index.covered.IndexMetaData;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSource;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory;
+import org.apache.phoenix.hbase.index.table.HTableInterfaceReference;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.IndexManagementUtil;
 import org.apache.phoenix.hbase.index.write.IndexWriter;
@@ -145,16 +148,16 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   private final int clientVersion;
   // The collection of index mutations that will be applied before the 
data table mutations. The empty column (i.e.,
   // the verified column) will have the value false ("unverified") on 
these mutations
-  private Collection> preIndexUpdates = 
Collections.emptyList();
+  private ListMultimap preIndexUpdates;
   // The collection of index mutations that will be applied after the data 
table mutations. The empty column (i.e.,
   // the verified column) will have the value true ("verified") on the put 
mutations
-  private Collection> postIndexUpdates = 
Collections.emptyList();
+  private ListMultimap 
postIndexUpdates;
   // The collection of candidate index mutations that will be applied 
after the data table mutations
-  private Collection, byte[]>> 
intermediatePostIndexUpdates;
+  private ListMultimap> 
intermediatePostIndexUpdates;
   private List rowLocks = 
Lists.newArrayListWithExpectedSize(QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);
   private HashSet rowsToLock = new HashSet<>();
-  long dataWriteStartTime;
-
+  private long dataWriteStartTime;
+  private boolean rebuild;
   private BatchMutateContext(int clientVersion) {
   this.clientVersion = clientVersion;
   }
@@ -506,6 +509,27 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
+  private void 
handleLocalIndexUpdates(ObserverContext c,
+   MiniBatchOperationInProgress 
miniBatchOp,
+   ListMultimap> indexUpdates) {
+  byte[] tableName = 
c.getEnvironment().getRegion().getTableDesc().getTableName().getName();
+  HTableInterfaceReference hTableInterfaceReference =
+  new HTableInterfaceReference(new 
ImmutableBytesPtr(tableName));
+  List> localIndexUpdates = 
indexUpdates.removeAll(hTableInterfaceReference);
+  if (localIndexUpdates == null || localIndexUpdates.isEmpty()) {
+  return;
+  }
+  List localUpdates = new ArrayList();
+  Iterator> indexUpdatesItr = 
localIndexUpdates.iterator();
+  while (indexUpdatesItr.hasNext()) {
+  Pair next = indexUpdatesItr.next();
+  localUpdates.add(next.getFirst());
+  }
+  if (!localUpdates.isEmpty()) {
+  miniBatchOp.addOperationsFromCP(0, localUpdates.toArray(new 
Mutation[localUpdates.size()]));
+  }
+  }
+
   private void prepareIndexMutations(
   ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp,
@@ -513,79 +537,56 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   Collection mutations,
   long now,
   PhoenixIndexMetaData indexMetaData) throws Throwable {
-
   List maintainers = 

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new f95b193  PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter
f95b193 is described below

commit f95b1937af1a525c2bf49e0a50bf73fce09a5314
Author: Kadir 
AuthorDate: Thu Nov 7 15:50:40 2019 -0800

PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 155 +++--
 .../hbase/index/builder/IndexBuildManager.java |  15 +-
 2 files changed, 84 insertions(+), 86 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index b058b33..340832f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -32,6 +32,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ListMultimap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -67,6 +69,7 @@ import org.apache.phoenix.hbase.index.builder.IndexBuilder;
 import org.apache.phoenix.hbase.index.covered.IndexMetaData;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSource;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory;
+import org.apache.phoenix.hbase.index.table.HTableInterfaceReference;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.IndexManagementUtil;
 import org.apache.phoenix.hbase.index.write.IndexWriter;
@@ -145,16 +148,16 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   private final int clientVersion;
   // The collection of index mutations that will be applied before the 
data table mutations. The empty column (i.e.,
   // the verified column) will have the value false ("unverified") on 
these mutations
-  private Collection> preIndexUpdates = 
Collections.emptyList();
+  private ListMultimap preIndexUpdates;
   // The collection of index mutations that will be applied after the data 
table mutations. The empty column (i.e.,
   // the verified column) will have the value true ("verified") on the put 
mutations
-  private Collection> postIndexUpdates = 
Collections.emptyList();
+  private ListMultimap 
postIndexUpdates;
   // The collection of candidate index mutations that will be applied 
after the data table mutations
-  private Collection, byte[]>> 
intermediatePostIndexUpdates;
+  private ListMultimap> 
intermediatePostIndexUpdates;
   private List rowLocks = 
Lists.newArrayListWithExpectedSize(QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);
   private HashSet rowsToLock = new HashSet<>();
-  long dataWriteStartTime;
-
+  private long dataWriteStartTime;
+  private boolean rebuild;
   private BatchMutateContext(int clientVersion) {
   this.clientVersion = clientVersion;
   }
@@ -506,6 +509,27 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
+  private void 
handleLocalIndexUpdates(ObserverContext c,
+   MiniBatchOperationInProgress 
miniBatchOp,
+   ListMultimap> indexUpdates) {
+  byte[] tableName = 
c.getEnvironment().getRegion().getTableDesc().getTableName().getName();
+  HTableInterfaceReference hTableInterfaceReference =
+  new HTableInterfaceReference(new 
ImmutableBytesPtr(tableName));
+  List> localIndexUpdates = 
indexUpdates.removeAll(hTableInterfaceReference);
+  if (localIndexUpdates == null || localIndexUpdates.isEmpty()) {
+  return;
+  }
+  List localUpdates = new ArrayList();
+  Iterator> indexUpdatesItr = 
localIndexUpdates.iterator();
+  while (indexUpdatesItr.hasNext()) {
+  Pair next = indexUpdatesItr.next();
+  localUpdates.add(next.getFirst());
+  }
+  if (!localUpdates.isEmpty()) {
+  miniBatchOp.addOperationsFromCP(0, localUpdates.toArray(new 
Mutation[localUpdates.size()]));
+  }
+  }
+
   private void prepareIndexMutations(
   ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp,
@@ -513,79 +537,56 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   Collection mutations,
   long now,
   PhoenixIndexMetaData indexMetaData) throws Throwable {
-
   List maintainers = 

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 689ac95  PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter
689ac95 is described below

commit 689ac9577f564fb02f2d6e886ff6f5e226d2f81e
Author: Kadir 
AuthorDate: Thu Nov 7 15:50:40 2019 -0800

PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 155 +++--
 .../hbase/index/builder/IndexBuildManager.java |  15 +-
 2 files changed, 84 insertions(+), 86 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index b058b33..340832f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -32,6 +32,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ListMultimap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -67,6 +69,7 @@ import org.apache.phoenix.hbase.index.builder.IndexBuilder;
 import org.apache.phoenix.hbase.index.covered.IndexMetaData;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSource;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory;
+import org.apache.phoenix.hbase.index.table.HTableInterfaceReference;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.IndexManagementUtil;
 import org.apache.phoenix.hbase.index.write.IndexWriter;
@@ -145,16 +148,16 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   private final int clientVersion;
   // The collection of index mutations that will be applied before the 
data table mutations. The empty column (i.e.,
   // the verified column) will have the value false ("unverified") on 
these mutations
-  private Collection> preIndexUpdates = 
Collections.emptyList();
+  private ListMultimap preIndexUpdates;
   // The collection of index mutations that will be applied after the data 
table mutations. The empty column (i.e.,
   // the verified column) will have the value true ("verified") on the put 
mutations
-  private Collection> postIndexUpdates = 
Collections.emptyList();
+  private ListMultimap 
postIndexUpdates;
   // The collection of candidate index mutations that will be applied 
after the data table mutations
-  private Collection, byte[]>> 
intermediatePostIndexUpdates;
+  private ListMultimap> 
intermediatePostIndexUpdates;
   private List rowLocks = 
Lists.newArrayListWithExpectedSize(QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);
   private HashSet rowsToLock = new HashSet<>();
-  long dataWriteStartTime;
-
+  private long dataWriteStartTime;
+  private boolean rebuild;
   private BatchMutateContext(int clientVersion) {
   this.clientVersion = clientVersion;
   }
@@ -506,6 +509,27 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   }
   }
 
+  private void 
handleLocalIndexUpdates(ObserverContext c,
+   MiniBatchOperationInProgress 
miniBatchOp,
+   ListMultimap> indexUpdates) {
+  byte[] tableName = 
c.getEnvironment().getRegion().getTableDesc().getTableName().getName();
+  HTableInterfaceReference hTableInterfaceReference =
+  new HTableInterfaceReference(new 
ImmutableBytesPtr(tableName));
+  List> localIndexUpdates = 
indexUpdates.removeAll(hTableInterfaceReference);
+  if (localIndexUpdates == null || localIndexUpdates.isEmpty()) {
+  return;
+  }
+  List localUpdates = new ArrayList();
+  Iterator> indexUpdatesItr = 
localIndexUpdates.iterator();
+  while (indexUpdatesItr.hasNext()) {
+  Pair next = indexUpdatesItr.next();
+  localUpdates.add(next.getFirst());
+  }
+  if (!localUpdates.isEmpty()) {
+  miniBatchOp.addOperationsFromCP(0, localUpdates.toArray(new 
Mutation[localUpdates.size()]));
+  }
+  }
+
   private void prepareIndexMutations(
   ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp,
@@ -513,79 +537,56 @@ public class IndexRegionObserver extends 
BaseRegionObserver {
   Collection mutations,
   long now,
   PhoenixIndexMetaData indexMetaData) throws Throwable {
-
   List maintainers = 

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #598

2019-11-15 Thread Apache Jenkins Server
See 


Changes:

[kadir] PHOENIX-5564 Restructure read repair to improve readability and


--
[...truncated 107.12 KB...]
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.2:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.004 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.455 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.707 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.533 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.583 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
184.942 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
183.861 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
193.955 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
195.666 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.506 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 144.802 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.587 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.383 s 
- in org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 189.146 
s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 201.761 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.59 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.446 s 
- in org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.67 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 632.623 
s - in 

[phoenix] branch master updated: PHOENIX-5565 Unify index update structures in IndexRegionObserver and IndexCommitter

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 234660e  PHOENIX-5565 Unify index update structures in 
IndexRegionObserver and IndexCommitter
234660e is described below

commit 234660ea0082815c59704a61dc9c33bb11d64381
Author: Kadir 
AuthorDate: Thu Nov 7 15:50:40 2019 -0800

PHOENIX-5565 Unify index update structures in IndexRegionObserver and 
IndexCommitter
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 155 +++--
 .../hbase/index/builder/IndexBuildManager.java |  15 +-
 2 files changed, 84 insertions(+), 86 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index 1585495..a33a3ee 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -33,6 +33,8 @@ import java.util.Map;
 import java.util.Optional;
 import java.util.concurrent.ConcurrentHashMap;
 
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ListMultimap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -71,6 +73,7 @@ import org.apache.phoenix.hbase.index.builder.IndexBuilder;
 import org.apache.phoenix.hbase.index.covered.IndexMetaData;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSource;
 import org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory;
+import org.apache.phoenix.hbase.index.table.HTableInterfaceReference;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.IndexManagementUtil;
 import org.apache.phoenix.hbase.index.write.IndexWriter;
@@ -149,16 +152,16 @@ public class IndexRegionObserver implements 
RegionObserver, RegionCoprocessor {
   private final int clientVersion;
   // The collection of index mutations that will be applied before the data table mutations. The empty column (i.e.,
   // the verified column) will have the value false ("unverified") on these mutations
-  private Collection<Pair<Mutation, byte[]>> preIndexUpdates = Collections.emptyList();
+  private ListMultimap<HTableInterfaceReference, Mutation> preIndexUpdates;
   // The collection of index mutations that will be applied after the data table mutations. The empty column (i.e.,
   // the verified column) will have the value true ("verified") on the put mutations
-  private Collection<Pair<Mutation, byte[]>> postIndexUpdates = Collections.emptyList();
+  private ListMultimap<HTableInterfaceReference, Mutation> postIndexUpdates;
   // The collection of candidate index mutations that will be applied after the data table mutations
-  private Collection<Pair<Pair<Mutation, byte[]>, byte[]>> intermediatePostIndexUpdates;
+  private ListMultimap<HTableInterfaceReference, Pair<Mutation, byte[]>> intermediatePostIndexUpdates;
   private List<RowLock> rowLocks = Lists.newArrayListWithExpectedSize(QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);
   private HashSet<ImmutableBytesPtr> rowsToLock = new HashSet<>();
-  long dataWriteStartTime;
-
+  private long dataWriteStartTime;
+  private boolean rebuild;
   private BatchMutateContext(int clientVersion) {
   this.clientVersion = clientVersion;
   }
@@ -512,6 +515,27 @@ public class IndexRegionObserver implements 
RegionObserver, RegionCoprocessor {
   }
   }
 
+  private void handleLocalIndexUpdates(ObserverContext<RegionCoprocessorEnvironment> c,
+                                       MiniBatchOperationInProgress<Mutation> miniBatchOp,
+                                       ListMultimap<HTableInterfaceReference, Pair<Mutation, byte[]>> indexUpdates) {
+      byte[] tableName = c.getEnvironment().getRegion().getTableDescriptor().getTableName().getName();
+      HTableInterfaceReference hTableInterfaceReference =
+              new HTableInterfaceReference(new ImmutableBytesPtr(tableName));
+      List<Pair<Mutation, byte[]>> localIndexUpdates = indexUpdates.removeAll(hTableInterfaceReference);
+      if (localIndexUpdates == null || localIndexUpdates.isEmpty()) {
+          return;
+      }
+      List<Mutation> localUpdates = new ArrayList<Mutation>();
+      Iterator<Pair<Mutation, byte[]>> indexUpdatesItr = localIndexUpdates.iterator();
+      while (indexUpdatesItr.hasNext()) {
+          Pair<Mutation, byte[]> next = indexUpdatesItr.next();
+          localUpdates.add(next.getFirst());
+      }
+      if (!localUpdates.isEmpty()) {
+          miniBatchOp.addOperationsFromCP(0, localUpdates.toArray(new Mutation[localUpdates.size()]));
+      }
+  }
+
   private void prepareIndexMutations(
           ObserverContext<RegionCoprocessorEnvironment> c,
           MiniBatchOperationInProgress<Mutation> miniBatchOp,
@@ -519,79 +543,56 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
           Collection<? extends Mutation> mutations,
           long now,
           PhoenixIndexMetaData indexMetaData) throws Throwable {
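The change above replaces the three per-phase collections with ListMultimaps keyed by the target index table, and the new handleLocalIndexUpdates() pulls the local-index entries back out of that map with removeAll() before the remote index writes are issued. The following is a minimal, self-contained sketch of that grouping pattern only, not Phoenix's actual code: it uses plain String table names in place of HTableInterfaceReference, and made-up row and column values.

import java.util.List;

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ListMultimap;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class IndexUpdateGroupingSketch {
    public static void main(String[] args) {
        // One structure holds the pending index updates for every index table.
        ListMultimap<String, Mutation> preIndexUpdates = ArrayListMultimap.create();

        Put put = new Put(Bytes.toBytes("idx-row-1"));
        put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        preIndexUpdates.put("LOCAL_IDX", put);
        preIndexUpdates.put("GLOBAL_IDX", new Put(Bytes.toBytes("idx-row-2")));

        // As in handleLocalIndexUpdates(): removeAll() returns the updates for one
        // table and drops them from the multimap in a single call.
        List<Mutation> localUpdates = preIndexUpdates.removeAll("LOCAL_IDX");
        System.out.println("local updates split off: " + localUpdates.size());

        // Whatever remains is grouped per remote index table.
        for (String table : preIndexUpdates.keySet()) {
            System.out.println(table + " -> " + preIndexUpdates.get(table).size() + " mutation(s)");
        }
    }
}

removeAll() both returning and removing the per-table list is what makes the local-index split a one-liner in the new method.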

Apache-Phoenix | Master | Build Successful

2019-11-15 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[kadir] PHOENIX-5564 Restructure read repair to improve readability and



Build times for last couple of runs. Latest build time is the rightmost | Legend blue: normal, red: test failure, gray: timeout


Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-master/2572/

2019-11-15 Thread Apache Jenkins Server
[...truncated 27 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-master/2572/


Affected test class(es):
Set(['as SYSTEM'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5564 Restructure read repair to improve readability and correctness

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new cf61722  PHOENIX-5564 Restructure read repair to improve readability 
and correctness
cf61722 is described below

commit cf617225defa340771f843651d0575331d229adf
Author: Kadir 
AuthorDate: Sat Nov 9 17:05:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness
---
 .../UngroupedAggregateRegionObserver.java  |  26 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 141 ++---
 2 files changed, 113 insertions(+), 54 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index c2b53a6..23091a8 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -97,6 +97,7 @@ import 
org.apache.phoenix.hbase.index.exception.IndexWriteException;
 import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.index.GlobalIndexChecker;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
@@ -1146,8 +1147,7 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 if (!includedColumns.contains(column)) {
 if (del == null) {
 Cell cell = row.get(0);
-rowKey = new byte[cell.getRowLength()];
-System.arraycopy(cell.getRowArray(), 
cell.getRowOffset(), rowKey, 0, cell.getRowLength());
+rowKey = CellUtil.cloneRow(cell);
 del = new Delete(rowKey);
 }
 del.addColumns(column.getFamily(), column.getQualifier(), 
ts);
@@ -1225,15 +1225,6 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 del.addDeleteMarker(cell);
 }
 }
-if (indexRowKey != null) {
-// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
-// Check if the data table row we have just 
scanned matches with the index row key.
-// If not, there is no need to build the index 
row from this data table row,
-// and just return zero row count.
-if (!checkIndexRow(indexRowKey, put)) {
-break;
-}
-}
 uuidValue = commitIfReady(uuidValue);
 if (!scan.isRaw()) {
 Delete deleteMarkers = 
generateDeleteMarkers(row);
@@ -1243,6 +1234,19 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 uuidValue = commitIfReady(uuidValue);
 }
 }
+if (indexRowKey != null) {
+// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
+// Check if the data table row we have just 
scanned matches with the index row key.
+// If not, there is no need to build the index 
row from this data table row,
+// and just return zero row count.
+if (checkIndexRow(indexRowKey, put)) {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.INDEX_ROW_EXISTS.getValue();
+}
+else {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.NO_INDEX_ROW.getValue();
+}
+break;
+}
 rowCount++;
 }
 
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 1a737ba..48794c1 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
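One small but recurring change in the UngroupedAggregateRegionObserver hunk above is the switch from a hand-rolled row-key copy to CellUtil.cloneRow. The stand-alone snippet below (not from the Phoenix code base; the row, family, qualifier and value bytes are arbitrary) shows that the two forms yield the same bytes.

import java.util.Arrays;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class CloneRowSketch {
    public static void main(String[] args) {
        Cell cell = new KeyValue(Bytes.toBytes("row-1"), Bytes.toBytes("f"),
                Bytes.toBytes("q"), Bytes.toBytes("v"));

        // Old form (the removed lines): copy the row bytes by hand.
        byte[] manualCopy = new byte[cell.getRowLength()];
        System.arraycopy(cell.getRowArray(), cell.getRowOffset(), manualCopy, 0, cell.getRowLength());

        // New form (the added line): let the HBase utility do the same copy.
        byte[] utilityCopy = CellUtil.cloneRow(cell);

        System.out.println("same row key: " + Arrays.equals(manualCopy, utilityCopy)); // prints true
    }
}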

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5564 Restructure read repair to improve readability and correctness

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new cc3cde9  PHOENIX-5564 Restructure read repair to improve readability 
and correctness
cc3cde9 is described below

commit cc3cde98ccb3f66b4fcdfbf291c5c19c4a1718dd
Author: Kadir 
AuthorDate: Sat Nov 9 17:05:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness
---
 .../UngroupedAggregateRegionObserver.java  |  26 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 141 ++---
 2 files changed, 113 insertions(+), 54 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index c2b53a6..23091a8 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -97,6 +97,7 @@ import 
org.apache.phoenix.hbase.index.exception.IndexWriteException;
 import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.index.GlobalIndexChecker;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
@@ -1146,8 +1147,7 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 if (!includedColumns.contains(column)) {
 if (del == null) {
 Cell cell = row.get(0);
-rowKey = new byte[cell.getRowLength()];
-System.arraycopy(cell.getRowArray(), 
cell.getRowOffset(), rowKey, 0, cell.getRowLength());
+rowKey = CellUtil.cloneRow(cell);
 del = new Delete(rowKey);
 }
 del.addColumns(column.getFamily(), column.getQualifier(), 
ts);
@@ -1225,15 +1225,6 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 del.addDeleteMarker(cell);
 }
 }
-if (indexRowKey != null) {
-// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
-// Check if the data table row we have just 
scanned matches with the index row key.
-// If not, there is no need to build the index 
row from this data table row,
-// and just return zero row count.
-if (!checkIndexRow(indexRowKey, put)) {
-break;
-}
-}
 uuidValue = commitIfReady(uuidValue);
 if (!scan.isRaw()) {
 Delete deleteMarkers = 
generateDeleteMarkers(row);
@@ -1243,6 +1234,19 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 uuidValue = commitIfReady(uuidValue);
 }
 }
+if (indexRowKey != null) {
+// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
+// Check if the data table row we have just 
scanned matches with the index row key.
+// If not, there is no need to build the index 
row from this data table row,
+// and just return zero row count.
+if (checkIndexRow(indexRowKey, put)) {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.INDEX_ROW_EXISTS.getValue();
+}
+else {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.NO_INDEX_ROW.getValue();
+}
+break;
+}
 rowCount++;
 }
 
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 1a737ba..48794c1 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5564 Restructure read repair to improve readability and correctness

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 730cdab  PHOENIX-5564 Restructure read repair to improve readability 
and correctness
730cdab is described below

commit 730cdab3e14cce555954d9e83f177dec3ced81d3
Author: Kadir 
AuthorDate: Sat Nov 9 17:05:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness
---
 .../UngroupedAggregateRegionObserver.java  |  26 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 141 ++---
 2 files changed, 113 insertions(+), 54 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 8477625..6bcbff7 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -97,6 +97,7 @@ import 
org.apache.phoenix.hbase.index.exception.IndexWriteException;
 import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.index.GlobalIndexChecker;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
@@ -1140,8 +1141,7 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 if (!includedColumns.contains(column)) {
 if (del == null) {
 Cell cell = row.get(0);
-rowKey = new byte[cell.getRowLength()];
-System.arraycopy(cell.getRowArray(), 
cell.getRowOffset(), rowKey, 0, cell.getRowLength());
+rowKey = CellUtil.cloneRow(cell);
 del = new Delete(rowKey);
 }
 del.addColumns(column.getFamily(), column.getQualifier(), 
ts);
@@ -1219,15 +1219,6 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 del.addDeleteMarker(cell);
 }
 }
-if (indexRowKey != null) {
-// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
-// Check if the data table row we have just 
scanned matches with the index row key.
-// If not, there is no need to build the index 
row from this data table row,
-// and just return zero row count.
-if (!checkIndexRow(indexRowKey, put)) {
-break;
-}
-}
 uuidValue = commitIfReady(uuidValue);
 if (!scan.isRaw()) {
 Delete deleteMarkers = 
generateDeleteMarkers(row);
@@ -1237,6 +1228,19 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 uuidValue = commitIfReady(uuidValue);
 }
 }
+if (indexRowKey != null) {
+// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
+// Check if the data table row we have just 
scanned matches with the index row key.
+// If not, there is no need to build the index 
row from this data table row,
+// and just return zero row count.
+if (checkIndexRow(indexRowKey, put)) {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.INDEX_ROW_EXISTS.getValue();
+}
+else {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.NO_INDEX_ROW.getValue();
+}
+break;
+}
 rowCount++;
 }
 
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 9cd78b3..9ecf876 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5564 Restructure read repair to improve readability and correctness

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 22e0238  PHOENIX-5564 Restructure read repair to improve readability 
and correctness
22e0238 is described below

commit 22e0238e5de2f58499a218d5bea1f14aadfcfa87
Author: Kadir 
AuthorDate: Sat Nov 9 17:05:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness
---
 .../UngroupedAggregateRegionObserver.java  |  26 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 141 ++---
 2 files changed, 113 insertions(+), 54 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index a40c4e5..8e838ef 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -96,6 +96,7 @@ import 
org.apache.phoenix.hbase.index.exception.IndexWriteException;
 import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.index.GlobalIndexChecker;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
@@ -1124,8 +1125,7 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 if (!includedColumns.contains(column)) {
 if (del == null) {
 Cell cell = row.get(0);
-rowKey = new byte[cell.getRowLength()];
-System.arraycopy(cell.getRowArray(), 
cell.getRowOffset(), rowKey, 0, cell.getRowLength());
+rowKey = CellUtil.cloneRow(cell);
 del = new Delete(rowKey);
 }
 del.addColumns(column.getFamily(), column.getQualifier(), 
ts);
@@ -1203,15 +1203,6 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 del.addDeleteMarker(cell);
 }
 }
-if (indexRowKey != null) {
-// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
-// Check if the data table row we have just 
scanned matches with the index row key.
-// If not, there is no need to build the index 
row from this data table row,
-// and just return zero row count.
-if (!checkIndexRow(indexRowKey, put)) {
-break;
-}
-}
 uuidValue = commitIfReady(uuidValue);
 if (!scan.isRaw()) {
 Delete deleteMarkers = 
generateDeleteMarkers(row);
@@ -1221,6 +1212,19 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 uuidValue = commitIfReady(uuidValue);
 }
 }
+if (indexRowKey != null) {
+// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
+// Check if the data table row we have just 
scanned matches with the index row key.
+// If not, there is no need to build the index 
row from this data table row,
+// and just return zero row count.
+if (checkIndexRow(indexRowKey, put)) {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.INDEX_ROW_EXISTS.getValue();
+}
+else {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.NO_INDEX_ROW.getValue();
+}
+break;
+}
 rowCount++;
 }
 
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 9cd78b3..9ecf876 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
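For readers following the read-repair commits: the global-index write protocol they repair is the two-phase scheme described in the IndexRegionObserver comments earlier in this digest, where an index row is first written with an "unverified" marker in the empty column before the data table write, and only flipped to "verified" after the data write succeeds. Rows left unverified are the ones read repair has to rebuild from the data table or skip. The sketch below illustrates the two Puts with plain HBase client calls; the family, qualifier and marker bytes are placeholders, not Phoenix's actual names or values.

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class VerifiedColumnSketch {
    // Placeholder names; the real empty-column family/qualifier and the real
    // verified/unverified byte values are defined inside Phoenix.
    static final byte[] EMPTY_CF = Bytes.toBytes("0");
    static final byte[] EMPTY_CQ = Bytes.toBytes("_EMPTY");
    static final byte[] UNVERIFIED = Bytes.toBytes("0");
    static final byte[] VERIFIED = Bytes.toBytes("1");

    public static void main(String[] args) {
        byte[] indexRowKey = Bytes.toBytes("idx-row-key");

        // Phase 1 (pre-index update): written before the data table mutation.
        Put unverified = new Put(indexRowKey);
        unverified.addColumn(EMPTY_CF, EMPTY_CQ, UNVERIFIED);

        // Phase 2 (post-index update): written only after the data write succeeds.
        Put verified = new Put(indexRowKey);
        verified.addColumn(EMPTY_CF, EMPTY_CQ, VERIFIED);

        System.out.println("pre:  " + unverified);
        System.out.println("post: " + verified);
    }
}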

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5564 Restructure read repair to improve readability and correctness

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new ce28cb0  PHOENIX-5564 Restructure read repair to improve readability 
and correctness
ce28cb0 is described below

commit ce28cb032512645001bd3699e9e1b2928ff16597
Author: Kadir 
AuthorDate: Sat Nov 9 17:05:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness
---
 .../UngroupedAggregateRegionObserver.java  |  26 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 141 ++---
 2 files changed, 113 insertions(+), 54 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index a40c4e5..8e838ef 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -96,6 +96,7 @@ import 
org.apache.phoenix.hbase.index.exception.IndexWriteException;
 import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.index.GlobalIndexChecker;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
@@ -1124,8 +1125,7 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 if (!includedColumns.contains(column)) {
 if (del == null) {
 Cell cell = row.get(0);
-rowKey = new byte[cell.getRowLength()];
-System.arraycopy(cell.getRowArray(), 
cell.getRowOffset(), rowKey, 0, cell.getRowLength());
+rowKey = CellUtil.cloneRow(cell);
 del = new Delete(rowKey);
 }
 del.addColumns(column.getFamily(), column.getQualifier(), 
ts);
@@ -1203,15 +1203,6 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 del.addDeleteMarker(cell);
 }
 }
-if (indexRowKey != null) {
-// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
-// Check if the data table row we have just 
scanned matches with the index row key.
-// If not, there is no need to build the index 
row from this data table row,
-// and just return zero row count.
-if (!checkIndexRow(indexRowKey, put)) {
-break;
-}
-}
 uuidValue = commitIfReady(uuidValue);
 if (!scan.isRaw()) {
 Delete deleteMarkers = 
generateDeleteMarkers(row);
@@ -1221,6 +1212,19 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 uuidValue = commitIfReady(uuidValue);
 }
 }
+if (indexRowKey != null) {
+// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
+// Check if the data table row we have just 
scanned matches with the index row key.
+// If not, there is no need to build the index 
row from this data table row,
+// and just return zero row count.
+if (checkIndexRow(indexRowKey, put)) {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.INDEX_ROW_EXISTS.getValue();
+}
+else {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.NO_INDEX_ROW.getValue();
+}
+break;
+}
 rowCount++;
 }
 
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 9cd78b3..9ecf876 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java

[phoenix] branch master updated: PHOENIX-5564 Restructure read repair to improve readability and correctness

2019-11-15 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 98e2b16  PHOENIX-5564 Restructure read repair to improve readability 
and correctness
98e2b16 is described below

commit 98e2b1686e134bef3cfc96d03caeb888f2734c6f
Author: Kadir 
AuthorDate: Sat Nov 9 17:05:18 2019 -0800

PHOENIX-5564 Restructure read repair to improve readability and correctness
---
 .../UngroupedAggregateRegionObserver.java  |  26 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 141 ++---
 2 files changed, 113 insertions(+), 54 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index c3d8dd9..a018733 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -104,6 +104,7 @@ import 
org.apache.phoenix.hbase.index.exception.IndexWriteException;
 import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
+import org.apache.phoenix.index.GlobalIndexChecker;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
@@ -1174,8 +1175,7 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 if (!includedColumns.contains(column)) {
 if (del == null) {
 Cell cell = row.get(0);
-rowKey = new byte[cell.getRowLength()];
-System.arraycopy(cell.getRowArray(), 
cell.getRowOffset(), rowKey, 0, cell.getRowLength());
+rowKey = CellUtil.cloneRow(cell);
 del = new Delete(rowKey);
 }
 del.addColumns(column.getFamily(), column.getQualifier(), 
ts);
@@ -1253,15 +1253,6 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 del.addDeleteMarker(cell);
 }
 }
-if (indexRowKey != null) {
-// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
-// Check if the data table row we have just 
scanned matches with the index row key.
-// If not, there is no need to build the index 
row from this data table row,
-// and just return zero row count.
-if (!checkIndexRow(indexRowKey, put)) {
-break;
-}
-}
 uuidValue = commitIfReady(uuidValue);
 if (!scan.isRaw()) {
 Delete deleteMarkers = 
generateDeleteMarkers(row);
@@ -1271,6 +1262,19 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 uuidValue = commitIfReady(uuidValue);
 }
 }
+if (indexRowKey != null) {
+// GlobalIndexChecker passed the index row 
key. This is to build a single index row.
+// Check if the data table row we have just 
scanned matches with the index row key.
+// If not, there is no need to build the index 
row from this data table row,
+// and just return zero row count.
+if (checkIndexRow(indexRowKey, put)) {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.INDEX_ROW_EXISTS.getValue();
+}
+else {
+rowCount = 
GlobalIndexChecker.RebuildReturnCode.NO_INDEX_ROW.getValue();
+}
+break;
+}
 rowCount++;
 }
 
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
index 5a22a4b..9d27406 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/GlobalIndexChecker.java
+++ 
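The hunk above also changes how a single-row rebuild reports its outcome: instead of breaking out of the loop before the rebuilt mutation reaches commitIfReady(), the check now runs afterwards and overloads the scan's row count with a GlobalIndexChecker.RebuildReturnCode value, so the caller learns whether the requested index row was actually produced. The sketch below mirrors that convention with an illustrative enum; the constant names come from the diff, but the numeric values are assumptions, and the real definitions live in Phoenix's GlobalIndexChecker.

import java.util.Arrays;

public class RebuildReturnCodeSketch {
    // Illustrative stand-in for GlobalIndexChecker.RebuildReturnCode; the numeric
    // values here are assumed for the example only.
    enum ReturnCode {
        NO_INDEX_ROW(1),
        INDEX_ROW_EXISTS(2);

        private final long value;
        ReturnCode(long value) { this.value = value; }
        long getValue() { return value; }
    }

    // Stand-in for the checkIndexRow(indexRowKey, put) decision: does the index row
    // rebuilt from the scanned data row match the row key the checker asked for?
    static long rebuildSingleRow(byte[] requestedIndexRowKey, byte[] rebuiltIndexRowKey) {
        return Arrays.equals(requestedIndexRowKey, rebuiltIndexRowKey)
                ? ReturnCode.INDEX_ROW_EXISTS.getValue()
                : ReturnCode.NO_INDEX_ROW.getValue();
    }

    public static void main(String[] args) {
        System.out.println(rebuildSingleRow(new byte[] {1, 2}, new byte[] {1, 2})); // 2: index row exists
        System.out.println(rebuildSingleRow(new byte[] {1, 2}, new byte[] {9}));    // 1: no index row
    }
}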

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #597

2019-11-15 Thread Apache Jenkins Server
See 

Changes:


--
[...truncated 103.86 KB...]
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.2:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.003 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.609 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.512 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.533 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.889 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
187.188 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
186.421 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
190.287 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
183.377 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 278.101 
s - in org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.498 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 146.771 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.655 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.388 s 
- in org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.647 s 
- in org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.882 
s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.882 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 198.734 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.347 s 
- in 

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1181

2019-11-15 Thread Apache Jenkins Server
See 


Changes:


--
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H37 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins8439796902689102617.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386428
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98963736 kB
MemFree:18110960 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G   66M  9.4G   1% /run
/dev/sda3   3.6T  557G  2.9T  16% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/loop0   58M   58M 0 100% /snap/snapcraft/3308
/dev/loop2   90M   90M 0 100% /snap/core/7917
/dev/loop3   58M   58M 0 100% /snap/snapcraft/3440
/dev/sda2   473M  330M  119M  74% /boot
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
/dev/loop6   55M   55M 0 100% /snap/lxd/12224
/dev/loop1   55M   55M 0 100% /snap/lxd/12317
/dev/loop5   90M   90M 0 100% /snap/core/8039
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
apache-maven-3.6.2
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 
4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its 
dependencies could not be resolved: Failed to read artifact descriptor for 
org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact 
org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central 
(https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version 
-> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure


Apache-Phoenix | Master | Build Successful

2019-11-15 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[chinmayskulkarni] PHOENIX-5545: DropChildViews Task fails for a base table when its child



Build times for last couple of runs. Latest build time is the rightmost | Legend blue: normal, red: test failure, gray: timeout


Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-master/2571/

2019-11-15 Thread Apache Jenkins Server
[...truncated 27 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-master/2571/


Affected test class(es):
Set(['as SYSTEM'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

Build failed in Jenkins: Phoenix-4.x-HBase-1.4 #311

2019-11-15 Thread Apache Jenkins Server
See 


Changes:

[chinmayskulkarni] PHOENIX-5545: DropChildViews Task fails for a base table 
when its child


--
[...truncated 104.47 KB...]
[INFO] Tests run: 66, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 538.225 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   
HashJoinPersistentCacheIT>ParallelStatsDisabledIT.doSetup:60->BaseTest.setUpTestDriver:515->BaseTest.setUpTestDriver:520->BaseTest.checkClusterInitialized:434->BaseTest.setUpTestCluster:448->BaseTest.initMiniCluster:549
 » Runtime
[INFO] 
[ERROR] Tests run: 3699, Failures: 0, Errors: 1, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.2:integration-test 
(HBaseManagedTimeTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.2:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.004 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.426 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.685 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.611 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.855 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.369 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 143.863 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.739 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.877 s 
- in org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 185.824 
s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.289 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 304.562 
s - in org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.186 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.34 s 
- in org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.822 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running 
org.apache.phoenix.end2end.OrderByWithServerClientSpoolingDisabledIT
[INFO] Running 

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #596

2019-11-15 Thread Apache Jenkins Server
See 


Changes:

[chinmayskulkarni] PHOENIX-5545: DropChildViews Task fails for a base table 
when its child


--
[...truncated 140.19 KB...]

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: 
org.apache.hadoop.hbase.DoNotRetryIOException: SCHEMA2.N60: 
java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new 
native thread
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2070)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17218)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8350)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2170)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2152)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2376)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: 
java.lang.OutOfMemoryError: unable to create new native thread
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:220)
at 
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
at 
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
at 
org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
at 
org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
at 
org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
at org.apache.phoenix.util.ViewUtil.findRelatedViews(ViewUtil.java:124)
at org.apache.phoenix.util.ViewUtil.dropChildViews(ViewUtil.java:197)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1725)
... 9 more
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to 
create new native thread
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:220)
at 
org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:228)
at 
org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:202)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1298)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1197)
at 
org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
... 18 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:405)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:128)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:139)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:179)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:153)
at 
org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
at 

Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-11-15 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[chinmayskulkarni] PHOENIX-5545: DropChildViews Task fails for a base table when its child



Build times for last couple of runs. Latest build time is the rightmost | Legend blue: normal, red: test failure, gray: timeout