Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-master/2597/

2019-12-13 Thread Apache Jenkins Server
[...truncated 48 lines...]
Looking at the log, list of test(s) that timed out:

Build:
https://builds.apache.org/job/Phoenix-master/2597/


Affected test class(es):
Set(['as SYSTEM'])
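`Set(['as SYSTEM'])` is not a real test class name; the crawler evidently mis-parsed the truncated log. For illustration only, a crawler of this kind could be sketched as below (`TimeoutCrawlerSketch` is hypothetical, not the actual Apache build script): a class that appears in a `Running` line but never in a `- in` results line is presumed to have timed out.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TimeoutCrawlerSketch {
    // A surefire/failsafe log prints "[INFO] Running <class>" when a test
    // class starts and "... - in <class>" when it finishes; a class with no
    // finish line is presumed to have timed out.
    public static Set<String> timedOut(String log) {
        Set<String> started = new HashSet<>();
        Set<String> finished = new HashSet<>();
        Matcher m = Pattern.compile("\\[INFO\\] Running (\\S+)").matcher(log);
        while (m.find()) started.add(m.group(1));
        m = Pattern.compile("- in (\\S+)").matcher(log);
        while (m.find()) finished.add(m.group(1));
        started.removeAll(finished);
        return started;
    }

    public static void main(String[] args) {
        String log = "[INFO] Running a.FooIT\n"
                + "[INFO] Tests run: 1, Failures: 0 - in a.FooIT\n"
                + "[INFO] Running a.BarIT\n";
        System.out.println(timedOut(log)); // prints [a.BarIT]
    }
}
```

A parse like this would also explain the garbage above: a stray `- in` or `Running` fragment in a truncated line ends up in the set.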


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

Build failed in Jenkins: Phoenix | Master #2597

2019-12-13 Thread Apache Jenkins Server
See 


Changes:

[swaroopa.kadam07] PHOENIX-5618 IndexScrutinyTool fix for array type columns (#653)


--
[...truncated 1.34 MB...]
[INFO] Running org.apache.phoenix.end2end.StatsEnabledSplitSystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 896.642 s - in org.apache.phoenix.end2end.PermissionNSEnabledIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.812 s - in org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 895.412 s - in org.apache.phoenix.end2end.PermissionsCacheIT
[INFO] Running org.apache.phoenix.end2end.SystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.UpdateCacheAcrossDifferentClientsIT
[INFO] Running org.apache.phoenix.end2end.TableSnapshotReadsMapReduceIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.294 s - in org.apache.phoenix.end2end.StatsEnabledSplitSystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.UpgradeIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.254 s - in org.apache.phoenix.end2end.TableSnapshotReadsMapReduceIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.831 s - in org.apache.phoenix.end2end.SystemCatalogIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 110.461 s - in org.apache.phoenix.end2end.UpdateCacheAcrossDifferentClientsIT
[INFO] Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
[INFO] Running org.apache.phoenix.end2end.index.ImmutableIndexIT
[INFO] Running org.apache.phoenix.end2end.index.GlobalIndexCheckerIT
[INFO] Running org.apache.phoenix.end2end.index.IndexRebuildIncrementDisableCountIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.336 s - in org.apache.phoenix.end2end.index.IndexRebuildIncrementDisableCountIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.458 s - in org.apache.phoenix.end2end.UpgradeIT
[INFO] Running org.apache.phoenix.end2end.index.LocalIndexIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 171.259 s - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexExtendedIT
[WARNING] Tests run: 27, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 38.761 s - in org.apache.phoenix.end2end.index.MutableIndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexFailureWithNamespaceIT
[ERROR] Tests run: 17, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 1,576.269 s <<< FAILURE! - in org.apache.phoenix.end2end.ParameterizedIndexUpgradeToolIT
[ERROR] testToolWithInputFileParameter[IndexUpgradeToolIT_mutable=true,upgrade=false,isNamespaceEnabled=false](org.apache.phoenix.end2end.ParameterizedIndexUpgradeToolIT)  Time elapsed: 287.983 s  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: java.lang.Exception: java.lang.OutOfMemoryError: unable to create new native thread
    at org.apache.phoenix.end2end.ParameterizedIndexUpgradeToolIT.setup(ParameterizedIndexUpgradeToolIT.java:103)
Caused by: java.io.IOException: java.lang.Exception: java.lang.OutOfMemoryError: unable to create new native thread

[ERROR] testToolWithInputFileParameter[IndexUpgradeToolIT_mutable=true,upgrade=false,isNamespaceEnabled=false](org.apache.phoenix.end2end.ParameterizedIndexUpgradeToolIT)  Time elapsed: 287.983 s  <<< ERROR!
java.lang.NullPointerException
    at org.apache.phoenix.end2end.ParameterizedIndexUpgradeToolIT.cleanup(ParameterizedIndexUpgradeToolIT.java:340)

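The `OutOfMemoryError: unable to create new native thread` above means the JVM could not obtain a new OS thread; this is almost always a process/thread limit on the build host (e.g. `ulimit -u` or `kernel/threads-max`), not heap exhaustion. A minimal, generic way to sample the JVM's live thread count for comparison against those limits (an illustrative sketch, unrelated to the Jenkins job itself):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadPressure {
    // Number of live threads in this JVM, via the standard ThreadMXBean.
    public static int liveThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        return mx.getThreadCount();
    }

    public static void main(String[] args) {
        System.out.println("live JVM threads: " + liveThreads());
        // OS-side limits to compare against (checked outside the JVM):
        //   ulimit -u                          per-user process/thread limit
        //   cat /proc/sys/kernel/threads-max   system-wide thread cap (Linux)
    }
}
```

When many HBase minicluster tests run in parallel, each cluster spawns hundreds of threads, so the per-user limit is typically the one that trips first.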
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexRebuilderIT
[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 358.687 s - in org.apache.phoenix.end2end.index.GlobalIndexCheckerIT
[INFO] Running org.apache.phoenix.end2end.index.PartialIndexRebuilderIT
[INFO] Running org.apache.phoenix.end2end.index.PhoenixMRJobSubmitterIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinCacheIT
[INFO] Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
[INFO] Running org.apache.phoenix.end2end.index.ShortViewIndexIdIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinNoSpoolingIT
[INFO] Running org.apache.phoenix.execute.PartialCommitIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.233 s - in org.apache.phoenix.end2end.index.PhoenixMRJobSubmitterIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.249 s - in org.apache.phoenix.end2end.join.HashJoinCacheIT
[INFO] Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.504 s - in org.a

Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-12-13 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[s.kadam] PHOENIX-5618 IndexScrutinyTool fix for array type columns



Build times for last couple of runs. Latest build time is the rightmost. | Legend blue: normal, red: test failure, gray: timeout


Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #622

2019-12-13 Thread Apache Jenkins Server
See 


Changes:

[s.kadam] PHOENIX-5618 IndexScrutinyTool fix for array type columns


--
[...truncated 103.11 KB...]
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.0:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.003 s - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.982 s - in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.525 s - in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.7 s - in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.855 s - in org.apache.phoenix.end2end.CountDistinctCompressionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 180.654 s - in org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 179.665 s - in org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 181.935 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 179.399 s - in org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 283.601 s - in org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.404 s - in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.634 s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 150.109 s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.065 s - in org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 186.683 s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 192.051 s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.859 s - in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.419 s - in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.02 s <<< FAILURE! - in org.apache.phoenix.end2end.


[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5618 IndexScrutinyTool fix for array type columns

2019-12-13 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 84e1b92  PHOENIX-5618 IndexScrutinyTool fix for array type columns
84e1b92 is described below

commit 84e1b92671472ac337cb929ecee57df6830b9e87
Author: Gokcen Iskender 
AuthorDate: Thu Dec 12 18:15:09 2019 -0800

PHOENIX-5618 IndexScrutinyTool fix for array type columns

Signed-off-by: s.kadam 
---
 .../phoenix/end2end/IndexScrutinyToolIT.java   | 55 ++
 .../mapreduce/index/IndexScrutinyMapper.java   | 32 -
 2 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexScrutinyToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexScrutinyToolIT.java
index c6f5418..13df56d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexScrutinyToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexScrutinyToolIT.java
@@ -46,6 +46,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.mapreduce.Counters;
 import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.CsvBulkImportUtil;
 import org.apache.phoenix.mapreduce.index.IndexScrutinyTableOutput;
 import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
@@ -163,6 +164,51 @@ public class IndexScrutinyToolIT extends IndexScrutinyToolBaseIT {
         assertEquals(numIndexRows, countRows(conn, indexTableFullName));
     }
 
+    @Test public void testScrutinyOnArrayTypes() throws Exception {
+        String dataTableName = generateUniqueName();
+        String indexTableName = generateUniqueName();
+        String dataTableDDL = "CREATE TABLE %s (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, VB VARBINARY)";
+        String indexTableDDL = "CREATE INDEX %s ON %s (NAME) INCLUDE (VB)";
+        String upsertData = "UPSERT INTO %s VALUES (?, ?, ?)";
+        String upsertIndex = "UPSERT INTO %s (\"0:NAME\", \":ID\", \"0:VB\") values (?,?,?)";
+
+        try (Connection conn =
+                DriverManager.getConnection(getUrl(), PropertiesUtil.deepCopy(TEST_PROPERTIES))) {
+            conn.createStatement().execute(String.format(dataTableDDL, dataTableName));
+            conn.createStatement().execute(String.format(indexTableDDL, indexTableName, dataTableName));
+            // insert two rows
+            PreparedStatement upsertDataStmt = conn.prepareStatement(String.format(upsertData, dataTableName));
+            upsertRow(upsertDataStmt, 1, "name-1", new byte[] {127, 0, 0, 1});
+            upsertRow(upsertDataStmt, 2, "name-2", new byte[] {127, 1, 0, 5});
+            conn.commit();
+
+            List<Job> completedJobs = runScrutiny(null, dataTableName, indexTableName);
+            Job job = completedJobs.get(0);
+            assertTrue(job.isSuccessful());
+            Counters counters = job.getCounters();
+            assertEquals(2, getCounterValue(counters, VALID_ROW_COUNT));
+            assertEquals(0, getCounterValue(counters, INVALID_ROW_COUNT));
+
+            // Now insert a different varbinary row
+            upsertRow(upsertDataStmt, 3, "name-3", new byte[] {1, 1, 1, 1});
+            conn.commit();
+
+            PreparedStatement upsertIndexStmt = conn.prepareStatement(String.format(upsertIndex, indexTableName));
+            upsertIndexStmt.setString(1, "name-3");
+            upsertIndexStmt.setInt(2, 3);
+            upsertIndexStmt.setBytes(3, new byte[] {0, 0, 0, 1});
+            upsertIndexStmt.executeUpdate();
+            conn.commit();
+
+            completedJobs = runScrutiny(null, dataTableName, indexTableName);
+            job = completedJobs.get(0);
+            assertTrue(job.isSuccessful());
+            counters = job.getCounters();
+            assertEquals(2, getCounterValue(counters, VALID_ROW_COUNT));
+            assertEquals(1, getCounterValue(counters, INVALID_ROW_COUNT));
+        }
+    }
+
     /**
      * Tests running a scrutiny while updates and deletes are happening.
      * Since CURRENT_SCN is set, the scrutiny shouldn't report any issue.
@@ -643,6 +689,15 @@ public class IndexScrutinyToolIT extends IndexScrutinyToolBaseIT {
         stmt.executeUpdate();
     }
 
+    private void upsertRow(PreparedStatement stmt, int id, String name, byte[] val) throws SQLException {
+        int index = 1;
+        // insert row
+        stmt.setInt(index++, id);
+        stmt.setString(index++, name);
+        stmt.setBytes(index++, val);
+        stmt.executeUpdate();
+    }
+
     private int deleteRow(String fullTableName, String whereCondition) throws SQLException {
         String deleteSql = String.format(DELETE_SQL, indexTableFullName) + whereCondit
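The committed test drives IndexScrutinyTool and asserts on its VALID_ROW_COUNT / INVALID_ROW_COUNT counters. The counting idea can be shown with a toy in-memory comparison (a sketch only; `ScrutinySketch` is hypothetical and not the real MapReduce-based tool): each data row's covered value is looked up in the index, matches count as valid, mismatches or missing rows as invalid.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class ScrutinySketch {
    // Toy scrutiny over in-memory "tables" keyed by the indexed column (NAME):
    // returns {validRows, invalidRows}.
    public static int[] scrutinize(Map<String, byte[]> dataTable, Map<String, byte[]> indexTable) {
        int valid = 0, invalid = 0;
        for (Map.Entry<String, byte[]> row : dataTable.entrySet()) {
            byte[] covered = indexTable.get(row.getKey());
            if (covered != null && Arrays.equals(covered, row.getValue())) {
                valid++;   // index row agrees with the data row
            } else {
                invalid++; // missing or mismatched index row
            }
        }
        return new int[] { valid, invalid };
    }

    public static void main(String[] args) {
        Map<String, byte[]> data = new HashMap<>();
        data.put("name-1", new byte[] {127, 0, 0, 1});
        data.put("name-2", new byte[] {127, 1, 0, 5});
        data.put("name-3", new byte[] {1, 1, 1, 1});
        Map<String, byte[]> index = new HashMap<>(data);
        index.put("name-3", new byte[] {0, 0, 0, 1}); // corrupt one index row
        int[] counts = scrutinize(data, index);
        System.out.println(counts[0] + " valid, " + counts[1] + " invalid"); // prints 2 valid, 1 invalid
    }
}
```

This mirrors the test's expectation: after deliberately upserting a mismatched `VB` value into the index, scrutiny reports two valid rows and one invalid row.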

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5618 IndexScrutinyTool fix for array type columns

2019-12-13 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 253dc1a  PHOENIX-5618 IndexScrutinyTool fix for array type columns
253dc1a is described below

commit 253dc1af9f1d7f49976d7db059c396e1293c6b80
Author: Gokcen Iskender 
AuthorDate: Thu Dec 12 18:15:09 2019 -0800

PHOENIX-5618 IndexScrutinyTool fix for array type columns

Signed-off-by: s.kadam 
---
 .../phoenix/end2end/IndexScrutinyToolIT.java   | 55 ++
 .../mapreduce/index/IndexScrutinyMapper.java   | 32 -
 2 files changed, 86 insertions(+), 1 deletion(-)

[diff omitted: byte-for-byte identical to the 4.x-HBase-1.5 commit above]

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5618 IndexScrutinyTool fix for array type columns

2019-12-13 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 71d186c  PHOENIX-5618 IndexScrutinyTool fix for array type columns
71d186c is described below

commit 71d186c07461e330091e7a2a6aaac672bb3ae073
Author: Gokcen Iskender 
AuthorDate: Thu Dec 12 18:15:09 2019 -0800

PHOENIX-5618 IndexScrutinyTool fix for array type columns

Signed-off-by: s.kadam 
---
 .../phoenix/end2end/IndexScrutinyToolIT.java   | 55 ++
 .../mapreduce/index/IndexScrutinyMapper.java   | 32 -
 2 files changed, 86 insertions(+), 1 deletion(-)

[diff omitted: byte-for-byte identical to the 4.x-HBase-1.5 commit above]

[phoenix] branch master updated: PHOENIX-5618 IndexScrutinyTool fix for array type columns (#653)

2019-12-13 Thread skadam
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new d73765a  PHOENIX-5618 IndexScrutinyTool fix for array type columns 
(#653)
d73765a is described below

commit d73765a22ab0a0c9cf558cc6fa8dcef3c0c4b386
Author: Gokcen Iskender <47044859+gokc...@users.noreply.github.com>
AuthorDate: Fri Dec 13 16:58:13 2019 -0800

PHOENIX-5618 IndexScrutinyTool fix for array type columns (#653)
---
 .../phoenix/end2end/IndexScrutinyToolIT.java   | 55 ++
 .../mapreduce/index/IndexScrutinyMapper.java   | 32 -
 2 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexScrutinyToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexScrutinyToolIT.java
index 1f9de15..01cdda3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexScrutinyToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexScrutinyToolIT.java
@@ -45,6 +45,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.mapreduce.Counters;
 import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.CsvBulkImportUtil;
 import org.apache.phoenix.mapreduce.index.IndexScrutinyTableOutput;
 import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
@@ -161,6 +162,51 @@ public class IndexScrutinyToolIT extends IndexScrutinyToolBaseIT {
         assertEquals(numIndexRows, countRows(conn, indexTableFullName));
     }
 
+    @Test public void testScrutinyOnArrayTypes() throws Exception {
+        String dataTableName = generateUniqueName();
+        String indexTableName = generateUniqueName();
+        String dataTableDDL = "CREATE TABLE %s (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, VB VARBINARY)";
+        String indexTableDDL = "CREATE INDEX %s ON %s (NAME) INCLUDE (VB)";
+        String upsertData = "UPSERT INTO %s VALUES (?, ?, ?)";
+        String upsertIndex = "UPSERT INTO %s (\"0:NAME\", \":ID\", \"0:VB\") values (?,?,?)";
+
+        try (Connection conn =
+                DriverManager.getConnection(getUrl(), PropertiesUtil.deepCopy(TEST_PROPERTIES))) {
+            conn.createStatement().execute(String.format(dataTableDDL, dataTableName));
+            conn.createStatement().execute(String.format(indexTableDDL, indexTableName, dataTableName));
+            // insert two rows
+            PreparedStatement upsertDataStmt = conn.prepareStatement(String.format(upsertData, dataTableName));
+            upsertRow(upsertDataStmt, 1, "name-1", new byte[] {127, 0, 0, 1});
+            upsertRow(upsertDataStmt, 2, "name-2", new byte[] {127, 1, 0, 5});
+            conn.commit();
+
+            List<Job> completedJobs = runScrutiny(null, dataTableName, indexTableName);
+            Job job = completedJobs.get(0);
+            assertTrue(job.isSuccessful());
+            Counters counters = job.getCounters();
+            assertEquals(2, getCounterValue(counters, VALID_ROW_COUNT));
+            assertEquals(0, getCounterValue(counters, INVALID_ROW_COUNT));
+
+            // Now insert a different varbinary row
+            upsertRow(upsertDataStmt, 3, "name-3", new byte[] {1, 1, 1, 1});
+            conn.commit();
+
+            PreparedStatement upsertIndexStmt = conn.prepareStatement(String.format(upsertIndex, indexTableName));
+            upsertIndexStmt.setString(1, "name-3");
+            upsertIndexStmt.setInt(2, 3);
+            upsertIndexStmt.setBytes(3, new byte[] {0, 0, 0, 1});
+            upsertIndexStmt.executeUpdate();
+            conn.commit();
+
+            completedJobs = runScrutiny(null, dataTableName, indexTableName);
+            job = completedJobs.get(0);
+            assertTrue(job.isSuccessful());
+            counters = job.getCounters();
+            assertEquals(2, getCounterValue(counters, VALID_ROW_COUNT));
+            assertEquals(1, getCounterValue(counters, INVALID_ROW_COUNT));
+        }
+    }
+
     /**
      * Tests running a scrutiny while updates and deletes are happening.
      * Since CURRENT_SCN is set, the scrutiny shouldn't report any issue.
@@ -642,6 +688,15 @@ public class IndexScrutinyToolIT extends IndexScrutinyToolBaseIT {
         stmt.executeUpdate();
     }
 
+    private void upsertRow(PreparedStatement stmt, int id, String name, byte[] val) throws SQLException {
+        int index = 1;
+        // insert row
+        stmt.setInt(index++, id);
+        stmt.setString(index++, name);
+        stmt.setBytes(index++, val);
+        stmt.executeUpdate();
+    }
+
     private int deleteRow(String fullTableName, String whereCondition) throws SQLException {
         String deleteSql = String.format(DELETE_SQL, indexTableFullName) + 
svn commit: r1871355 - in /phoenix/site: publish/index.html publish/language/datatypes.html publish/language/functions.html publish/language/index.html publish/team.html source/src/site/markdown/index

2019-12-13 Thread elserj
Author: elserj
Date: Sat Dec 14 00:12:29 2019
New Revision: 1871355

URL: http://svn.apache.org/viewvc?rev=1871355&view=rev
Log:
Remove the search-hadoop.com link as it seems like it changed owners

We can re-instate it after we make sure that it isn't owned by some
entity which will try to take advantage of our outbound link to it.

Modified:
phoenix/site/publish/index.html
phoenix/site/publish/language/datatypes.html
phoenix/site/publish/language/functions.html
phoenix/site/publish/language/index.html
phoenix/site/publish/team.html
phoenix/site/source/src/site/markdown/index.md

Modified: phoenix/site/publish/index.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/index.html?rev=1871355&r1=1871354&r2=1871355&view=diff
==
--- phoenix/site/publish/index.html (original)
+++ phoenix/site/publish/index.html Sat Dec 14 00:12:29 2019
@@ -1,7 +1,7 @@
 
 
 
 
@@ -177,29 +177,30 @@
 
  
   
-Download latest Apache Phoenix binary and source release 
artifacts  
-Browse 
through Apache Phoenix JIRAs  
-Sync and build Apache Phoenix from source code  
-
-   http://search-hadoop.com/?"; method="get"> 
- 
- 
- 
-  
- 
-
-
- 
-  
-   
-   http://search-hadoop.com/?"; method="get"> 
- 
- 
- 
- 
+Download latest Apache Phoenix binary and source release 
artifacts  
+Browse 
through Apache Phoenix JIRAs  
+Sync and build Apache Phoenix from source code  
+   
   
  
 
+
 News:  Phoenix next major release 5.0.0 
has been released and is available for download here    
            https://twitter.com/ApachePhoenix";> 

   

Modified: phoenix/site/publish/language/datatypes.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/language/datatypes.html?rev=1871355&r1=1871354&r2=1871355&view=diff
==
--- phoenix/site/publish/language/datatypes.html (original)
+++ phoenix/site/publish/language/datatypes.html Sat Dec 14 00:12:29 2019
@@ -1,7 +1,7 @@
 
 
 
 

Modified: phoenix/site/publish/language/functions.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/language/functions.html?rev=1871355&r1=1871354&r2=1871355&view=diff
==
--- phoenix/site/publish/language/functions.html (original)
+++ phoenix/site/publish/language/functions.html Sat Dec 14 00:12:29 2019
@@ -1,7 +1,7 @@
 
 
 
 

Modified: phoenix/site/publish/language/index.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/language/index.html?rev=1871355&r1=1871354&r2=1871355&view=diff
==
--- phoenix/site/publish/language/index.html (original)
+++ phoenix/site/publish/language/index.html Sat Dec 14 00:12:29 2019
@@ -1,7 +1,7 @@
 
 
 
 

Modified: phoenix/site/publish/team.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/team.html?rev=1871355&r1=1871354&r2=1871355&view=diff
==
--- phoenix/site/publish/team.html (original)
+++ phoenix/site/publish/team.html Sat Dec 14 00:12:29 2019
@@ -1,7 +1,7 @@
 
 
 
 
@@ -403,42 +403,48 @@
Committer 


+   Istvan Toth  
+   Cloudera  
+   mailto:st...@apache.org";>st...@apache.org  
+   Committer 
+   
+   
Jaanai Zhang  
Alibaba  
mailto:jaa...@apache.org";>jaa...@apache.org  
Committer 

-   
+   
Jan Fernando  
Salesforce  
mailto:jferna...@apache.org";>jferna...@apache.org  
Committer 

-   
+   
Kadir Ozdemir  
Salesforce  
mailto:ka...@apache.org";>ka...@apache.org  
Committer 

-   
+   
Kevin Liew  
Simba Technologies  
mailto:kl...@apache.org";>kl...@apache.org  
Committer 

-   
+   
Mihir Monani  
Salesforce  
mailto:mihir6...@apache.org";>mihir6...@apache.org  
Committer 

-   
+   
Swaroopa Kadam  
Salesforce  
mailto:ska...@apache.org";>ska...@apache.org  
Committer 

-   
+   
Ohad Shacham  
Yahoo Research, Oath  
mailto:oh...@apache.org";>oh...@apache.org  

Modified: phoenix/site/source/src/site/markdown/index.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/index.md?rev=1871355&r1=1871354&r2=1871355&view=diff
==
--- phoenix/site/source/src/site/markdown/index.md (original)
+++ phoenix/site/source/src/site/markdown/index.md Sat Dec 14 00:12:29 2019
@@ -9,28 +9,28 @@
 
 
 
-
+
 
 Download latest Apache Phoenix binary and source release 
artifacts
 
-
+
 
 Browse through Apache Phoenix JIRAs
 
-
+
 
 Sync and build Apache Phoenix from source code
 
-
+
 

Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-12-13 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[gjacoby] PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on



Build times for last couple of runs. Latest build time is the right-most. | Legend: blue: normal, red: test failure, gray: timeout


Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #621

2019-12-13 Thread Apache Jenkins Server
See 


Changes:

[gjacoby] PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on


--
[...truncated 104.73 KB...]
[ERROR] Errors: 
[ERROR]   AlterSessionIT>ParallelStatsDisabledIT.doSetup:51->BaseTest.setUpTestDriver:515->BaseTest.setUpTestDriver:520->BaseTest.checkClusterInitialized:434->BaseTest.setUpTestCluster:448->BaseTest.initMiniCluster:549 » Runtime
[INFO] 
[ERROR] Tests run: 3705, Failures: 0, Errors: 1, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.0:integration-test 
(HBaseManagedTimeTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.0:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.003 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.712 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.462 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.668 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.719 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
179.459 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
179.115 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
183.438 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
181.458 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 290.266 
s - in org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.224 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.024 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.873 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.203 s 
- in org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 188.023 
s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 194.643 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForP


[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an existing table

2019-12-13 Thread gjacoby
This is an automated email from the ASF dual-hosted git repository.

gjacoby pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 199f152  PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads 
IndexRegionObserver on an existing table
199f152 is described below

commit 199f1528b19dd09d15316d2aaa1d207fdf684d32
Author: Geoffrey Jacoby 
AuthorDate: Thu Nov 21 16:52:39 2019 -0800

PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an 
existing table
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   |  57 ++---
 .../phoenix/end2end/index/IndexCoprocIT.java   | 253 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  65 --
 3 files changed, 329 insertions(+), 46 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 1b164a4..3bf69a1 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,9 +18,9 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.hbase.index.Indexer;
 import org.apache.phoenix.index.GlobalIndexChecker;
@@ -59,6 +59,7 @@ import java.util.UUID;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 
 @RunWith(Parameterized.class)
@@ -139,6 +140,7 @@ public class ParameterizedIndexUpgradeToolIT extends 
BaseTest {
 Boolean.toString(isNamespaceEnabled));
 clientProps.put(QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE,
 Boolean.toString(isNamespaceEnabled));
+clientProps.put(DROP_METADATA_ATTRIB, Boolean.toString(true));
 serverProps.putAll(clientProps);
 //To mimic the upgrade/rollback scenario, so that table creation uses 
old/new design
 clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB,
@@ -228,23 +230,28 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find IndexRegionObserver for " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found Indexer on " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(Indexer.class.getName()));
 }
 }
 for (String index : indexList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertTrue("Couldn't find GlobalIndexChecker on " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 // Transactional indexes should not have new coprocessors
 for (String index : TRANSACTIONAL_INDEXES_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertFalse("Found GlobalIndexChecker on transactional index " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 for (String table : TRANSACTIONAL_TABLE_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found IndexRegionObserver on transactional table",
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
 }
 }
@@ -253,14 +260,17 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.
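
The recurring change in this diff is replacing JUnit's one-argument asserts with the two-argument form that carries a failure message, so a failed run names the offending table instead of printing a bare AssertionError. A minimal stand-alone sketch of the pattern (the `assertTrue` stand-in below mimics the `org.junit.Assert` signature so the snippet runs without the JUnit dependency; the table name is illustrative):

```java
public class AssertMessageSketch {

    // Stand-in for org.junit.Assert.assertTrue(String message, boolean condition)
    static void assertTrue(String message, boolean condition) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    // The kind of message the diff attaches to the coprocessor check,
    // so the failure output identifies which table is missing it.
    static String missingCoprocMessage(String table) {
        return "Can't find IndexRegionObserver for " + table;
    }

    public static void main(String[] args) {
        String table = "MY_TABLE";   // illustrative table name
        boolean hasCoproc = false;   // pretend the descriptor check failed
        try {
            assertTrue(missingCoprocMessage(table), hasCoproc);
        } catch (AssertionError e) {
            // The message now pinpoints the failing table.
            System.out.println(e.getMessage());
        }
    }
}
```

With the one-argument form, a parameterized run over many tables gives no hint which iteration failed; the message form makes the CI log above self-explanatory.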

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an existing table

2019-12-13 Thread gjacoby
This is an automated email from the ASF dual-hosted git repository.

gjacoby pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 2de262e  PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads 
IndexRegionObserver on an existing table
2de262e is described below

commit 2de262ef36093a22c7aebaa28502dda9227c44b4
Author: Geoffrey Jacoby 
AuthorDate: Thu Nov 21 16:52:39 2019 -0800

PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an 
existing table
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   |  40 +++-
 .../phoenix/end2end/index/IndexCoprocIT.java   | 259 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  79 ---
 3 files changed, 342 insertions(+), 36 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 79d6247..1df46a8 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.end2end;
 import com.google.common.collect.Maps;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.hbase.index.Indexer;
 import org.apache.phoenix.index.GlobalIndexChecker;
@@ -58,6 +59,7 @@ import java.util.UUID;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 
 @RunWith(Parameterized.class)
@@ -138,6 +140,7 @@ public class ParameterizedIndexUpgradeToolIT extends 
BaseTest {
 Boolean.toString(isNamespaceEnabled));
 clientProps.put(QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE,
 Boolean.toString(isNamespaceEnabled));
+clientProps.put(DROP_METADATA_ATTRIB, Boolean.toString(true));
 serverProps.putAll(clientProps);
 //To mimic the upgrade/rollback scenario, so that table creation uses 
old/new design
 clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB,
@@ -227,23 +230,28 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find IndexRegionObserver for " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found Indexer on " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(Indexer.class.getName()));
 }
 }
 for (String index : indexList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertTrue("Couldn't find GlobalIndexChecker on " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 // Transactional indexes should not have new coprocessors
 for (String index : TRANSACTIONAL_INDEXES_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertFalse("Found GlobalIndexChecker on transactional index " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 for (String table : TRANSACTIONAL_TABLE_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found IndexRegionObserver on transactional table",
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
 }
 }
@@ -252,14 +260,17 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find Indexer for " + table,
+  

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an existing table

2019-12-13 Thread gjacoby
This is an automated email from the ASF dual-hosted git repository.

gjacoby pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 264da8f  PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads 
IndexRegionObserver on an existing table
264da8f is described below

commit 264da8f1859c5ea54adb430379c10b5f3be1ae43
Author: Geoffrey Jacoby 
AuthorDate: Thu Nov 21 16:52:39 2019 -0800

PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an 
existing table
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   |  40 +++-
 .../phoenix/end2end/index/IndexCoprocIT.java   | 259 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  79 ---
 3 files changed, 342 insertions(+), 36 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 79d6247..1df46a8 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.end2end;
 import com.google.common.collect.Maps;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.hbase.index.Indexer;
 import org.apache.phoenix.index.GlobalIndexChecker;
@@ -58,6 +59,7 @@ import java.util.UUID;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 
 @RunWith(Parameterized.class)
@@ -138,6 +140,7 @@ public class ParameterizedIndexUpgradeToolIT extends 
BaseTest {
 Boolean.toString(isNamespaceEnabled));
 clientProps.put(QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE,
 Boolean.toString(isNamespaceEnabled));
+clientProps.put(DROP_METADATA_ATTRIB, Boolean.toString(true));
 serverProps.putAll(clientProps);
 //To mimic the upgrade/rollback scenario, so that table creation uses 
old/new design
 clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB,
@@ -227,23 +230,28 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find IndexRegionObserver for " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found Indexer on " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(Indexer.class.getName()));
 }
 }
 for (String index : indexList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertTrue("Couldn't find GlobalIndexChecker on " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 // Transactional indexes should not have new coprocessors
 for (String index : TRANSACTIONAL_INDEXES_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertFalse("Found GlobalIndexChecker on transactional index " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 for (String table : TRANSACTIONAL_TABLE_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found IndexRegionObserver on transactional table",
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
 }
 }
@@ -252,14 +260,17 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find Indexer for " + table,
+  

[phoenix] branch 4.14-HBase-1.3 updated (4a5b748 -> d779f85)

2019-12-13 Thread gjacoby
This is an automated email from the ASF dual-hosted git repository.

gjacoby pushed a change to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 4a5b748  PHOENIX-5615 Index read repair should delete all the cells of 
an invalid unverified row
 new 28cbba9  PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads 
IndexRegionObserver on an existing table
 new d779f85  PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads 
IndexRegionObserver on an existing table

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../end2end/ParameterizedIndexUpgradeToolIT.java   |  57 ++---
 .../phoenix/end2end/index/IndexCoprocIT.java   | 253 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  65 --
 3 files changed, 329 insertions(+), 46 deletions(-)
 create mode 100644 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexCoprocIT.java



[phoenix] 01/02: PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an existing table

2019-12-13 Thread gjacoby
This is an automated email from the ASF dual-hosted git repository.

gjacoby pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 28cbba9b652b7162646a9ca812cc56f052bd9d8b
Author: Geoffrey Jacoby 
AuthorDate: Thu Nov 21 16:52:39 2019 -0800

PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an 
existing table
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   |  57 ++---
 .../phoenix/end2end/index/IndexCoprocIT.java   | 259 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  65 --
 3 files changed, 335 insertions(+), 46 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 1b164a4..3bf69a1 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,9 +18,9 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.hbase.index.Indexer;
 import org.apache.phoenix.index.GlobalIndexChecker;
@@ -59,6 +59,7 @@ import java.util.UUID;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 
 @RunWith(Parameterized.class)
@@ -139,6 +140,7 @@ public class ParameterizedIndexUpgradeToolIT extends 
BaseTest {
 Boolean.toString(isNamespaceEnabled));
 clientProps.put(QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE,
 Boolean.toString(isNamespaceEnabled));
+clientProps.put(DROP_METADATA_ATTRIB, Boolean.toString(true));
 serverProps.putAll(clientProps);
 //To mimic the upgrade/rollback scenario, so that table creation uses 
old/new design
 clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB,
@@ -228,23 +230,28 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find IndexRegionObserver for " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found Indexer on " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(Indexer.class.getName()));
 }
 }
 for (String index : indexList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertTrue("Couldn't find GlobalIndexChecker on " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 // Transactional indexes should not have new coprocessors
 for (String index : TRANSACTIONAL_INDEXES_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertFalse("Found GlobalIndexChecker on transactional index " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 for (String table : TRANSACTIONAL_TABLE_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found IndexRegionObserver on transactional table",
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
 }
 }
@@ -253,14 +260,17 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find Indexer for " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(Indexer.class.getName()));
-Assert.asse

[phoenix] 02/02: PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an existing table

2019-12-13 Thread gjacoby
This is an automated email from the ASF dual-hosted git repository.

gjacoby pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit d779f85e84313f690154cb527e16d76e9a6cf5c5
Author: Geoffrey Jacoby 
AuthorDate: Tue Dec 10 14:12:14 2019 -0800

PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an 
existing table
---
 .../phoenix/end2end/index/IndexCoprocIT.java   | 54 ++
 1 file changed, 24 insertions(+), 30 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexCoprocIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexCoprocIT.java
index c6b4e7b..0870e37 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexCoprocIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexCoprocIT.java
@@ -62,7 +62,7 @@ public class IndexCoprocIT extends ParallelStatsDisabledIT {
 
 @Test
 public void testCreateCoprocs() throws Exception {
-String schemaName = "S" + generateUniqueName();
+String schemaName = "S_" + generateUniqueName();
 String tableName = "T_" + generateUniqueName();
 String indexName = "I_" + generateUniqueName();
 String physicalTableName = SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName,
@@ -77,28 +77,35 @@ public class IndexCoprocIT extends ParallelStatsDisabledIT {
 HTableDescriptor baseDescriptor = admin.getTableDescriptor(TableName.valueOf(physicalTableName));
 HTableDescriptor indexDescriptor = admin.getTableDescriptor(TableName.valueOf(physicalIndexName));
 
-assertCoprocsContains(IndexRegionObserver.class, baseDescriptor);
-assertCoprocsContains(GlobalIndexChecker.class, indexDescriptor);
+assertUsingNewCoprocs(baseDescriptor, indexDescriptor);
 
-removeCoproc(IndexRegionObserver.class, baseDescriptor, admin);
-removeCoproc(IndexRegionObserver.class, indexDescriptor, admin);
-removeCoproc(GlobalIndexChecker.class, indexDescriptor, admin);
+//Simulate an index upgrade rollback by removing coprocs and enabling old Indexer
+downgradeIndexCoprocs(admin, baseDescriptor, indexDescriptor);
 
-Map props = new HashMap();
-props.put(NonTxIndexBuilder.CODEC_CLASS_NAME_KEY, PhoenixIndexCodec.class.getName());
-Indexer.enableIndexing(baseDescriptor, NonTxIndexBuilder.class,
-props, 100);
-admin.modifyTable(baseDescriptor.getTableName(), baseDescriptor);
 baseDescriptor = admin.getTableDescriptor(TableName.valueOf(physicalTableName));
 indexDescriptor = admin.getTableDescriptor(TableName.valueOf(physicalIndexName));
 assertUsingOldCoprocs(baseDescriptor, indexDescriptor);
 
+//Now that we've downgraded, we make sure that a create statement won't re-upgrade us
 createBaseTable(schemaName, tableName, true, 0, null);
 baseDescriptor = admin.getTableDescriptor(TableName.valueOf(physicalTableName));
 indexDescriptor = admin.getTableDescriptor(TableName.valueOf(physicalIndexName));
 assertUsingOldCoprocs(baseDescriptor, indexDescriptor);
 }
 
+private void downgradeIndexCoprocs(Admin admin, HTableDescriptor baseDescriptor,
+   HTableDescriptor indexDescriptor) throws Exception {
+removeCoproc(IndexRegionObserver.class, baseDescriptor, admin);
+removeCoproc(IndexRegionObserver.class, indexDescriptor, admin);
+removeCoproc(GlobalIndexChecker.class, indexDescriptor, admin);
+
+Map props = new HashMap();
+props.put(NonTxIndexBuilder.CODEC_CLASS_NAME_KEY, PhoenixIndexCodec.class.getName());
+Indexer.enableIndexing(baseDescriptor, NonTxIndexBuilder.class,
+props, 100);
+admin.modifyTable(baseDescriptor.getTableName(), baseDescriptor);
+}
+
 @Test
 public void testCreateOnExistingHBaseTable() throws Exception {
 String schemaName = generateUniqueName();
@@ -140,16 +147,14 @@ public class IndexCoprocIT extends ParallelStatsDisabledIT {
 HTableDescriptor baseDescriptor = admin.getTableDescriptor(TableName.valueOf(physicalTableName));
 HTableDescriptor indexDescriptor = admin.getTableDescriptor(TableName.valueOf(physicalIndexName));
 
-assertCoprocsContains(IndexRegionObserver.class, baseDescriptor);
-assertCoprocsContains(GlobalIndexChecker.class, indexDescriptor);
+assertUsingNewCoprocs(baseDescriptor, indexDescriptor);
 String columnName = "foo";
 addColumnToBaseTable(schemaName, tableName, columnName);
-assertCoprocsContains(IndexRegionObserver.class, baseDescriptor);
-assertCoprocsContains(GlobalIndexChecker.class, indexDescriptor);
+assertUsingNewCoprocs(baseDescriptor, indexDescriptor);
 dropColumnToBaseTable(schemaName, tableName, columnName);
-

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an existing table

2019-12-13 Thread gjacoby
This is an automated email from the ASF dual-hosted git repository.

gjacoby pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 6b05c71  PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an existing table
6b05c71 is described below

commit 6b05c71a2ad67fa6b4c4655cc984421247646e4f
Author: Geoffrey Jacoby 
AuthorDate: Thu Nov 21 16:52:39 2019 -0800

    PHOENIX-5578 - CREATE TABLE IF NOT EXISTS loads IndexRegionObserver on an existing table
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   |  40 +++-
 .../phoenix/end2end/index/IndexCoprocIT.java   | 259 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  79 ---
 3 files changed, 342 insertions(+), 36 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 79d6247..1df46a8 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.end2end;
 import com.google.common.collect.Maps;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.hbase.index.Indexer;
 import org.apache.phoenix.index.GlobalIndexChecker;
@@ -58,6 +59,7 @@ import java.util.UUID;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 
 @RunWith(Parameterized.class)
@@ -138,6 +140,7 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 Boolean.toString(isNamespaceEnabled));
 clientProps.put(QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE,
 Boolean.toString(isNamespaceEnabled));
+clientProps.put(DROP_METADATA_ATTRIB, Boolean.toString(true));
 serverProps.putAll(clientProps);
 //To mimic the upgrade/rollback scenario, so that table creation uses old/new design
 clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB,
@@ -227,23 +230,28 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find IndexRegionObserver for " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found Indexer on " + table,
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(Indexer.class.getName()));
 }
 }
 for (String index : indexList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertTrue("Couldn't find GlobalIndexChecker on " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 // Transactional indexes should not have new coprocessors
 for (String index : TRANSACTIONAL_INDEXES_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(index))
+Assert.assertFalse("Found GlobalIndexChecker on transactional index " + index,
+admin.getTableDescriptor(TableName.valueOf(index))
 .hasCoprocessor(GlobalIndexChecker.class.getName()));
 }
 for (String table : TRANSACTIONAL_TABLE_LIST) {
-Assert.assertFalse(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertFalse("Found IndexRegionObserver on transactional table",
+admin.getTableDescriptor(TableName.valueOf(table))
 .hasCoprocessor(IndexRegionObserver.class.getName()));
 }
 }
@@ -252,14 +260,17 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 throws IOException {
 if (mutable) {
 for (String table : tableList) {
-Assert.assertTrue(admin.getTableDescriptor(TableName.valueOf(table))
+Assert.assertTrue("Can't find Indexer for " + table,
+  

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1209

2019-12-13 Thread Apache Jenkins Server
See 


Changes:


--
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H33 (ubuntu) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins7384702943902295093.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386461
max locked memory   (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98978420 kB
MemFree:39610860 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  1.6M  9.5G   1% /run
/dev/sda3   3.6T  400G  3.1T  12% /
tmpfs48G  4.0K   48G   1% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/loop4   58M   58M 0 100% /snap/snapcraft/3440
/dev/sda2   473M  159M  290M  36% /boot
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
/dev/loop5   55M   55M 0 100% /snap/lxd/12317
/dev/loop0   55M   55M 0 100% /snap/core18/1265
/dev/loop6   59M   59M 0 100% /snap/snapcraft/3720
/dev/loop7   55M   55M 0 100% /snap/lxd/12631
/dev/loop1   55M   55M 0 100% /snap/core18/1279
/dev/loop8   90M   90M 0 100% /snap/core/8213
/dev/loop2   90M   90M 0 100% /snap/core/8268
apache-maven-2.2.1
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.5
apache-maven-3.3.9
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
apache-maven-3.6.2
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Checking out files: 100% (4944/4944), done.
Switched to a new branch '0.98'
Branch '0.98' set up to track remote branch '0.98' from 'origin'.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure