[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.

2019-06-29 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 35ec58f  PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 
10s for tests.
35ec58f is described below

commit 35ec58faeece377b9eb85d074ec3dca3ce388612
Author: Lars Hofhansl 
AuthorDate: Sat Jun 29 22:38:46 2019 -0700

PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
---
 phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 799419f..431cc9b 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -623,7 +623,7 @@ public abstract class BaseTest {
 conf.setInt("hbase.assignment.zkevent.workers", 5);
 conf.setInt("hbase.assignment.threads.max", 5);
 conf.setInt("hbase.catalogjanitor.interval", 5000);
-conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 1000);
+conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
 conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
 conf.setInt(NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY, 1);
 conf.setInt(GLOBAL_INDEX_ROW_REPAIR_COUNT_ATTRIB, 5);
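
For context: QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB ("phoenix.task.handling.interval.ms", per the subject line) sets how often the server-side task coprocessor scans SYSTEM.TASK for pending work, so moving it from 1s to 10s keeps the test mini cluster quieter. Below is a minimal sketch of the setting in isolation; the QueryServices constant comes from the diff above, while the class and method names are illustrative only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.phoenix.query.QueryServices;

public class TaskIntervalConfigSketch {
    // Builds an HBase Configuration with the slower task-handling interval,
    // mirroring the BaseTest change above (1000 ms -> 10000 ms).
    public static Configuration buildTestConfig() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
        return conf;
    }
}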



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.

2019-06-29 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new c3996f1  PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 
10s for tests.
c3996f1 is described below

commit c3996f1eed7570f38475eb3969dba499faabbe9c
Author: Lars Hofhansl 
AuthorDate: Sat Jun 29 22:39:14 2019 -0700

PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
---
 phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index dd89f07..e852c16 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -623,7 +623,7 @@ public abstract class BaseTest {
 conf.setInt("hbase.assignment.zkevent.workers", 5);
 conf.setInt("hbase.assignment.threads.max", 5);
 conf.setInt("hbase.catalogjanitor.interval", 5000);
-conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 1000);
+conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
 conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
 conf.setInt(NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY, 1);
 conf.setInt(GLOBAL_INDEX_ROW_REPAIR_COUNT_ATTRIB, 5);



[phoenix] branch master updated: PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.

2019-06-29 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 31daa09  PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 
10s for tests.
31daa09 is described below

commit 31daa0917c534cbcfb160ec212d5fa839f7df584
Author: Lars Hofhansl 
AuthorDate: Sat Jun 29 22:37:55 2019 -0700

PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
---
 phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 160660e..383ca05 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -622,7 +622,7 @@ public abstract class BaseTest {
 conf.setInt("hbase.assignment.zkevent.workers", 5);
 conf.setInt("hbase.assignment.threads.max", 5);
 conf.setInt("hbase.catalogjanitor.interval", 5000);
-conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 1000);
+conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
 conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
 conf.setInt(NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY, 1);
 conf.setInt(GLOBAL_INDEX_ROW_REPAIR_COUNT_ATTRIB, 5);



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.

2019-06-29 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 3a86c2d  PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 
10s for tests.
3a86c2d is described below

commit 3a86c2d57d4a47d5158b8cf437574b6cadf64670
Author: Lars Hofhansl 
AuthorDate: Sat Jun 29 22:37:05 2019 -0700

PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
---
 phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index dd89f07..e852c16 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -623,7 +623,7 @@ public abstract class BaseTest {
 conf.setInt("hbase.assignment.zkevent.workers", 5);
 conf.setInt("hbase.assignment.threads.max", 5);
 conf.setInt("hbase.catalogjanitor.interval", 5000);
-conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 1000);
+conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
 conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
 conf.setInt(NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY, 1);
 conf.setInt(GLOBAL_INDEX_ROW_REPAIR_COUNT_ATTRIB, 5);



Build failed in Jenkins: Phoenix-4.x-HBase-1.4 #202

2019-06-29 Thread Apache Jenkins Server
See 


Changes:

[kadir] PHOENIX-5373 GlobalIndexChecker should treat the rows created by the

--
[...truncated 105.40 KB...]
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.003 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.993 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.579 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.228 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.01 s - 
in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.396 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.876 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.532 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.892 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.92 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 201.857 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 219.277 
s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 387.024 
s - in org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running 
org.apache.phoenix.end2end.OrderByWithServerClientSpoolingDisabledIT
[INFO] Running org.apache.phoenix.end2end.OrderByWithServerMemoryLimitIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.497 s 
- in org.apache.phoenix.end2end.OrderByWithServerMemoryLimitIT
[INFO] Running org.apache.phoenix.end2end.PartialResultServerConfigurationIT
[INFO] Running org.apache.phoenix.end2end.OrderByWithSpillingIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 187.216 
s - in org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Running org.apache.phoenix.end2end.PermissionNSDisabledIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.739 s 
- in org.apache.phoenix.end2end.PartialResultServerConfigurationIT
[INFO] Running org.apache.phoenix.end2end.PermissionNSEnabledIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 554.757 
s - in org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 328.748 
s - in org.apache.phoenix.end2end.IndexToolIT
[INFO] Running org.apache.phoenix.end2end.PermissionsCacheIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 239.176 
s - in 

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5373 GlobalIndexChecker should treat the rows created by the previous design as unverified

2019-06-29 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 82e58e3  PHOENIX-5373 GlobalIndexChecker should treat the rows created 
by the previous design as unverified
82e58e3 is described below

commit 82e58e3492340a8ee9bfa8a84a60df1f2b4ecbaf
Author: Kadir 
AuthorDate: Tue Jun 25 18:39:21 2019 -0700

PHOENIX-5373 GlobalIndexChecker should treat the rows created by the 
previous design as unverified
---
 .../apache/phoenix/end2end/CsvBulkLoadToolIT.java  |  8 +-
 .../apache/phoenix/end2end/IndexExtendedIT.java| 12 -
 .../org/apache/phoenix/end2end/IndexToolIT.java| 12 ++---
 .../phoenix/end2end/RegexBulkLoadToolIT.java   |  8 --
 .../phoenix/trace/PhoenixTracingEndToEndIT.java|  2 +-
 .../org/apache/phoenix/util/IndexScrutinyIT.java   |  5 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 29 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  6 +
 .../apache/phoenix/query/QueryServicesOptions.java |  2 +-
 9 files changed, 58 insertions(+), 26 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 7e4226d..d570c1a 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -29,14 +29,18 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;
+import java.util.Map;
 
+import com.google.common.collect.Maps;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.mapred.FileAlreadyExistsException;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.CsvBulkLoadTool;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.util.DateUtil;
@@ -53,7 +57,9 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-setUpTestDriver(ReadOnlyProps.EMPTY_PROPS);
+Map clientProps = Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB, 
Boolean.FALSE.toString());
+setUpTestDriver(ReadOnlyProps.EMPTY_PROPS, new 
ReadOnlyProps(clientProps.entrySet().iterator()));
 zkQuorum = TestUtil.LOCALHOST + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR 
+ getUtility().getZkCluster().getClientPort();
 conn = DriverManager.getConnection(getUrl());
 }
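
The archive rendering strips generic type parameters from the quoted code, so as a self-contained reading of the doSetup() change above: clientProps is presumably a Map<String, String>, wrapped in ReadOnlyProps and handed to the two-argument setUpTestDriver(serverProps, clientProps). A sketch of just that helper follows; the class and method names are hypothetical, while the constant and the ReadOnlyProps constructor are the ones used in the diff:

import java.util.Map;

import com.google.common.collect.Maps;
import org.apache.phoenix.query.QueryServices;
import org.apache.phoenix.util.ReadOnlyProps;

public class DisableIndexRegionObserverSketch {
    // Client-side properties that turn off the new IndexRegionObserver write
    // path, so these ITs keep exercising the previous indexing design.
    static ReadOnlyProps clientProps() {
        Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(1);
        clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB,
                Boolean.FALSE.toString());
        return new ReadOnlyProps(clientProps.entrySet().iterator());
    }
}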
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 624f3e3..570decc 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -88,16 +88,16 @@ public class IndexExtendedIT extends BaseTest {
 
 @Parameters(name="mutable = {0} , localIndex = {1}, directApi = {2}, 
useSnapshot = {3}")
 public static Collection data() {
-List list = Lists.newArrayListWithExpectedSize(16);
+List list = Lists.newArrayListWithExpectedSize(10);
 boolean[] Booleans = new boolean[]{false, true};
 for (boolean mutable : Booleans ) {
-for (boolean localIndex : Booleans ) {
-for (boolean directApi : Booleans ) {
-for (boolean useSnapshot : Booleans ) {
-list.add(new Boolean[]{ mutable, localIndex, 
directApi, useSnapshot});
-}
+for (boolean directApi : Booleans ) {
+for (boolean useSnapshot : Booleans) {
+list.add(new Boolean[]{mutable, true, directApi, 
useSnapshot});
 }
 }
+// Due to PHOENIX-5375 and PHOENIX-5376, the useSnapshot and bulk 
load options are ignored for global indexes
+list.add(new Boolean[]{ mutable, false, true, false});
 }
 return list;
 }
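
The reworked data() above shrinks the old 2x2x2x2 matrix (16 runs): local indexes (localIndex = true) still get every directApi/useSnapshot combination, while global indexes keep only directApi = true, useSnapshot = false until PHOENIX-5375/PHOENIX-5376 are addressed, i.e. 2 x (4 + 1) = 10 combinations, matching the new newArrayListWithExpectedSize(10) hint. A standalone sketch of the same enumeration (plain Java, names illustrative):

import java.util.ArrayList;
import java.util.List;

public class IndexExtendedParamsSketch {
    public static void main(String[] args) {
        List<Boolean[]> combos = new ArrayList<>();
        boolean[] booleans = {false, true};
        for (boolean mutable : booleans) {
            // Local indexes: all directApi x useSnapshot combinations.
            for (boolean directApi : booleans) {
                for (boolean useSnapshot : booleans) {
                    combos.add(new Boolean[]{mutable, true, directApi, useSnapshot});
                }
            }
            // Global indexes: single combination, snapshot/bulk-load skipped.
            combos.add(new Boolean[]{mutable, false, true, false});
        }
        System.out.println(combos.size()); // prints 10
    }
}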
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index aaf9509..9b248e1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -128,11 +128,17 @@ public class IndexToolIT extends ParallelStatsEnabledIT {

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5373 GlobalIndexChecker should treat the rows created by the previous design as unverified

2019-06-29 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 875bd25  PHOENIX-5373 GlobalIndexChecker should treat the rows created 
by the previous design as unverified
875bd25 is described below

commit 875bd253b9ecf60be6218bb232b9372624a5935d
Author: Kadir 
AuthorDate: Tue Jun 25 18:39:21 2019 -0700

PHOENIX-5373 GlobalIndexChecker should treat the rows created by the 
previous design as unverified
---
 .../apache/phoenix/end2end/CsvBulkLoadToolIT.java  |  8 +-
 .../apache/phoenix/end2end/IndexExtendedIT.java| 12 -
 .../org/apache/phoenix/end2end/IndexToolIT.java| 12 ++---
 .../phoenix/end2end/RegexBulkLoadToolIT.java   |  8 --
 .../phoenix/trace/PhoenixTracingEndToEndIT.java|  2 +-
 .../org/apache/phoenix/util/IndexScrutinyIT.java   |  5 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 29 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  6 +
 .../apache/phoenix/query/QueryServicesOptions.java |  2 +-
 9 files changed, 58 insertions(+), 26 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 7e4226d..d570c1a 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -29,14 +29,18 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;
+import java.util.Map;
 
+import com.google.common.collect.Maps;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.mapred.FileAlreadyExistsException;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.CsvBulkLoadTool;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.util.DateUtil;
@@ -53,7 +57,9 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-setUpTestDriver(ReadOnlyProps.EMPTY_PROPS);
+Map clientProps = Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB, 
Boolean.FALSE.toString());
+setUpTestDriver(ReadOnlyProps.EMPTY_PROPS, new 
ReadOnlyProps(clientProps.entrySet().iterator()));
 zkQuorum = TestUtil.LOCALHOST + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR 
+ getUtility().getZkCluster().getClientPort();
 conn = DriverManager.getConnection(getUrl());
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 624f3e3..570decc 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -88,16 +88,16 @@ public class IndexExtendedIT extends BaseTest {
 
 @Parameters(name="mutable = {0} , localIndex = {1}, directApi = {2}, 
useSnapshot = {3}")
 public static Collection data() {
-List list = Lists.newArrayListWithExpectedSize(16);
+List list = Lists.newArrayListWithExpectedSize(10);
 boolean[] Booleans = new boolean[]{false, true};
 for (boolean mutable : Booleans ) {
-for (boolean localIndex : Booleans ) {
-for (boolean directApi : Booleans ) {
-for (boolean useSnapshot : Booleans ) {
-list.add(new Boolean[]{ mutable, localIndex, 
directApi, useSnapshot});
-}
+for (boolean directApi : Booleans ) {
+for (boolean useSnapshot : Booleans) {
+list.add(new Boolean[]{mutable, true, directApi, 
useSnapshot});
 }
 }
+// Due to PHOENIX-5375 and PHOENIX-5376, the useSnapshot and bulk 
load options are ignored for global indexes
+list.add(new Boolean[]{ mutable, false, true, false});
 }
 return list;
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index aaf9509..9b248e1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -128,11 +128,17 @@ public class IndexToolIT extends ParallelStatsEnabledIT {

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #457

2019-06-29 Thread Apache Jenkins Server
See 


Changes:

[kadir] PHOENIX-5373 GlobalIndexChecker should treat the rows created by the

--
[...truncated 111.11 KB...]
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 198.254 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 219.776 
s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.OrderByWithServerClientSpoolingDisabledIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 185.948 
s - in org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 551.321 
s - in org.apache.phoenix.end2end.CostBasedDecisionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
177.051 s - in 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.OrderByWithServerMemoryLimitIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.495 s 
- in org.apache.phoenix.end2end.OrderByWithServerMemoryLimitIT
[INFO] Running org.apache.phoenix.end2end.PartialResultServerConfigurationIT
[INFO] Running org.apache.phoenix.end2end.OrderByWithSpillingIT
[INFO] Running org.apache.phoenix.end2end.PermissionNSDisabledIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 319.21 
s - in org.apache.phoenix.end2end.IndexToolIT
[INFO] Running org.apache.phoenix.end2end.PermissionNSEnabledIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.455 s 
- in org.apache.phoenix.end2end.PartialResultServerConfigurationIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 231.289 
s - in org.apache.phoenix.end2end.OrderByWithServerClientSpoolingDisabledIT
[INFO] Running org.apache.phoenix.end2end.PermissionsCacheIT
[INFO] Running org.apache.phoenix.end2end.PhoenixDriverIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 447.02 s 
- in org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[WARNING] Tests run: 56, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 
358.193 s - in 
org.apache.phoenix.end2end.NonColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.QueryLoggerIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.959 s 
- in org.apache.phoenix.end2end.PhoenixDriverIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.258 s 
- in org.apache.phoenix.end2end.QueryLoggerIT
[INFO] Running org.apache.phoenix.end2end.QueryTimeoutIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 257.821 
s - in org.apache.phoenix.end2end.OrderByWithSpillingIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.422 s 
- in org.apache.phoenix.end2end.QueryTimeoutIT
[INFO] Running org.apache.phoenix.end2end.QueryWithLimitIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.293 s 
- in org.apache.phoenix.end2end.QueryWithLimitIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 302.343 
s - in org.apache.phoenix.end2end.PermissionNSDisabledIT
[INFO] Running org.apache.phoenix.end2end.RebuildIndexConnectionPropsIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.375 s 
- in org.apache.phoenix.end2end.RebuildIndexConnectionPropsIT
[INFO] Running org.apache.phoenix.end2end.RegexBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.RenewLeaseIT
[INFO] Running org.apache.phoenix.end2end.SequencePointInTimeIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.094 s 
- in org.apache.phoenix.end2end.SequencePointInTimeIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 307.514 
s - in org.apache.phoenix.end2end.PermissionNSEnabledIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.514 s 
- in org.apache.phoenix.end2end.RenewLeaseIT
[INFO] Running org.apache.phoenix.end2end.SystemCatalogCreationOnConnectionIT
[INFO] Running org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 310.073 
s - in org.apache.phoenix.end2end.PermissionsCacheIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.04 s 
- in org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Running org.apache.phoenix.end2end.SplitIT
[WARNING] Tests run: 10, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
77.488 s - in org.apache.phoenix.end2end.RegexBulkLoadToolIT
[INFO] Running 

Build failed in Jenkins: Phoenix | Master #2440

2019-06-29 Thread Apache Jenkins Server
See 


Changes:

[kadir] PHOENIX-5373 GlobalIndexChecker should treat the rows created by the

--
[...truncated 112.34 KB...]
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.397 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.897 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 267.494 
s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 357.821 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 868.08 
s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 238.504 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 442.666 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 66, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 550.574 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   
SaltedIndexIT.testMutableTableIndexMaintanenceSalted:87->testMutableTableIndexMaintanence:128
[ERROR]   
SaltedIndexIT.testMutableTableIndexMaintanenceUnsalted:98->testMutableTableIndexMaintanence:128
[ERROR] Errors: 
[ERROR]   
SortMergeJoinGlobalIndexIT>ParallelStatsDisabledIT.doSetup:60->BaseTest.setUpTestDriver:515->BaseTest.setUpTestDriver:520->BaseTest.checkClusterInitialized:434->BaseTest.setUpTestCluster:448->BaseTest.initMiniCluster:549
 » Runtime
[INFO] 
[ERROR] Tests run: 3644, Failures: 2, Errors: 1, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.003 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.129 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.091 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.957 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.994 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.514 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 125.973 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.657 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.673 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.14 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running 

Build failed in Jenkins: Phoenix-4.x-HBase-1.5 #67

2019-06-29 Thread Apache Jenkins Server
See 


Changes:

[kadir] PHOENIX-5373 GlobalIndexChecker should treat the rows created by the

--
[...truncated 110.22 KB...]
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.347 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.881 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 164.704 
s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 229.565 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 169.871 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 569.4 s 
- in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 343.148 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 66, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 403.973 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   
SaltedIndexIT.testMutableTableIndexMaintanenceSalted:87->testMutableTableIndexMaintanence:128
[ERROR]   
SaltedIndexIT.testMutableTableIndexMaintanenceUnsalted:98->testMutableTableIndexMaintanence:128
[INFO] 
[ERROR] Tests run: 3672, Failures: 2, Errors: 0, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.001 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.473 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.256 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.334 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.534 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.199 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.279 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.435 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.606 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.361 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, 

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5373 GlobalIndexChecker should treat the rows created by the previous design as unverified

2019-06-29 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 59776d8  PHOENIX-5373 GlobalIndexChecker should treat the rows created 
by the previous design as unverified
59776d8 is described below

commit 59776d81a965af65445584fe50abd2deb0489b70
Author: Kadir 
AuthorDate: Tue Jun 25 18:39:21 2019 -0700

PHOENIX-5373 GlobalIndexChecker should treat the rows created by the 
previous design as unverified
---
 .../apache/phoenix/end2end/CsvBulkLoadToolIT.java  |  8 +-
 .../apache/phoenix/end2end/IndexExtendedIT.java| 12 -
 .../org/apache/phoenix/end2end/IndexToolIT.java| 13 +++---
 .../phoenix/end2end/RegexBulkLoadToolIT.java   |  8 --
 .../org/apache/phoenix/util/IndexScrutinyIT.java   |  5 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 29 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  6 +
 .../apache/phoenix/query/QueryServicesOptions.java |  2 +-
 8 files changed, 58 insertions(+), 25 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 699b469..f91956c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -29,14 +29,18 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;
+import java.util.Map;
 
+import com.google.common.collect.Maps;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.mapred.FileAlreadyExistsException;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.CsvBulkLoadTool;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.util.DateUtil;
@@ -53,7 +57,9 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-setUpTestDriver(ReadOnlyProps.EMPTY_PROPS);
+Map clientProps = Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB, 
Boolean.FALSE.toString());
+setUpTestDriver(ReadOnlyProps.EMPTY_PROPS, new 
ReadOnlyProps(clientProps.entrySet().iterator()));
 zkQuorum = TestUtil.LOCALHOST + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR 
+ getUtility().getZkCluster().getClientPort();
 conn = DriverManager.getConnection(getUrl());
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 6af4d78..052a70e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -88,16 +88,16 @@ public class IndexExtendedIT extends BaseTest {
 
 @Parameters(name="mutable = {0} , localIndex = {1}, directApi = {2}, 
useSnapshot = {3}")
 public static Collection data() {
-List list = Lists.newArrayListWithExpectedSize(16);
+List list = Lists.newArrayListWithExpectedSize(10);
 boolean[] Booleans = new boolean[]{false, true};
 for (boolean mutable : Booleans ) {
-for (boolean localIndex : Booleans ) {
-for (boolean directApi : Booleans ) {
-for (boolean useSnapshot : Booleans ) {
-list.add(new Boolean[]{ mutable, localIndex, 
directApi, useSnapshot});
-}
+for (boolean directApi : Booleans ) {
+for (boolean useSnapshot : Booleans) {
+list.add(new Boolean[]{mutable, true, directApi, 
useSnapshot});
 }
 }
+// Due to PHOENIX-5375 and PHOENIX-5376, the useSnapshot and bulk 
load options are ignored for global indexes
+list.add(new Boolean[]{ mutable, false, true, false});
 }
 return list;
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index c4bdf47..4efb15b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -137,9 +137,16 @@ public class IndexToolIT extends 
BaseUniqueNamesOwnClusterIT {
 || 

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5373 GlobalIndexChecker should treat the rows created by the previous design as unverified

2019-06-29 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 7899306  PHOENIX-5373 GlobalIndexChecker should treat the rows created 
by the previous design as unverified
7899306 is described below

commit 789930617cdfe3f435b3331817cc4996d27a9368
Author: Kadir 
AuthorDate: Tue Jun 25 18:39:21 2019 -0700

PHOENIX-5373 GlobalIndexChecker should treat the rows created by the 
previous design as unverified
---
 .../apache/phoenix/end2end/CsvBulkLoadToolIT.java  |  8 +-
 .../apache/phoenix/end2end/IndexExtendedIT.java| 12 -
 .../org/apache/phoenix/end2end/IndexToolIT.java| 13 +++---
 .../phoenix/end2end/RegexBulkLoadToolIT.java   |  8 --
 .../org/apache/phoenix/util/IndexScrutinyIT.java   |  5 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 29 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  6 +
 .../apache/phoenix/query/QueryServicesOptions.java |  2 +-
 8 files changed, 58 insertions(+), 25 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 699b469..f91956c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -29,14 +29,18 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;
+import java.util.Map;
 
+import com.google.common.collect.Maps;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.mapred.FileAlreadyExistsException;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.CsvBulkLoadTool;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.util.DateUtil;
@@ -53,7 +57,9 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-setUpTestDriver(ReadOnlyProps.EMPTY_PROPS);
+Map clientProps = Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB, 
Boolean.FALSE.toString());
+setUpTestDriver(ReadOnlyProps.EMPTY_PROPS, new 
ReadOnlyProps(clientProps.entrySet().iterator()));
 zkQuorum = TestUtil.LOCALHOST + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR 
+ getUtility().getZkCluster().getClientPort();
 conn = DriverManager.getConnection(getUrl());
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 6af4d78..052a70e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -88,16 +88,16 @@ public class IndexExtendedIT extends BaseTest {
 
 @Parameters(name="mutable = {0} , localIndex = {1}, directApi = {2}, 
useSnapshot = {3}")
 public static Collection data() {
-List list = Lists.newArrayListWithExpectedSize(16);
+List list = Lists.newArrayListWithExpectedSize(10);
 boolean[] Booleans = new boolean[]{false, true};
 for (boolean mutable : Booleans ) {
-for (boolean localIndex : Booleans ) {
-for (boolean directApi : Booleans ) {
-for (boolean useSnapshot : Booleans ) {
-list.add(new Boolean[]{ mutable, localIndex, 
directApi, useSnapshot});
-}
+for (boolean directApi : Booleans ) {
+for (boolean useSnapshot : Booleans) {
+list.add(new Boolean[]{mutable, true, directApi, 
useSnapshot});
 }
 }
+// Due to PHOENIX-5375 and PHOENIX-5376, the useSnapshot and bulk 
load options are ignored for global indexes
+list.add(new Boolean[]{ mutable, false, true, false});
 }
 return list;
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index c4bdf47..4efb15b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -137,9 +137,16 @@ public class IndexToolIT extends 
BaseUniqueNamesOwnClusterIT {
 || 

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5373 GlobalIndexChecker should treat the rows created by the previous design as unverified

2019-06-29 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 8a92014  PHOENIX-5373 GlobalIndexChecker should treat the rows created 
by the previous design as unverified
8a92014 is described below

commit 8a920140f80b3e446c1089ece2c7e23de7628189
Author: Kadir 
AuthorDate: Tue Jun 25 18:39:21 2019 -0700

PHOENIX-5373 GlobalIndexChecker should treat the rows created by the 
previous design as unverified
---
 .../apache/phoenix/end2end/CsvBulkLoadToolIT.java  |  8 +-
 .../apache/phoenix/end2end/IndexExtendedIT.java| 12 -
 .../org/apache/phoenix/end2end/IndexToolIT.java| 13 +++---
 .../phoenix/end2end/RegexBulkLoadToolIT.java   |  8 --
 .../org/apache/phoenix/util/IndexScrutinyIT.java   |  5 ++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 29 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  6 +
 .../apache/phoenix/query/QueryServicesOptions.java |  2 +-
 8 files changed, 58 insertions(+), 25 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 699b469..f91956c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -29,14 +29,18 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;
+import java.util.Map;
 
+import com.google.common.collect.Maps;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.mapred.FileAlreadyExistsException;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.CsvBulkLoadTool;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.util.DateUtil;
@@ -53,7 +57,9 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-setUpTestDriver(ReadOnlyProps.EMPTY_PROPS);
+Map clientProps = Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB, 
Boolean.FALSE.toString());
+setUpTestDriver(ReadOnlyProps.EMPTY_PROPS, new 
ReadOnlyProps(clientProps.entrySet().iterator()));
 zkQuorum = TestUtil.LOCALHOST + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR 
+ getUtility().getZkCluster().getClientPort();
 conn = DriverManager.getConnection(getUrl());
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 6af4d78..052a70e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -88,16 +88,16 @@ public class IndexExtendedIT extends BaseTest {
 
 @Parameters(name="mutable = {0} , localIndex = {1}, directApi = {2}, 
useSnapshot = {3}")
 public static Collection data() {
-List list = Lists.newArrayListWithExpectedSize(16);
+List list = Lists.newArrayListWithExpectedSize(10);
 boolean[] Booleans = new boolean[]{false, true};
 for (boolean mutable : Booleans ) {
-for (boolean localIndex : Booleans ) {
-for (boolean directApi : Booleans ) {
-for (boolean useSnapshot : Booleans ) {
-list.add(new Boolean[]{ mutable, localIndex, 
directApi, useSnapshot});
-}
+for (boolean directApi : Booleans ) {
+for (boolean useSnapshot : Booleans) {
+list.add(new Boolean[]{mutable, true, directApi, 
useSnapshot});
 }
 }
+// Due to PHOENIX-5375 and PHOENIX-5376, the useSnapshot and bulk 
load options are ignored for global indexes
+list.add(new Boolean[]{ mutable, false, true, false});
 }
 return list;
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index f1728a0..e9a2291 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -138,9 +138,16 @@ public class IndexToolIT extends 
BaseUniqueNamesOwnClusterIT {
 || 

[phoenix] branch master updated: PHOENIX-5373 GlobalIndexChecker should treat the rows created by the previous design as unverified

2019-06-29 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new daffc17  PHOENIX-5373 GlobalIndexChecker should treat the rows created 
by the previous design as unverified
daffc17 is described below

commit daffc17f46c4a7425225a676c4e321be43280e30
Author: Kadir 
AuthorDate: Tue Jun 25 18:39:21 2019 -0700

PHOENIX-5373 GlobalIndexChecker should treat the rows created by the 
previous design as unverified
---
 .../apache/phoenix/end2end/CsvBulkLoadToolIT.java  |  8 +++-
 .../apache/phoenix/end2end/IndexExtendedIT.java| 12 +--
 .../org/apache/phoenix/end2end/IndexToolIT.java| 13 +---
 .../phoenix/end2end/RegexBulkLoadToolIT.java   |  8 ++--
 .../org/apache/phoenix/util/IndexScrutinyIT.java   |  5 +++--
 .../apache/phoenix/index/GlobalIndexChecker.java   | 24 +++---
 .../phoenix/query/ConnectionQueryServicesImpl.java |  3 +++
 .../apache/phoenix/query/QueryServicesOptions.java |  2 +-
 8 files changed, 48 insertions(+), 27 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
index 699b469..f91956c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
@@ -29,14 +29,18 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;
+import java.util.Map;
 
+import com.google.common.collect.Maps;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.mapred.FileAlreadyExistsException;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.CsvBulkLoadTool;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.util.DateUtil;
@@ -53,7 +57,9 @@ public class CsvBulkLoadToolIT extends BaseOwnClusterIT {
 
 @BeforeClass
 public static void doSetup() throws Exception {
-setUpTestDriver(ReadOnlyProps.EMPTY_PROPS);
+Map clientProps = Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.INDEX_REGION_OBSERVER_ENABLED_ATTRIB, 
Boolean.FALSE.toString());
+setUpTestDriver(ReadOnlyProps.EMPTY_PROPS, new 
ReadOnlyProps(clientProps.entrySet().iterator()));
 zkQuorum = TestUtil.LOCALHOST + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR 
+ getUtility().getZkCluster().getClientPort();
 conn = DriverManager.getConnection(getUrl());
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 6af4d78..052a70e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -88,16 +88,16 @@ public class IndexExtendedIT extends BaseTest {
 
 @Parameters(name="mutable = {0} , localIndex = {1}, directApi = {2}, 
useSnapshot = {3}")
 public static Collection data() {
-List list = Lists.newArrayListWithExpectedSize(16);
+List list = Lists.newArrayListWithExpectedSize(10);
 boolean[] Booleans = new boolean[]{false, true};
 for (boolean mutable : Booleans ) {
-for (boolean localIndex : Booleans ) {
-for (boolean directApi : Booleans ) {
-for (boolean useSnapshot : Booleans ) {
-list.add(new Boolean[]{ mutable, localIndex, 
directApi, useSnapshot});
-}
+for (boolean directApi : Booleans ) {
+for (boolean useSnapshot : Booleans) {
+list.add(new Boolean[]{mutable, true, directApi, 
useSnapshot});
 }
 }
+// Due to PHOENIX-5375 and PHOENIX-5376, the useSnapshot and bulk 
load options are ignored for global indexes
+list.add(new Boolean[]{ mutable, false, true, false});
 }
 return list;
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 868ba35..bdfa20f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -134,9 +134,16 @@ public class IndexToolIT extends 
BaseUniqueNamesOwnClusterIT {
 || 

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1043

2019-06-29 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins851599371818732947.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386407
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98957636 kB
MemFree:27838932 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  954M  8.6G  10% /run
/dev/sda3   3.6T  417G  3.0T  12% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  236M  213M  53% /boot
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
tmpfs   9.5G 0  9.5G   0% /run/user/1000
/dev/loop11  57M   57M 0 100% /snap/snapcraft/3022
/dev/loop4   57M   57M 0 100% /snap/snapcraft/3059
/dev/loop1   55M   55M 0 100% /snap/lxd/10934
/dev/loop10  55M   55M 0 100% /snap/lxd/10972
/dev/loop7   89M   89M 0 100% /snap/core/7169
/dev/loop8   89M   89M 0 100% /snap/core/7270
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 
4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its 
dependencies could not be resolved: Failed to read artifact descriptor for 
org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact 
org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central 
(https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version 
-> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
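
"Received fatal alert: protocol_version" from repo.maven.apache.org usually means the JVM running this old Maven only offered TLS versions below 1.2, which Maven Central stopped accepting in 2018; the common remedies are building on a newer JDK or passing -Dhttps.protocols=TLSv1.2 to the JVM. A small, Phoenix-independent sketch for checking which protocols the build JVM offers by default (class name illustrative):

import javax.net.ssl.SSLContext;

public class TlsProtocolCheck {
    public static void main(String[] args) throws Exception {
        // Protocol versions the default JSSE context will offer; if TLSv1.2
        // is absent, HTTPS downloads from Maven Central fail as above.
        String[] protocols =
                SSLContext.getDefault().getDefaultSSLParameters().getProtocols();
        System.out.println(String.join(", ", protocols));
    }
}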


Build failed in Jenkins: Phoenix-4.x-HBase-1.4 #201

2019-06-29 Thread Apache Jenkins Server
See 


Changes:

[larsh] PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.

--
[...truncated 1.11 MB...]
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.973 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 279.085 
s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 396.485 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 888.573 
s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 261.924 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 478.913 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 66, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 558.016 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 3672, Failures: 0, Errors: 0, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.002 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.411 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.77 s - 
in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.303 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.564 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.266 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.381 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.894 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.177 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.39 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 198.696 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 218.729 
s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 363.189 
s - in org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running 
org.apache.phoenix.end2end.OrderByWithServerClientSpoolingDisabledIT
[INFO] Running