Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #442

2019-06-20 Thread Apache Jenkins Server


Changes:

[chinmayskulkarni] PHOENIX-5313: All mappers grab all RegionLocations from .META

--
[...truncated 526.75 KB...]
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.005 s <<< FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
[ERROR] org.apache.phoenix.end2end.index.MutableIndexReplicationIT  Time elapsed: 0.005 s  <<< ERROR!
java.io.IOException: Shutting down
    at org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setupConfigsAndStartCluster(MutableIndexReplicationIT.java:157)
    at org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setUpBeforeClass(MutableIndexReplicationIT.java:108)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms seconds
    at org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setupConfigsAndStartCluster(MutableIndexReplicationIT.java:157)
    at org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setUpBeforeClass(MutableIndexReplicationIT.java:108)
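The "ms seconds" wording comes from HBase's JVMClusterUtil mini-cluster startup check, which concatenates the configured timeout with a literal "ms seconds"; the failure means the mini-cluster master never finished initializing in time, so the whole class errored out in setUpBeforeClass before any test ran.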

[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 340.663 s - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
[INFO] Running org.apache.phoenix.end2end.index.PhoenixMRJobSubmitterIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.635 s - in org.apache.phoenix.end2end.index.PhoenixMRJobSubmitterIT
[WARNING] Tests run: 45, Failures: 0, Errors: 0, Skipped: 18, Time elapsed: 501.595 s - in org.apache.phoenix.end2end.index.ImmutableIndexIT
[INFO] Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinCacheIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.494 s - in org.apache.phoenix.end2end.join.HashJoinCacheIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinNoSpoolingIT
[INFO] Running org.apache.phoenix.execute.PartialCommitIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.832 s - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
[INFO] Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
[INFO] Running org.apache.phoenix.execute.UpsertSelectOverlappingBatchesIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.935 s - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.502 s - in org.apache.phoenix.execute.UpsertSelectOverlappingBatchesIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.205 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
[ERROR] Tests run: 12, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 158.112 s <<< FAILURE! - in org.apache.phoenix.execute.PartialCommitIT
[ERROR] testOrderOfMutationsIsPredicatable[PartialCommitIT_transactionProvider=OMID](org.apache.phoenix.execute.PartialCommitIT)  Time elapsed: 12.274 s  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: java.lang.NullPointerException
    at org.apache.phoenix.execute.PartialCommitIT.testPartialCommit(PartialCommitIT.java:258)
    at org.apache.phoenix.execute.PartialCommitIT.testOrderOfMutationsIsPredicatable(PartialCommitIT.java:200)
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
    at org.apache.phoenix.execute.PartialCommitIT.testPartialCommit(PartialCommitIT.java:258)
    at org.apache.phoenix.execute.PartialCommitIT.testOrderOfMutationsIsPredicatable(PartialCommitIT.java:200)
Caused by: java.lang.NullPointerException

[ERROR] testNoFailure[PartialCommitIT_transactionProvider=OMID](org.apache.phoenix.execute.PartialCommitIT)  Time elapsed: 11.281 s  <<< ERROR!
java.lang.NullPointerException
    at org.apache.phoenix.execute.PartialCommitIT.testPartialCommit(PartialCommitIT.java:258)
    at org.apache.phoenix.execute.PartialCommitIT.testNoFailure(PartialCommitIT.java:153)

[ERROR] testDeleteFailure[PartialCommitIT_transactionProvider=OMID](org.apache.phoenix.execute.PartialCommitIT)  Time elapsed: 13.266 s  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: java.lang.NullPointerException
    at org.apache.phoenix.execute.PartialCommitIT.testPartialCommit(PartialCommitIT.java:258)
    at org.apache.phoenix.execute.PartialCommitIT.testDeleteFailure(PartialCommitIT.java:185)
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
    at org.apache.phoenix.execute.PartialCommitIT.testPartialCommit(PartialCommitIT.java:258)
    at org.apache.phoenix.execute.PartialCommitIT.testDeleteFailure(PartialCommitIT.java:185)
Caused by: java.lang.NullPointerException

[ERROR] 

Jenkins build is back to normal : Phoenix-4.x-HBase-1.4 #189

2019-06-20 Thread Apache Jenkins Server




Jenkins build is back to normal : Phoenix | Master #2422

2019-06-20 Thread Apache Jenkins Server




Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-06-20 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[chinmayskulkarni] PHOENIX-5313: All mappers grab all RegionLocations from .META



Build times for last couple of runs (latest build time is the right-most). Legend: blue = normal, red = test failure, gray = timeout.


[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new ddb40b1  PHOENIX-5313: All mappers grab all RegionLocations from .META
ddb40b1 is described below

commit ddb40b1608dc4975fd0dca98a2343062bca537c7
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
@@ -25,20 +25,30 @@ import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.Reducer;
 import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.iterate.TestingMapReduceParallelScanGrouper;
 import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
+import org.apache.phoenix.mapreduce.PhoenixTestingInputFormat;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PhoenixRuntime;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.sql.*;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 /**
  * Test that our MapReduce basic tools work as expected
@@ -48,28 +58,37 @@ public class MapReduceIT extends ParallelStatsDisabledIT {
     private static final String STOCK_NAME = "STOCK_NAME";
     private static final String RECORDING_YEAR = "RECORDING_YEAR";
     private static final String RECORDINGS_QUARTER = "RECORDINGS_QUARTER";
-    private  String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT EXISTS %s ( " +
+
+    // We pre-split the table to ensure that we have multiple mappers.
+    // This is used to test scenarios with more than 1 mapper
+    private static final String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT EXISTS %s ( " +
             " STOCK_NAME VARCHAR NOT NULL , RECORDING_YEAR  INTEGER NOT  NULL, RECORDINGS_QUARTER " +
-            " DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, RECORDING_YEAR ))";
+            " DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, RECORDING_YEAR )) "
+            + "SPLIT ON ('AA')";
 
     private static final String CREATE_STOCK_VIEW = "CREATE VIEW IF NOT EXISTS %s (v1 VARCHAR) AS "
             + " SELECT * FROM %s WHERE RECORDING_YEAR = 2008";
 
     private static final String MAX_RECORDING = "MAX_RECORDING";
-    private  String CREATE_STOCK_STATS_TABLE =
+    private static final String CREATE_STOCK_STATS_TABLE =
             "CREATE TABLE IF NOT EXISTS %s(STOCK_NAME VARCHAR NOT NULL , "
                     + " MAX_RECORDING DOUBLE CONSTRAINT pk PRIMARY KEY (STOCK_NAME ))";
 
 
-    private String UPSERT = "UPSERT into %s values (?, ?, ?)";
+    private static final String UPSERT = "UPSERT into %s values (?, ?, ?)";
 
-    private String TENANT_ID = "1234567890";
+    private static final String TENANT_ID = "1234567890";
 
     @Before
     public void setupTables() throws Exception {
 
     }
 
+    @After
+    public void clearCountersForScanGrouper() {
+        TestingMapReduceParallelScanGrouper.clearNumCallsToGetRegionBoundaries();
+    }
+
     @Test
     public void testNoConditionsOnSelect() throws Exception {
         try (Connection conn = DriverManager.getConnection(getUrl())) {
@@ -93,7 +112,8 @@ public class MapReduceIT extends ParallelStatsDisabledIT {
 
     }
 
-    private void createAndTestJob(Connection conn, String s, double v, String tenantId) throws SQLException, IOException, InterruptedException, 
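
For context on the code paths this change touches: PhoenixMapReduceUtil.setInput wires PhoenixInputFormat into a plain MapReduce job, and per the JIRA title the input-format/record-reader path previously had every mapper fetch all RegionLocations from .META. Below is a minimal, hypothetical job setup modeled on MapReduceIT; the STOCK table, StockWritable, and StockMapper are illustrative assumptions, not the actual test code.

import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

public class StockJobSketch {

    // Minimal DBWritable holding one row read from the (hypothetical) STOCK table.
    public static class StockWritable implements DBWritable {
        String stockName;
        int recordingYear;

        @Override
        public void readFields(ResultSet rs) throws SQLException {
            stockName = rs.getString("STOCK_NAME");
            recordingYear = rs.getInt("RECORDING_YEAR");
        }

        @Override
        public void write(PreparedStatement ps) throws SQLException {
            ps.setString(1, stockName);
            ps.setInt(2, recordingYear);
        }
    }

    // PhoenixInputFormat hands each mapper (NullWritable, T) pairs, where T is
    // the DBWritable class registered via setInput.
    public static class StockMapper
            extends Mapper<NullWritable, StockWritable, Text, DoubleWritable> {
        @Override
        protected void map(NullWritable key, StockWritable row, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text(row.stockName), new DoubleWritable(row.recordingYear));
        }
    }

    public static Job configure(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "phoenix-stock-sketch");
        // Registers PhoenixInputFormat and the query to run; input splits are
        // derived from the table's scans, so a pre-split table (as in the test)
        // yields more than one mapper.
        PhoenixMapReduceUtil.setInput(job, StockWritable.class, "STOCK",
                "SELECT STOCK_NAME, RECORDING_YEAR FROM STOCK");
        job.setMapperClass(StockMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DoubleWritable.class);
        job.setNumReduceTasks(0);
        return job;
    }
}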

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 704f938  PHOENIX-5313: All mappers grab all RegionLocations from .META
704f938 is described below

commit 704f9382036b6bfd48a555cfefa479f4b4f7f6cc
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
[diff identical to the 4.14-HBase-1.3 commit above]

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new edee198  PHOENIX-5313: All mappers grab all RegionLocations from .META
edee198 is described below

commit edee198824974a55e5dd31827a7749f8a98d937c
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
[diff identical to the 4.14-HBase-1.3 commit above]

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new cf8afd8  PHOENIX-5313: All mappers grab all RegionLocations from .META
cf8afd8 is described below

commit cf8afd8a733768ccbfc5b71d298c0fe15c826c35
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
[diff identical to the 4.14-HBase-1.3 commit above]

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1034

2019-06-20 Thread Apache Jenkins Server


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins9093750463792267227.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386407
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size   (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98957636 kB
MemFree:22940996 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  914M  8.6G  10% /run
/dev/sda3   3.6T  427G  3.0T  13% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  236M  213M  53% /boot
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
tmpfs   9.5G 0  9.5G   0% /run/user/1000
/dev/loop9   90M   90M 0 100% /snap/core/6673
/dev/loop8   90M   90M 0 100% /snap/core/6818
/dev/loop2   89M   89M 0 100% /snap/core/6964
/dev/loop11  57M   57M 0 100% /snap/snapcraft/3022
/dev/loop4   57M   57M 0 100% /snap/snapcraft/3059
/dev/loop7   55M   55M 0 100% /snap/lxd/10923
/dev/loop1   55M   55M 0 100% /snap/lxd/10934
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
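
The protocol_version alert above is a TLS handshake failure rather than a dependency problem: repo.maven.apache.org requires TLS 1.2, which JDK 7-era clients do not offer by default. Assuming an older JDK on the build node (not visible in this log), the usual fix is to run Maven with -Dhttps.protocols=TLSv1.2 or to move the job to a newer JDK.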


[phoenix] 02/02: PHOENIX-5269 use AccessChecker to check for user permisssions

2019-06-20 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 138c495f2f6875bfca812ab975a2db4a63385e39
Author: Kiran Kumar Maturi 
AuthorDate: Thu Jun 20 09:03:41 2019 +0530

PHOENIX-5269 use AccessChecker to check for user permisssions
---
 .../apache/phoenix/end2end/PermissionsCacheIT.java | 105 +
 .../coprocessor/PhoenixAccessController.java   |  91 --
 pom.xml|   2 +-
 3 files changed, 190 insertions(+), 8 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java
new file mode 100644
index 000..c2f7ce2
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertTrue;
+
+import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.AuthUtil;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.security.access.AccessControlLists;
+import org.apache.hadoop.hbase.security.access.Permission.Action;
+import org.apache.hadoop.hbase.security.access.TablePermission;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import com.google.common.collect.ListMultimap;
+
+public class PermissionsCacheIT extends BasePermissionsIT {
+
+
+    public PermissionsCacheIT(boolean isNamespaceMapped) throws Exception {
+        super(isNamespaceMapped);
+    }
+
+    @Test
+    public void testPermissionsCachedWithAccessChecker() throws Throwable {
+        if (!isNamespaceMapped) {
+            return;
+        }
+        startNewMiniCluster();
+        final String schema = generateUniqueName();
+        final String tableName = generateUniqueName();
+        final String phoenixTableName = SchemaUtil.getTableName(schema, tableName);
+        try (Connection conn = getConnection()) {
+            grantPermissions(regularUser1.getShortName(), PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES,
+                Action.READ, Action.EXEC);
+            grantPermissions(regularUser1.getShortName(), Collections.singleton("SYSTEM:SEQUENCE"),
+                Action.WRITE, Action.READ, Action.EXEC);
+            superUser1.runAs(new PrivilegedExceptionAction<Void>() {
+                @Override
+                public Void run() throws Exception {
+                    try {
+                        verifyAllowed(createSchema(schema), superUser1);
+                        grantPermissions(regularUser1.getShortName(), schema, Action.CREATE);
+                        grantPermissions(AuthUtil.toGroupEntry(GROUP_SYSTEM_ACCESS), schema,
+                            Action.CREATE);
+                    } catch (Throwable e) {
+                        if (e instanceof Exception) {
+                            throw (Exception) e;
+                        } else {
+                            throw new Exception(e);
+                        }
+                    }
+                    return null;
+                }
+            });
+            verifyAllowed(createTable(phoenixTableName), regularUser1);
+            HBaseTestingUtility utility = getUtility();
+            Configuration conf = utility.getConfiguration();
+            ZooKeeperWatcher zkw = HBaseTestingUtility.getZooKeeperWatcher(utility);
+            String aclZnodeParent = conf.get("zookeeper.znode.acl.parent", "acl");
+            String aclZNode = ZKUtil.joinZNode(zkw.baseZNode, aclZnodeParent);
+            String tableZNode = ZKUtil.joinZNode(aclZNode, "@" + schema);
+            byte[] data = ZKUtil.getData(zkw, tableZNode);
+            ListMultimap<String, TablePermission> userPermissions =
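
The message is truncated above, but its verification step deserializes the grants that HBase's AccessController mirrors into ZooKeeper. A hedged sketch of that step, using HBase 1.x APIs (ZKUtil and AccessControlLists.readPermissions; the helper itself is illustrative, not the test's code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.security.access.AccessControlLists;
import org.apache.hadoop.hbase.security.access.TablePermission;
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

import com.google.common.collect.ListMultimap;

public class AclZnodeSketch {
    // Reads the permissions HBase caches in ZooKeeper for a namespace; namespace
    // entries live under "@<name>" below the acl parent znode, as in the test.
    static ListMultimap<String, TablePermission> readNamespaceAcls(
            ZooKeeperWatcher zkw, Configuration conf, String namespace) throws Exception {
        String aclZnodeParent = conf.get("zookeeper.znode.acl.parent", "acl");
        String aclZNode = ZKUtil.joinZNode(zkw.baseZNode, aclZnodeParent);
        String nsZNode = ZKUtil.joinZNode(aclZNode, "@" + namespace);
        byte[] data = ZKUtil.getData(zkw, nsZNode);
        return AccessControlLists.readPermissions(data, conf);
    }
}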

[phoenix] branch 4.14-HBase-1.3 updated (18007c8 -> 138c495)

2019-06-20 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a change to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 18007c8  PHOENIX-5343 OrphanViewTool should not check Index Tables
 new 7095dd5  PHOENIX-5303 Fix index failures with some versions of HBase.
 new 138c495  PHOENIX-5269 use AccessChecker to check for user permisssions

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/phoenix/end2end/PermissionsCacheIT.java | 105 +
 .../coprocessor/PhoenixAccessController.java   |  91 --
 .../hbase/index/scanner/ScannerBuilder.java|   9 +-
 pom.xml|   2 +-
 4 files changed, 197 insertions(+), 10 deletions(-)
 create mode 100644 phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java



[phoenix] 01/02: PHOENIX-5303 Fix index failures with some versions of HBase.

2019-06-20 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 7095dd575da2aad38c2d8bd173d83dd4b6994f61
Author: Lars Hofhansl 
AuthorDate: Tue May 28 10:49:43 2019 -0700

PHOENIX-5303 Fix index failures with some versions of HBase.
---
 .../org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java
index 703fcd2..318517c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java
@@ -24,6 +24,7 @@ import java.util.HashSet;
 import java.util.Set;
 
 import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValueUtil;
 import org.apache.hadoop.hbase.client.Mutation;
@@ -33,6 +34,7 @@ import org.apache.hadoop.hbase.filter.FamilyFilter;
 import org.apache.hadoop.hbase.filter.Filter;
 import org.apache.hadoop.hbase.filter.FilterBase;
 import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.QualifierFilter;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.covered.KeyValueStore;
@@ -92,10 +94,13 @@ public class ScannerBuilder {
       Filter columnFilter =
           new FamilyFilter(CompareOp.EQUAL, new BinaryComparator(ref.getFamily()));
       // combine with a match for the qualifier, if the qualifier is a specific qualifier
+      // in that case we *must* let empty qualifiers through for family delete markers
       if (!Bytes.equals(ColumnReference.ALL_QUALIFIERS, ref.getQualifier())) {
         columnFilter =
-            new FilterList(columnFilter, new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(
-                ref.getQualifier())));
+            new FilterList(columnFilter,
+                new FilterList(Operator.MUST_PASS_ONE,
+                    new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(ref.getQualifier())),
+                    new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(HConstants.EMPTY_BYTE_ARRAY))));
       }
       columnFilters.addFilter(columnFilter);
     }
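
To make the new filter's shape easier to see outside the diff, here is a small standalone sketch using plain HBase 1.x client APIs (illustrative, not Phoenix code): the outer FilterList ANDs a family match with a qualifier check, while the inner MUST_PASS_ONE list ORs the exact qualifier against the empty qualifier, which is what lets family delete markers through.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FamilyFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FilterList.Operator;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteMarkerFilterSketch {

    public static Filter build(byte[] family, byte[] qualifier) {
        Filter familyMatches =
                new FamilyFilter(CompareOp.EQUAL, new BinaryComparator(family));
        // MUST_PASS_ONE is a logical OR: keep cells whose qualifier matches the
        // indexed column exactly, OR whose qualifier is empty, as it is on a
        // family delete marker.
        Filter qualifierMatchesOrEmpty = new FilterList(Operator.MUST_PASS_ONE,
                new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(qualifier)),
                new QualifierFilter(CompareOp.EQUAL,
                        new BinaryComparator(HConstants.EMPTY_BYTE_ARRAY)));
        // The default FilterList operator is MUST_PASS_ALL (logical AND).
        return new FilterList(familyMatches, qualifierMatchesOrEmpty);
    }

    public static void main(String[] args) {
        System.out.println(build(Bytes.toBytes("0"), Bytes.toBytes("V1")));
    }
}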



[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5303 Fix index failures with some versions of HBase.

2019-06-20 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 0f07bdf  PHOENIX-5303 Fix index failures with some versions of HBase.
0f07bdf is described below

commit 0f07bdf4db6da166249ce909216881500dd11080
Author: Lars Hofhansl 
AuthorDate: Tue May 28 10:49:43 2019 -0700

PHOENIX-5303 Fix index failures with some versions of HBase.
---
 .../org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java
index 703fcd2..318517c 100644
[diff identical to the 4.14-HBase-1.3 commit above]