Build failed in Jenkins: Phoenix-4.x-HBase-1.5 #8

2019-05-28 Thread Apache Jenkins Server


Changes:

[larsh] PHOENIX-5304 LocalIndexSplitMergeIT fails with HBase 1.5.x.

--
[...truncated 503.03 KB...]
[INFO] Running org.apache.phoenix.end2end.PermissionNSEnabledIT
[INFO] Running org.apache.phoenix.end2end.PermissionNSDisabledIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.468 s - in org.apache.phoenix.end2end.PartialResultServerConfigurationIT
[INFO] Running org.apache.phoenix.end2end.PermissionsCacheIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 368.61 s - in org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 166.708 s - in org.apache.phoenix.end2end.PermissionNSDisabledIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 175.123 s - in org.apache.phoenix.end2end.PermissionNSEnabledIT
[INFO] Running org.apache.phoenix.end2end.PhoenixDriverIT
[INFO] Running org.apache.phoenix.end2end.QueryLoggerIT
[INFO] Running org.apache.phoenix.end2end.QueryTimeoutIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 174.626 s - in org.apache.phoenix.end2end.PermissionsCacheIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.55 s - in org.apache.phoenix.end2end.QueryTimeoutIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.929 s - in org.apache.phoenix.end2end.PhoenixDriverIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 288.852 s - in org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running org.apache.phoenix.end2end.QueryWithLimitIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.135 s - in org.apache.phoenix.end2end.QueryLoggerIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.052 s - in org.apache.phoenix.end2end.QueryWithLimitIT
[INFO] Running org.apache.phoenix.end2end.RegexBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.RebuildIndexConnectionPropsIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.426 s - in org.apache.phoenix.end2end.RebuildIndexConnectionPropsIT
[INFO] Running org.apache.phoenix.end2end.RenewLeaseIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.414 s - in org.apache.phoenix.end2end.RenewLeaseIT
[INFO] Running org.apache.phoenix.end2end.SequencePointInTimeIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 s - in org.apache.phoenix.end2end.SequencePointInTimeIT
[INFO] Running org.apache.phoenix.end2end.SystemCatalogCreationOnConnectionIT
[INFO] Running org.apache.phoenix.end2end.StatsEnabledSplitSystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.699 s - in org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.427 s - in org.apache.phoenix.end2end.RegexBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.SystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.UpdateCacheAcrossDifferentClientsIT
[INFO] Running org.apache.phoenix.end2end.TableSnapshotReadsMapReduceIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 452.644 s - in org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.432 s - in org.apache.phoenix.end2end.StatsEnabledSplitSystemCatalogIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.849 s - in org.apache.phoenix.end2end.TableSnapshotReadsMapReduceIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.34 s - in org.apache.phoenix.end2end.SystemCatalogIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.376 s - in org.apache.phoenix.end2end.UpdateCacheAcrossDifferentClientsIT
[INFO] Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
[INFO] Running org.apache.phoenix.end2end.index.ImmutableIndexIT
[INFO] Running org.apache.phoenix.end2end.index.IndexRebuildIncrementDisableCountIT
[INFO] Running org.apache.phoenix.end2end.index.LocalIndexIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.028 s - in org.apache.phoenix.end2end.index.IndexRebuildIncrementDisableCountIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexFailureWithNamespaceIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 115.614 s - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexRebuilderIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.886 s - in 

[phoenix] branch 4.x-cdh5.15 deleted (was 4bea60d)

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a change to branch 4.x-cdh5.15
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


 was 4bea60d  PHOENIX-5037 Fix maven site reporting warnings on build

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.
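The statement above can be demonstrated in a throwaway sandbox repo: deleting a branch does not discard commits that are still contained in other references. (Repo and branch names below are illustrative, not the actual Phoenix repository.)

```shell
work=$(mktemp -d)
cd "$work"
git init -q demo
cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "base"
git branch feature                  # the branch that will be deleted
git branch keeper                   # a second ref containing the same commit
sha=$(git rev-parse feature)
git branch -D feature               # delete, as in the notification above
git cat-file -t "$sha"              # the commit object is still present: "commit"
git branch --contains "$sha"        # 'keeper' (and the current branch) still contain it
```

`git branch --contains` is the quickest way to confirm which refs keep a commit reachable after such a deletion.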



[phoenix] 12/18: PHOENIX-5074 DropTableWithViewsIT.testDropTableWithChildViews is flapping

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 3f17a89e6c70b8f47023175e585e216dcfec5731
Author: Kadir 
AuthorDate: Thu Dec 20 19:38:44 2018 +

PHOENIX-5074 DropTableWithViewsIT.testDropTableWithChildViews is flapping
---
 .../phoenix/end2end/DropTableWithViewsIT.java  | 56 --
 1 file changed, 30 insertions(+), 26 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
index 9502218..a4cd354 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -20,7 +20,6 @@ package org.apache.phoenix.end2end;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
@@ -30,14 +29,16 @@ import java.util.Collection;
 
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.phoenix.coprocessor.TableViewFinderResult;
+import org.apache.phoenix.coprocessor.TaskRegionObserver;
 import org.apache.phoenix.coprocessor.ViewFinder;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 
-import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -50,6 +51,20 @@ public class DropTableWithViewsIT extends SplitSystemCatalogIT {
private final boolean columnEncoded;
private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + TENANT_ID_ATTRIB + "=" + TENANT1;
 
+private static RegionCoprocessorEnvironment TaskRegionEnvironment;
+
+@BeforeClass
+public static void doSetup() throws Exception {
+SplitSystemCatalogIT.doSetup();
+TaskRegionEnvironment =
+getUtility()
+.getRSForFirstRegionInTable(
+PhoenixDatabaseMetaData.SYSTEM_TASK_HBASE_TABLE_NAME)
+.getRegions(PhoenixDatabaseMetaData.SYSTEM_TASK_HBASE_TABLE_NAME)
+.get(0).getCoprocessorHost()
+.findCoprocessorEnvironment(TaskRegionObserver.class.getName());
+}
+
 public DropTableWithViewsIT(boolean isMultiTenant, boolean columnEncoded) {
 this.isMultiTenant = isMultiTenant;
 this.columnEncoded = columnEncoded;
@@ -108,30 +123,19 @@ public class DropTableWithViewsIT extends SplitSystemCatalogIT {
// Drop the base table
String dropTable = String.format("DROP TABLE IF EXISTS %s CASCADE", baseTable);
conn.createStatement().execute(dropTable);
-
-// Wait for the tasks for dropping child views to complete. The depth of the view tree is 2, so we expect that
-// this will be done in two task handling runs, i.e., in three task handling intervals at most in general
-// by assuming that each non-root level will be processed in one interval. To be on the safe side, we will
-// wait at most 10 intervals.
-long halfTimeInterval = config.getLong(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB,
-QueryServicesOptions.DEFAULT_TASK_HANDLING_INTERVAL_MS)/2;
-ResultSet rs = null;
-boolean timedOut = true;
-Thread.sleep(3 * halfTimeInterval);
-for (int i = 3; i < 20; i++) {
-rs = conn.createStatement().executeQuery("SELECT * " +
-" FROM " + PhoenixDatabaseMetaData.SYSTEM_TASK_NAME +
-" WHERE " + PhoenixDatabaseMetaData.TASK_TYPE + " = " +
-PTable.TaskType.DROP_CHILD_VIEWS.getSerializedValue());
-Thread.sleep(halfTimeInterval);
-if (!rs.next()) {
-timedOut = false;
-break;
-}
-}
-if (timedOut) {
-fail("Drop child view task execution timed out!");
-}
+// Run DropChildViewsTask to complete the tasks for dropping child views. The depth of the view tree is 2,
+// so we expect that this will be done in two task handling runs as each non-root level will be processed
+// in one run
+TaskRegionObserver.DropChildViewsTask task =

[phoenix] 13/18: PHOENIX-5074; fix compilation failure.

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 82172a167de8adb709374b03c9a43bc1dc494e74
Author: Lars Hofhansl 
AuthorDate: Tue Dec 25 10:21:35 2018 +

PHOENIX-5074; fix compilation failure.
---
 .../src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
index a4cd354..6aaf703 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -57,10 +57,10 @@ public class DropTableWithViewsIT extends SplitSystemCatalogIT {
 public static void doSetup() throws Exception {
 SplitSystemCatalogIT.doSetup();
 TaskRegionEnvironment =
-getUtility()
+(RegionCoprocessorEnvironment)getUtility()
 .getRSForFirstRegionInTable(
 PhoenixDatabaseMetaData.SYSTEM_TASK_HBASE_TABLE_NAME)
-.getRegions(PhoenixDatabaseMetaData.SYSTEM_TASK_HBASE_TABLE_NAME)
+.getOnlineRegions(PhoenixDatabaseMetaData.SYSTEM_TASK_HBASE_TABLE_NAME)
 .get(0).getCoprocessorHost()
 .findCoprocessorEnvironment(TaskRegionObserver.class.getName());
 }



[phoenix] 18/18: PHOENIX-5059 Use the Datasource v2 api in the spark connector

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit b129be998de7eac2bd8b3bf78f5feb7812b8f642
Author: Thomas D'Silva 
AuthorDate: Tue Dec 11 22:59:39 2018 +

PHOENIX-5059 Use the Datasource v2 api in the spark connector
---
 .../phoenix/end2end/salted/BaseSaltedTableIT.java  |   6 +-
 phoenix-spark/pom.xml  |   8 +
 .../java/org/apache/phoenix/spark/OrderByIT.java   | 117 ++---
 .../java/org/apache/phoenix/spark/SparkUtil.java   |  45 +-
 phoenix-spark/src/it/resources/globalSetup.sql |   6 +-
 .../phoenix/spark/AbstractPhoenixSparkIT.scala |  12 +-
 .../org/apache/phoenix/spark/PhoenixSparkIT.scala  | 543 +++--
 .../spark/PhoenixSparkITTenantSpecific.scala   |  18 +-
 .../spark/datasource/v2/PhoenixDataSource.java |  82 
 .../v2/reader/PhoenixDataSourceReadOptions.java|  51 ++
 .../v2/reader/PhoenixDataSourceReader.java | 201 
 .../v2/reader/PhoenixInputPartition.java   |  44 ++
 .../v2/reader/PhoenixInputPartitionReader.java | 168 +++
 .../v2/writer/PhoenixDataSourceWriteOptions.java   | 109 +
 .../datasource/v2/writer/PhoenixDataWriter.java| 100 
 .../v2/writer/PhoenixDataWriterFactory.java|  19 +
 .../v2/writer/PhoenixDatasourceWriter.java |  34 ++
 ...org.apache.spark.sql.sources.DataSourceRegister |   1 +
 .../apache/phoenix/spark/ConfigurationUtil.scala   |   1 +
 .../apache/phoenix/spark/DataFrameFunctions.scala  |   2 +-
 .../org/apache/phoenix/spark/DefaultSource.scala   |   1 +
 ...lation.scala => FilterExpressionCompiler.scala} | 109 ++---
 .../org/apache/phoenix/spark/PhoenixRDD.scala  |  61 +--
 .../phoenix/spark/PhoenixRecordWritable.scala  |   2 +-
 .../org/apache/phoenix/spark/PhoenixRelation.scala |  70 +--
 .../apache/phoenix/spark/ProductRDDFunctions.scala |   1 +
 .../phoenix/spark/SparkContextFunctions.scala  |   1 +
 .../org/apache/phoenix/spark/SparkSchemaUtil.scala |  84 
 .../phoenix/spark/SparkSqlContextFunctions.scala   |   1 +
 .../datasources/jdbc/PhoenixJdbcDialect.scala  |  21 +
 .../execution/datasources/jdbc/SparkJdbcUtil.scala | 309 
 31 files changed, 1664 insertions(+), 563 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/BaseSaltedTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/BaseSaltedTableIT.java
index 3051cd6..ef127ac 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/BaseSaltedTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/BaseSaltedTableIT.java
@@ -194,7 +194,7 @@ public abstract class BaseSaltedTableIT extends ParallelStatsDisabledIT  {
 .setSelectColumns(
 Lists.newArrayList("A_INTEGER", "A_STRING", "A_ID", "B_STRING", "B_INTEGER"))
 .setFullTableName(tableName)
-.setWhereClause("a_integer = 1 AND a_string >= 'ab' AND a_string < 'de' AND a_id = '123'");
+.setWhereClause("A_INTEGER = 1 AND A_STRING >= 'ab' AND A_STRING < 'de' AND A_ID = '123'");
 rs = executeQuery(conn, queryBuilder);
 assertTrue(rs.next());
 assertEquals(1, rs.getInt(1));
@@ -205,7 +205,7 @@ public abstract class BaseSaltedTableIT extends ParallelStatsDisabledIT  {
 assertFalse(rs.next());
 
 // all single slots with one value.
-queryBuilder.setWhereClause("a_integer = 1 AND a_string = 'ab' AND a_id = '123'");
+queryBuilder.setWhereClause("A_INTEGER = 1 AND A_STRING = 'ab' AND A_ID = '123'");
 rs = executeQuery(conn, queryBuilder);
 assertTrue(rs.next());
 assertEquals(1, rs.getInt(1));
@@ -216,7 +216,7 @@ public abstract class BaseSaltedTableIT extends ParallelStatsDisabledIT  {
 assertFalse(rs.next());
 
 // all single slots with multiple values.
-queryBuilder.setWhereClause("a_integer in (2, 4) AND a_string = 'abc' AND a_id = '123'");
+queryBuilder.setWhereClause("A_INTEGER in (2, 4) AND A_STRING = 'abc' AND A_ID = '123'");
 rs = executeQuery(conn, queryBuilder);
 
 assertTrue(rs.next());
diff --git a/phoenix-spark/pom.xml b/phoenix-spark/pom.xml
index e2790bd..9cc3c3d 100644
--- a/phoenix-spark/pom.xml
+++ b/phoenix-spark/pom.xml
@@ -487,6 +487,14 @@
 src/it/scala
 
src/it/resources
 
+<plugin>
+<groupId>org.apache.maven.plugins</groupId>
+<artifactId>maven-compiler-plugin</artifactId>
+<configuration>
+<source>1.8</source>
+<target>1.8</target>
+</configuration>
+</plugin>
   
 org.codehaus.mojo
 build-helper-maven-plugin
diff --git a/phoenix-spark/src/it/java/org/apache/phoenix/spark/OrderByIT.java 
b/phoenix-spark/src/it/java/org/apache/phoenix/spark/OrderByIT.java
index 83578ba..1257c43 

[phoenix] 17/18: Changes for CDH 5.16.x

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit cea84e84d5b294bc1205a9b673037984b700ce63
Author: pboado 
AuthorDate: Tue May 28 23:45:56 2019 +0100

Changes for CDH 5.16.x
---
 phoenix-assembly/pom.xml   |  2 +-
 phoenix-client/pom.xml |  2 +-
 phoenix-core/pom.xml   |  2 +-
 phoenix-flume/pom.xml  |  2 +-
 phoenix-hive/pom.xml   |  2 +-
 phoenix-kafka/pom.xml  |  2 +-
 phoenix-load-balancer/pom.xml  |  2 +-
 phoenix-parcel/pom.xml |  2 +-
 phoenix-pherf/pom.xml  |  2 +-
 phoenix-pig/pom.xml|  2 +-
 phoenix-queryserver-client/pom.xml |  2 +-
 phoenix-queryserver/pom.xml|  2 +-
 phoenix-server/pom.xml |  2 +-
 phoenix-spark/pom.xml  |  2 +-
 phoenix-tracing-webapp/pom.xml |  2 +-
 pom.xml| 10 +-
 16 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 300b4f6..5c2aeb5 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-assembly
   Phoenix Assembly
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index cfed3ce..3028c81 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-client
   Phoenix Client
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 8caf88f..043505a 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-core
   Phoenix Core
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index c67de23..5711714 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-flume
   Phoenix - Flume
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index 50670e0..8af7c16 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-hive
   Phoenix - Hive
diff --git a/phoenix-kafka/pom.xml b/phoenix-kafka/pom.xml
index 460eb5a..6da5a58 100644
--- a/phoenix-kafka/pom.xml
+++ b/phoenix-kafka/pom.xml
@@ -26,7 +26,7 @@

org.apache.phoenix
phoenix
-   4.15.0-cdh5.15.1
+   4.15.0-cdh5.16.2

phoenix-kafka
Phoenix - Kafka
diff --git a/phoenix-load-balancer/pom.xml b/phoenix-load-balancer/pom.xml
index a8319e9..a59ee06 100644
--- a/phoenix-load-balancer/pom.xml
+++ b/phoenix-load-balancer/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-load-balancer
   Phoenix Load Balancer
diff --git a/phoenix-parcel/pom.xml b/phoenix-parcel/pom.xml
index 417a2db..eb2f254 100644
--- a/phoenix-parcel/pom.xml
+++ b/phoenix-parcel/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-parcel
   Phoenix Parcels for CDH
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index cb648e4..340bb58 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@

org.apache.phoenix
phoenix
-   4.15.0-cdh5.15.1
+   4.15.0-cdh5.16.2

 
phoenix-pherf
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index 48ffb91..8f96d6f 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-pig
   Phoenix - Pig
diff --git a/phoenix-queryserver-client/pom.xml b/phoenix-queryserver-client/pom.xml
index a87d338..ea386d7 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-queryserver-client
   Phoenix Query Server Client
diff --git a/phoenix-queryserver/pom.xml b/phoenix-queryserver/pom.xml
index f91fce5..0a19b6d 100644
--- a/phoenix-queryserver/pom.xml
+++ b/phoenix-queryserver/pom.xml
@@ -26,7 +26,7 @@
   
 org.apache.phoenix
 phoenix
-4.15.0-cdh5.15.1
+4.15.0-cdh5.16.2
   
   phoenix-queryserver
   Phoenix Query Server
diff --git a/phoenix-server/pom.xml b/phoenix-server/pom.xml
index def100c..18a5ab9 100644
--- a/phoenix-server/pom.xml
+++ b/phoenix-server/pom.xml
@@ -27,7 +27,7 @@
   
 

[phoenix] 11/18: [PHOENIX-3623] Integrate Omid with Phoenix.

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 81f850311c4b03cb900a9dea079ee311d9c685fd
Author: Ohad Shacham 
AuthorDate: Thu Dec 20 12:15:03 2018 +

[PHOENIX-3623] Integrate Omid with Phoenix.

This commit finishes the integration of Omid as a transaction processing engine for Phoenix.
More information regarding the integration is available in [PHOENIX-3623] and in [OMID-82], the corresponding Omid JIRA.
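For context, with this integration in place the transaction provider is chosen per table. A sketch of the client-side setup, based on the Phoenix transaction documentation for the 4.15 line (property and option names should be verified against the actual release; the table name is illustrative):

```sql
-- hbase-site.xml (client side) must first enable transactions:
--   phoenix.transactions.enabled = true

-- Create a table that uses Omid as its transaction provider:
CREATE TABLE my_txn_table (k BIGINT PRIMARY KEY, v VARCHAR)
    TRANSACTIONAL=true, TRANSACTION_PROVIDER='OMID';
```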
---
 bin/omid-env.sh|  43 
 bin/omid-server-configuration.yml  |  25 +++
 bin/omid.sh|  93 +
 phoenix-assembly/pom.xml   |   5 +
 .../build/components/all-common-dependencies.xml   |  28 +++
 phoenix-core/pom.xml   |  46 +
 .../phoenix/coprocessor/OmidGCProcessor.java   |   7 +-
 .../coprocessor/OmidTransactionalProcessor.java|   8 +-
 .../transaction/OmidTransactionContext.java| 217 -
 .../transaction/OmidTransactionProvider.java   | 106 +-
 .../phoenix/transaction/OmidTransactionTable.java  |  64 +-
 .../phoenix/transaction/TransactionFactory.java|   5 +-
 .../phoenix/query/QueryServicesTestImpl.java   |   1 -
 phoenix-server/pom.xml |   1 +
 pom.xml|  47 +
 15 files changed, 665 insertions(+), 31 deletions(-)

diff --git a/bin/omid-env.sh b/bin/omid-env.sh
new file mode 100644
index 000..820cdaa
--- /dev/null
+++ b/bin/omid-env.sh
@@ -0,0 +1,43 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Set the flags to pass to the jvm when running omid
+# export JVM_FLAGS=-Xmx8096m
+# -------------------------------------------------------------------------
+# Check if HADOOP_CONF_DIR and HBASE_CONF_DIR are set
+# -------------------------------------------------------------------------
+export JVM_FLAGS=-Xmx4096m
+if [ -z ${HADOOP_CONF_DIR+x} ]; then
+if [ -z ${HADOOP_HOME+x} ]; then
+echo "WARNING: HADOOP_HOME or HADOOP_CONF_DIR are unset";
+else
+export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
+fi
+else
+echo "HADOOP_CONF_DIR is set to '$HADOOP_CONF_DIR'";
+fi
+
+if [ -z ${HBASE_CONF_DIR+x} ]; then
+if [ -z ${HBASE_HOME+x} ]; then
+echo "WARNING: HBASE_HOME or HBASE_CONF_DIR are unset";
+else
+export HBASE_CONF_DIR=${HBASE_HOME}/conf
+fi
+else
+echo "HBASE_CONF_DIR is set to '$HBASE_CONF_DIR'";
+fi
diff --git a/bin/omid-server-configuration.yml b/bin/omid-server-configuration.yml
new file mode 100644
index 000..8d1616e
--- /dev/null
+++ b/bin/omid-server-configuration.yml
@@ -0,0 +1,25 @@
+# =========================================================================
+#
+# Omid TSO Server Configuration
+# -------------------------------------------------------------------------
+#
+# Tune here the default values for TSO server config parameters found in 'default-omid-server-configuration.yml' file
+#
+# =========================================================================
+
+
+timestampStoreModule: !!org.apache.omid.timestamp.storage.HBaseTimestampStorageModule [ ]
+commitTableStoreModule: !!org.apache.omid.committable.hbase.HBaseCommitTableStorageModule [ ]
+
+metrics: !!org.apache.omid.metrics.CodahaleMetricsProvider [
+!!org.apache.omid.metrics.CodahaleMetricsConfig {
+  outputFreqInSecs: 10,
+  reporters: !!set {
+!!org.apache.omid.metrics.CodahaleMetricsConfig$Reporter CSV
+  },
+  csvDir: "csvMetrics",
+}
+]
+
+timestampType: WORLD_TIME
+lowLatency: false
diff --git a/bin/omid.sh b/bin/omid.sh
new file mode 100755
index 000..5b33ed5
--- /dev/null
+++ b/bin/omid.sh
@@ -0,0 +1,93 @@
+#!/bin/bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the 

[phoenix] 16/18: PHOENIX-5055 Split mutations batches probably affects correctness of index data

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 0e5a2635ea023d72459e63bd6443f3733642482b
Author: jaanai 
AuthorDate: Sat Jan 5 13:17:42 2019 +

PHOENIX-5055 Split mutations batches probably affects correctness of index 
data
---
 .../apache/phoenix/end2end/MutationStateIT.java| 47 +-
 .../org/apache/phoenix/end2end/QueryMoreIT.java|  6 +--
 .../org/apache/phoenix/execute/MutationState.java  | 41 ++-
 .../apache/phoenix/execute/MutationStateTest.java  | 41 +++
 4 files changed, 122 insertions(+), 13 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutationStateIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutationStateIT.java
index 36782c1..5a5fb56 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutationStateIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutationStateIT.java
@@ -25,8 +25,14 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
 import java.sql.Statement;
+import java.util.Iterator;
 import java.util.Properties;
 
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.execute.MutationState;
 import org.apache.phoenix.jdbc.PhoenixConnection;
@@ -157,5 +163,44 @@ public class MutationStateIT extends ParallelStatsDisabledIT {
stmt.execute();
assertTrue("Mutation state size should decrease", prevEstimatedSize+4 > state.getEstimatedSize());
}
-
+
+@Test
+public void testSplitMutationsIntoSameGroupForSingleRow() throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+Properties props = new Properties();
+props.put("phoenix.mutate.batchSize", "2");
+try (PhoenixConnection conn = DriverManager.getConnection(getUrl(), props).unwrap(PhoenixConnection.class)) {
+conn.setAutoCommit(false);
+conn.createStatement().executeUpdate(
+"CREATE TABLE "  + tableName + " ("
++ "A VARCHAR NOT NULL PRIMARY KEY,"
++ "B VARCHAR,"
++ "C VARCHAR,"
++ "D VARCHAR) COLUMN_ENCODED_BYTES = 0");
+conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on "  + tableName + " (C) INCLUDE(D)");
+
+conn.createStatement().executeUpdate("UPSERT INTO "  + tableName + "(A,B,C,D) VALUES ('A2','B2','C2','D2')");
+conn.createStatement().executeUpdate("UPSERT INTO "  + tableName + "(A,B,C,D) VALUES ('A3','B3', 'C3', null)");
+conn.commit();
+
+Table htable = conn.getQueryServices().getTable(Bytes.toBytes(tableName));
+Scan scan = new Scan();
+scan.setRaw(true);
+Iterator scannerIter = htable.getScanner(scan).iterator();
+while (scannerIter.hasNext()) {
+long ts = -1;
+Result r = scannerIter.next();
+for (Cell cell : r.listCells()) {
+if (ts == -1) {
+ts = cell.getTimestamp();
+} else {
+assertEquals("(" + cell.toString() + ") has different ts", ts, cell.getTimestamp());
+}
+}
+}
+htable.close();
+}
+}
+
 }
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
index 2b1d31e..7c45f1a 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
@@ -493,14 +493,14 @@ public class QueryMoreIT extends ParallelStatsDisabledIT {
 connection.commit();
 assertEquals(2L, connection.getMutationState().getBatchCount());
 
-// set the batch size (rows) to 1
-connectionProperties.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, "1");
+// set the batch size (rows) to 2 since there are at least 2 mutations when updating a single row
+connectionProperties.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, "2");
connectionProperties.setProperty(QueryServices.MUTATE_BATCH_SIZE_BYTES_ATTRIB, "128");
connection = (PhoenixConnection) DriverManager.getConnection(getUrl(), connectionProperties);
 upsertRows(connection, fullTableName);
 connection.commit();
 // each row 

[phoenix] 15/18: PHOENIX-4820 Optimize OrderBy for ClientAggregatePlan

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 460da6136a75245b119d4f0393e08e9f61d579d5
Author: chenglei 
AuthorDate: Sat Jan 5 01:58:00 2019 +

PHOENIX-4820 Optimize OrderBy for ClientAggregatePlan
---
 .../org/apache/phoenix/end2end/AggregateIT.java| 104 +++
 .../apache/phoenix/compile/GroupByCompiler.java|   8 +-
 .../apache/phoenix/compile/OrderByCompiler.java|  18 +-
 .../phoenix/compile/OrderPreservingTracker.java|  53 ++--
 .../org/apache/phoenix/compile/QueryCompiler.java  |  12 +-
 .../org/apache/phoenix/compile/RowProjector.java   |  15 +-
 .../phoenix/expression/BaseCompoundExpression.java |  11 +-
 .../apache/phoenix/expression/BaseExpression.java  |  11 +
 .../phoenix/expression/BaseSingleExpression.java   |   5 +
 .../phoenix/expression/DelegateExpression.java |   5 +
 .../org/apache/phoenix/expression/Expression.java  |   6 +
 .../expression/ProjectedColumnExpression.java  |   8 +-
 .../expression/function/RandomFunction.java|   5 +
 .../expression/visitor/CloneExpressionVisitor.java |   6 +-
 .../CloneNonDeterministicExpressionVisitor.java|  31 --
 .../org/apache/phoenix/util/ExpressionUtil.java| 160 +-
 .../apache/phoenix/compile/QueryCompilerTest.java  | 324 +
 .../expression/ArithmeticOperationTest.java|   8 +-
 18 files changed, 705 insertions(+), 85 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
index 8916d4d..d52025e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
@@ -227,5 +227,109 @@ public class AggregateIT extends BaseAggregateIT {
 assertEquals(4, rs.getLong(1));
 }
 }
+
+@Test
+public void testOrderByOptimizeForClientAggregatePlanBug4820() throws Exception {
+doTestOrderByOptimizeForClientAggregatePlanBug4820(false,false);
+doTestOrderByOptimizeForClientAggregatePlanBug4820(false,true);
+doTestOrderByOptimizeForClientAggregatePlanBug4820(true,false);
+doTestOrderByOptimizeForClientAggregatePlanBug4820(true,true);
+}
+
+private void doTestOrderByOptimizeForClientAggregatePlanBug4820(boolean desc, boolean salted) throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = null;
+try {
+conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+String sql = "create table " + tableName + "( "+
+" pk1 varchar not null , " +
+" pk2 varchar not null, " +
+" pk3 varchar not null," +
+" v1 varchar, " +
+" v2 varchar, " +
+" CONSTRAINT TEST_PK PRIMARY KEY ( "+
+"pk1 "+(desc ? "desc" : "")+", "+
+"pk2 "+(desc ? "desc" : "")+", "+
+"pk3 "+(desc ? "desc" : "")+
+" )) "+(salted ? "SALT_BUCKETS =4" : "split on('b')");
+conn.createStatement().execute(sql);
+
+conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('a11','a12','a13','a14','a15')");
+conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('a21','a22','a23','a24','a25')");
+conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('a31','a32','a33','a34','a35')");
+conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('b11','b12','b13','b14','b15')");
+conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('b21','b22','b23','b24','b25')");
+conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('b31','b32','b33','b34','b35')");
+conn.commit();
+
+sql = "select a.ak3 "+
+  "from (select pk1 ak1,pk2 ak2,pk3 ak3, substr(v1,1,1) av1,substr(v2,1,1) av2 from "+tableName+" order by pk2,pk3 limit 10) a "+
+  "group by a.ak3,a.av1 order by a.ak3 desc,a.av1";
+ResultSet rs = conn.prepareStatement(sql).executeQuery();
+assertResultSet(rs, new Object[][]{{"b33"},{"b23"},{"b13"},{"a33"},{"a23"},{"a13"}});
+
+sql = "select a.ak3 "+
+  "from (select pk1 ak1,pk2 ak2,pk3 ak3, substr(v1,1,1) av1,substr(v2,1,1) av2 from "+tableName+" order by pk2,pk3 limit 10) a "+
+  "group by a.ak3,a.av1 order by a.ak3,a.av1";
+rs = conn.prepareStatement(sql).executeQuery();
+assertResultSet(rs, new Object[][]{{"a13"},{"a23"},{"a33"},{"b13"},{"b23"},{"b33"}});
+
+sql = "select a.ak3 "+
+ 

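The two queries in the test above expect the same grouped ak3 values in opposite orders. As a standalone sanity check (plain Java, with the expected values copied from the test's assertResultSet calls), the DESC expectation is exactly the ASC expectation reversed:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sanity check of the expected result sets in the test above: the
// "order by a.ak3 desc" expectation is the reverse of the ascending one.
public class OrderByExpectation {

    static final List<String> ASC_EXPECTED =
            Arrays.asList("a13", "a23", "a33", "b13", "b23", "b33");
    static final List<String> DESC_EXPECTED =
            Arrays.asList("b33", "b23", "b13", "a33", "a23", "a13");

    static boolean descIsReverseOfAsc() {
        List<String> reversed = new ArrayList<>(ASC_EXPECTED);
        Collections.reverse(reversed);
        return reversed.equals(DESC_EXPECTED);
    }

    public static void main(String[] args) {
        System.out.println(descIsReverseOfAsc()); // true
    }
}
```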
[phoenix] 02/18: ScanningResultIterator metric RowsScanned not set. PHOENIX-5051

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit c360f87cf14c2540137798480cd3d70a933ebbbf
Author: chfeng 
AuthorDate: Wed Dec 5 02:40:29 2018 +

ScanningResultIterator metric RowsScanned not set. PHOENIX-5051
---
 .../main/java/org/apache/phoenix/iterate/ScanningResultIterator.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
index f02e9d3..893eaa2 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
@@ -117,7 +117,7 @@ public class ScanningResultIterator implements 
ResultIterator {
 scanMetricsMap.get(RPC_RETRIES_METRIC_NAME));
 changeMetric(scanMetricsHolder.getCountOfRemoteRPCRetries(),
 scanMetricsMap.get(REMOTE_RPC_RETRIES_METRIC_NAME));
-changeMetric(scanMetricsHolder.getCountOfRowsFiltered(),
+changeMetric(scanMetricsHolder.getCountOfRowsScanned(),
 scanMetricsMap.get(COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME));
 changeMetric(scanMetricsHolder.getCountOfRowsFiltered(),
 
scanMetricsMap.get(COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME));
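The one-line fix above corrects a copy-paste error: the rows-scanned value was being written into the rows-filtered holder, so the RowsScanned metric was never set. A minimal stand-in sketch of the pattern (hypothetical types and map keys, not Phoenix's actual metric classes):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Simplified stand-in for the metric-copy pattern fixed above: each scan
// metric from the map must land in its own holder counter.
public class MetricCopy {

    // Mirrors the shape of changeMetric(holder, value): absent metrics are ignored.
    static void changeMetric(AtomicLong holder, Long value) {
        if (value != null) {
            holder.set(value);
        }
    }

    public static void main(String[] args) {
        Map<String, Long> scanMetrics = new HashMap<>();
        scanMetrics.put("ROWS_SCANNED", 100L);
        scanMetrics.put("ROWS_FILTERED", 7L);

        AtomicLong rowsScanned = new AtomicLong();
        AtomicLong rowsFiltered = new AtomicLong();

        // Before the fix, both calls targeted the rowsFiltered holder, so the
        // scanned count was lost and the filtered count was overwritten.
        changeMetric(rowsScanned, scanMetrics.get("ROWS_SCANNED"));
        changeMetric(rowsFiltered, scanMetrics.get("ROWS_FILTERED"));

        System.out.println(rowsScanned.get() + " " + rowsFiltered.get()); // 100 7
    }
}
```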



[phoenix] 08/18: PHOENIX-4983: Added missing apache license header.

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit f530f94659bcb337c8adce997ac7696431c719e2
Author: s.kadam 
AuthorDate: Fri Dec 14 16:04:29 2018 +

PHOENIX-4983: Added missing apache license header.
---
 .../org/apache/phoenix/end2end/UpsertWithSCNIT.java | 17 +
 1 file changed, 17 insertions(+)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertWithSCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertWithSCNIT.java
index 6f231ff..40bb883 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertWithSCNIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertWithSCNIT.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.phoenix.end2end;
 
 import org.apache.phoenix.exception.SQLExceptionCode;



[phoenix] 05/18: PHOENIX-5025 Tool to clean up orphan views

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit dff179b6c184bfeb4d28c090241cf08577ec4d85
Author: Kadir 
AuthorDate: Tue Nov 13 06:24:10 2018 +

PHOENIX-5025 Tool to clean up orphan views
---
 .../apache/phoenix/end2end/OrphanViewToolIT.java   | 472 +++
 .../apache/phoenix/mapreduce/OrphanViewTool.java   | 879 +
 2 files changed, 1351 insertions(+)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrphanViewToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrphanViewToolIT.java
new file mode 100644
index 000..f9a1785
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrphanViewToolIT.java
@@ -0,0 +1,472 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LINK_TYPE;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_SCHEM;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_TYPE;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.mapreduce.OrphanViewTool;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.AfterClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@RunWith(Parameterized.class)
+public class OrphanViewToolIT extends ParallelStatsDisabledIT {
+private static final Logger LOG = 
LoggerFactory.getLogger(OrphanViewToolIT.class);
+
+private final boolean isMultiTenant;
+private final boolean columnEncoded;
+
+private static final long fanout = 2;
+private static final long childCount = fanout;
+private static final long grandChildCount = fanout * fanout;
+private static final long grandGrandChildCount = fanout * fanout * fanout;
+
+private static final String filePath = "/tmp/";
+private static final String viewFileName = "/tmp/" + 
OrphanViewTool.fileName[OrphanViewTool.VIEW];
+private static final String physicalLinkFileName = "/tmp/" + 
OrphanViewTool.fileName[OrphanViewTool.PHYSICAL_TABLE_LINK];
+private static final String parentLinkFileName = "/tmp/" + 
OrphanViewTool.fileName[OrphanViewTool.PARENT_TABLE_LINK];
+private static final String childLinkFileName = "/tmp/" + 
OrphanViewTool.fileName[OrphanViewTool.CHILD_TABLE_LINK];
+
+protected static String SCHEMA1 = "SCHEMA1";
+protected static String SCHEMA2 = "SCHEMA2";
+protected static String SCHEMA3 = "SCHEMA3";
+protected static String SCHEMA4 = "SCHEMA4";
+
+private final String TENANT_SPECIFIC_URL = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=tenant";
+
+private static final String createBaseTableFirstPartDDL = "CREATE TABLE IF 
NOT EXISTS %s";
+private static final String createBaseTableSecondPartDDL = "(%s PK2 
VARCHAR NOT NULL, V1 VARCHAR, V2 VARCHAR " +
+" CONSTRAINT NAME_PK PRIMARY KEY (%s PK2)) %s";
+private static final String deleteTableRows = "DELETE FROM " + 
SYSTEM_CATALOG_NAME +
+" WHERE " + TABLE_SCHEM + " %s AND " +
+TABLE_TYPE + " = '" + PTableType.TABLE.getSerializedValue() + "'";
+
+private static final String createViewDDL = "CREATE 

[phoenix] 01/18: PHOENIX-4781 Create artifact jar so that shaded jar replaces it properly

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit de4e0416017ae27f78f9cb1bf98f09b88d844cfb
Author: Vincent Poon 
AuthorDate: Sat Dec 1 01:55:34 2018 +

PHOENIX-4781 Create artifact jar so that shaded jar replaces it properly
---
 phoenix-client/pom.xml | 9 +++--
 phoenix-server/pom.xml | 9 +++--
 2 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 83c7ad9..cfed3ce 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -57,12 +57,9 @@
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-jar-plugin</artifactId>
-    <executions>
-      <execution>
-        <id>default-jar</id>
-        <phase>none</phase>
-      </execution>
-    </executions>
+    <configuration>
+      <finalName>phoenix-${project.version}-client</finalName>
+    </configuration>
   </plugin>
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
diff --git a/phoenix-server/pom.xml b/phoenix-server/pom.xml
index 648e4d1..e6a7afe 100644
--- a/phoenix-server/pom.xml
+++ b/phoenix-server/pom.xml
@@ -61,12 +61,9 @@
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-jar-plugin</artifactId>
-    <executions>
-      <execution>
-        <id>default-jar</id>
-        <phase>none</phase>
-      </execution>
-    </executions>
+    <configuration>
+      <finalName>phoenix-${project.version}-server</finalName>
+    </configuration>
   </plugin>
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>



[phoenix] 04/18: PHOENIX-4763: Changing a base table property value should be reflected in child views (if the property wasn't changed)

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 622fcf4802d83316111fd19203723e74f843f67d
Author: Chinmay Kulkarni 
AuthorDate: Mon Dec 10 05:07:41 2018 +

PHOENIX-4763: Changing a base table property value should be reflected in 
child views (if the property wasn't changed)
---
 .../phoenix/end2end/AlterTableWithViewsIT.java | 117 +--
 .../end2end/ExplainPlanWithStatsEnabledIT.java |   8 +-
 .../apache/phoenix/end2end/PropertiesInSyncIT.java |   6 +-
 .../IndexHalfStoreFileReaderGenerator.java |   3 +-
 .../org/apache/phoenix/compile/DeleteCompiler.java |   2 +-
 .../org/apache/phoenix/compile/JoinCompiler.java   |   2 +-
 .../phoenix/compile/TupleProjectionCompiler.java   |   3 +-
 .../org/apache/phoenix/compile/UpsertCompiler.java |   2 +-
 .../org/apache/phoenix/compile/WhereOptimizer.java |   3 +-
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 134 ++--
 .../phoenix/coprocessor/MetaDataProtocol.java  |  32 +-
 .../coprocessor/generated/MetaDataProtos.java  | 356 ++---
 .../coprocessor/generated/PTableProtos.java|  99 +++---
 .../coprocessor/generated/ServerCachingProtos.java | 122 +++
 .../org/apache/phoenix/index/IndexMaintainer.java  |  16 +-
 .../phoenix/index/PhoenixIndexFailurePolicy.java   |   2 +-
 .../org/apache/phoenix/schema/DelegateTable.java   |  12 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |  25 +-
 .../java/org/apache/phoenix/schema/PTable.java |   6 +-
 .../java/org/apache/phoenix/schema/PTableImpl.java |  56 +++-
 .../java/org/apache/phoenix/util/MetaDataUtil.java |  46 ++-
 .../org/apache/phoenix/util/MetaDataUtilTest.java  | 115 +--
 phoenix-protocol/src/main/MetaDataService.proto|   4 +-
 phoenix-protocol/src/main/PTable.proto |   2 +-
 .../src/main/ServerCachingService.proto|   2 +-
 25 files changed, 739 insertions(+), 436 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index 9e7aaa2..82a119f 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -73,6 +73,8 @@ import org.junit.runners.Parameterized.Parameters;
 import com.google.common.base.Function;
 import com.google.common.collect.Lists;
 
+import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_USE_STATS_FOR_PARALLELIZATION;
+
 @RunWith(Parameterized.class)
 public class AlterTableWithViewsIT extends SplitSystemCatalogIT {
 
@@ -174,41 +176,53 @@ public class AlterTableWithViewsIT extends 
SplitSystemCatalogIT {
 
conn.createStatement().execute(generateDDL("UPDATE_CACHE_FREQUENCY=2", 
ddlFormat));
 viewConn.createStatement().execute("CREATE VIEW " + viewOfTable1 + 
" ( VIEW_COL1 DECIMAL(10,2), VIEW_COL2 VARCHAR ) AS SELECT * FROM " + 
tableName);
 viewConn.createStatement().execute("CREATE VIEW " + viewOfTable2 + 
" ( VIEW_COL1 DECIMAL(10,2), VIEW_COL2 VARCHAR ) AS SELECT * FROM " + 
tableName);
-
-viewConn.createStatement().execute("ALTER VIEW " + viewOfTable2 + 
" SET UPDATE_CACHE_FREQUENCY = 1");
-
-PhoenixConnection phoenixConn = 
conn.unwrap(PhoenixConnection.class);
-PTable table = phoenixConn.getTable(new PTableKey(null, 
tableName));
 PName tenantId = isMultiTenant ? PNameFactory.newName(TENANT1) : 
null;
-assertFalse(table.isImmutableRows());
-assertEquals(2, table.getUpdateCacheFrequency());
+
+// Initially all property values should be the same for the base 
table and its views
+PTable table = conn.unwrap(PhoenixConnection.class).getTable(new 
PTableKey(null, tableName));
 PTable viewTable1 = 
viewConn.unwrap(PhoenixConnection.class).getTable(new PTableKey(tenantId, 
viewOfTable1));
+PTable viewTable2 = 
viewConn.unwrap(PhoenixConnection.class).getTable(new PTableKey(tenantId, 
viewOfTable2));
+assertFalse(table.isImmutableRows());
 assertFalse(viewTable1.isImmutableRows());
+assertFalse(viewTable2.isImmutableRows());
+assertEquals(2, table.getUpdateCacheFrequency());
 assertEquals(2, viewTable1.getUpdateCacheFrequency());
+assertEquals(2, viewTable2.getUpdateCacheFrequency());
+assertNull(table.useStatsForParallelization());
+assertNull(viewTable1.useStatsForParallelization());
+assertNull(viewTable2.useStatsForParallelization());
+
+// Alter a property value for one of the views
+viewConn.createStatement().execute("ALTER VIEW " + viewOfTable2
++ " SET 

[phoenix] 09/18: PHOENIX-5025 Tool to clean up orphan views (addendum)

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit bd4f52f576d11b779a82d89dd20354188adaf850
Author: Kadir 
AuthorDate: Thu Dec 13 01:53:38 2018 +

PHOENIX-5025 Tool to clean up orphan views (addendum)
---
 .../apache/phoenix/end2end/OrphanViewToolIT.java   | 23 +
 .../apache/phoenix/mapreduce/OrphanViewTool.java   | 24 --
 2 files changed, 28 insertions(+), 19 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrphanViewToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrphanViewToolIT.java
index f9a1785..ab78ecd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrphanViewToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrphanViewToolIT.java
@@ -27,9 +27,9 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 import java.io.File;
+import java.io.FileReader;
 import java.io.IOException;
-import java.nio.file.Files;
-import java.nio.file.Paths;
+import java.io.LineNumberReader;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
@@ -206,9 +206,13 @@ public class OrphanViewToolIT extends 
ParallelStatsDisabledIT {
 }
 
 private void verifyLineCount(String fileName, long lineCount) throws 
IOException {
-if (Files.lines(Paths.get(fileName)).count() != lineCount)
-LOG.debug(Files.lines(Paths.get(fileName)).count() + " != " + 
lineCount);
-assertTrue(Files.lines(Paths.get(fileName)).count() == lineCount);
+LineNumberReader reader = new LineNumberReader(new 
FileReader(fileName));
+while (reader.readLine() != null) {
+}
+int count = reader.getLineNumber();
+if (count != lineCount)
+LOG.debug(count + " != " + lineCount);
+assertTrue(count == lineCount);
 }
 
 private void verifyCountQuery(Connection connection, String query, String 
schemaName, long count)
@@ -238,7 +242,6 @@ public class OrphanViewToolIT extends 
ParallelStatsDisabledIT {
 }
 }
 
-
 private void verifyNoChildLink(Connection connection, String 
viewSchemaName) throws Exception {
 // Verify that there is no link in the system child link table
 verifyCountQuery(connection, countChildLinksQuery, viewSchemaName, 0);
@@ -264,6 +267,7 @@ public class OrphanViewToolIT extends 
ParallelStatsDisabledIT {
 schemaName == null ? "IS NULL" : "= '" + schemaName + "'"));
 connection.commit();
 }
+
 @Test
 public void testDeleteBaseTableRows() throws Exception {
 String baseTableName = generateUniqueName();
@@ -438,7 +442,8 @@ public class OrphanViewToolIT extends 
ParallelStatsDisabledIT {
 }
 }
 
-public static String[] getArgValues(boolean clean, boolean identify, 
boolean outputPath, boolean inputPath) {
+public static String[] getArgValues(boolean clean, boolean identify, 
boolean outputPath, boolean inputPath)
+throws InterruptedException{
 final List args = Lists.newArrayList();
 if (outputPath) {
 args.add("-op");
@@ -454,8 +459,10 @@ public class OrphanViewToolIT extends 
ParallelStatsDisabledIT {
 if (identify) {
 args.add("-i");
 }
+final long ageMs = 2000;
+Thread.sleep(ageMs);
 args.add("-a");
-args.add("0");
+args.add(Long.toString(ageMs));
 return args.toArray(new String[0]);
 }
 
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/OrphanViewTool.java 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/OrphanViewTool.java
index a8a30b6..713fb05 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/OrphanViewTool.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/OrphanViewTool.java
@@ -812,17 +812,6 @@ public class OrphanViewTool extends Configured implements 
Tool {
 } catch (IllegalStateException e) {
 printHelpAndExit(e.getMessage(), getOptions());
 }
-
-Properties props = new Properties();
-long scn = System.currentTimeMillis() - ageMs;
-props.setProperty("CurrentSCN", Long.toString(scn));
-connection = ConnectionUtil.getInputConnection(configuration);
-PhoenixConnection phoenixConnection = 
connection.unwrap(PhoenixConnection.class);
-
-if (clean) {
-// Take a snapshot of system tables to be modified
-createSnapshot(phoenixConnection, scn);
-}
 if (outputPath != null) {
 // Create files to log orphan views and links
 for (int i = VIEW; i < ORPHAN_TYPE_COUNT; i++) {
@@ -834,7 +823,20 @@ public class OrphanViewTool extends Configured 

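The rewritten verifyLineCount above drains a LineNumberReader once and reads the final line number, instead of calling Files.lines() on the same file up to three times. A self-contained sketch of that counting approach, with try-with-resources added so the underlying stream is closed (a StringReader stands in here for the test's FileReader):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.LineNumberReader;
import java.io.StringReader;

// Sketch of the line-counting approach adopted in the addendum above:
// consume every line once, then take the reader's final line number.
public class LineCount {

    public static int countLines(BufferedReader in) throws IOException {
        try (LineNumberReader reader = new LineNumberReader(in)) {
            while (reader.readLine() != null) {
                // getLineNumber() is incremented for each line read
            }
            return reader.getLineNumber();
        }
    }

    public static void main(String[] args) throws IOException {
        int n = countLines(new BufferedReader(new StringReader("a\nb\nc")));
        System.out.println(n); // 3
    }
}
```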
[phoenix] 03/18: PHOENIX-4832: Add Canary Test Tool for Phoenix Query Server.

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 1d9073c1a326f4317b0ee2960668c90f0234b003
Author: s.kadam 
AuthorDate: Thu Dec 6 00:11:07 2018 +

PHOENIX-4832: Add Canary Test Tool for Phoenix Query Server.
---
 phoenix-core/pom.xml   |   7 +
 .../org/apache/phoenix/tool/CanaryTestResult.java  |  86 
 .../org/apache/phoenix/tool/PhoenixCanaryTool.java | 477 +
 .../resources/phoenix-canary-file-sink.properties  |  17 +
 .../apache/phoenix/tool/PhoenixCanaryToolTest.java | 140 ++
 5 files changed, 727 insertions(+)

diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 97091b9..f8112fe 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -26,6 +26,7 @@
 
   <properties>
     <top.dir>${project.basedir}/..</top.dir>
+    <argparse4j.version>0.8.1</argparse4j.version>
   </properties>
 
   <dependencies>
@@ -228,6 +229,12 @@
       <artifactId>sqlline</artifactId>
     </dependency>
     <dependency>
+      <groupId>net.sourceforge.argparse4j</groupId>
+      <artifactId>argparse4j</artifactId>
+      <version>${argparse4j.version}</version>
+    </dependency>
+
+    <dependency>
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
     </dependency>
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/tool/CanaryTestResult.java 
b/phoenix-core/src/main/java/org/apache/phoenix/tool/CanaryTestResult.java
new file mode 100644
index 000..b72439c
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/tool/CanaryTestResult.java
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tool;
+
+public class CanaryTestResult {
+
+private boolean isSuccessful;
+private long startTime;
+private long executionTime;
+private String message;
+private String testName;
+private String timestamp;
+private Object miscellaneous;
+
+public Object getMiscellaneous() {
+return miscellaneous;
+}
+
+public void setMiscellaneous(Object miscellaneous) {
+this.miscellaneous = miscellaneous;
+}
+
+public long getStartTime() {
+return startTime;
+}
+
+public void setStartTime(long startTime) {
+this.startTime = startTime;
+}
+
+public String getTimestamp() {
+return timestamp;
+}
+
+public void setTimestamp(String timestamp) {
+this.timestamp = timestamp;
+}
+
+public boolean isSuccessful() {
+return isSuccessful;
+}
+
+public void setSuccessful(boolean successful) {
+isSuccessful = successful;
+}
+
+public long getExecutionTime() {
+return executionTime;
+}
+
+public void setExecutionTime(long executionTime) {
+this.executionTime = executionTime;
+}
+
+public String getMessage() {
+return message;
+}
+
+public void setMessage(String message) {
+this.message = message;
+}
+
+public String getTestName() {
+return testName;
+}
+
+public void setTestName(String testName) {
+this.testName = testName;
+}
+
+}
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java 
b/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
new file mode 100644
index 000..405f54f
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
@@ -0,0 +1,477 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tool;
+
+import com.google.common.base.Throwables;
+import 

[phoenix] branch 4.x-cdh5.16 created (now b129be9)

2019-05-28 Thread pboado

pboado pushed a change to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


  at b129be9  PHOENIX-5059 Use the Datasource v2 api in the spark connector

This branch includes the following new commits:

 new de4e041  PHOENIX-4781 Create artifact jar so that shaded jar replaces 
it properly
 new c360f87  ScanningResultIterator metric RowsScanned not set. 
PHOENIX-5051
 new 1d9073c  PHOENIX-4832: Add Canary Test Tool for Phoenix Query Server.
 new 622fcf4  PHOENIX-4763: Changing a base table property value should be 
reflected in child views (if the property wasn't changed)
 new dff179b  PHOENIX-5025 Tool to clean up orphan views
 new 4de622a  PHOENIX-4983: Allow using a connection with a SCN set to 
write data to tables EXCEPT transactional tables or mutable tables with indexes 
or tables with ROW_TIMESTAMP column.
 new 4db9a6f  PHOENIX-5048 Index Rebuilder does not handle INDEX_STATE 
timestamp check for all index
 new f530f94  PHOENIX-4983: Added missing apache license header.
 new bd4f52f  PHOENIX-5025 Tool to clean up orphan views (addendum)
 new 9c7ee72  PHOENIX-5070 NPE when upgrading Phoenix 4.13.0 to Phoenix 
4.14.1 with hbase-1.x branch in secure setup
 new 81f8503  [PHOENIX-3623] Integrate Omid with Phoenix.
 new 3f17a89  PHOENIX-5074 DropTableWithViewsIT.testDropTableWithChildViews 
is flapping
 new 82172a1  PHOENIX-5074; fix compilation failure.
 new 5873214  PHOENIX-5084 Changes from Transactional Tables are not 
visible to query in different client.
 new 460da61  PHOENIX-4820 Optimize OrderBy for ClientAggregatePlan
 new 0e5a263  PHOENIX-5055 Split mutations batches probably affects 
correctness of index data
 new cea84e8  Changes for CDH 5.16.x
 new b129be9  PHOENIX-5059 Use the Datasource v2 api in the spark connector

The 18 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[phoenix] 10/18: PHOENIX-5070 NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in secure setup

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 9c7ee727aacce0b5d428160ed167345f8febf369
Author: Monani Mihir 
AuthorDate: Fri Dec 14 10:50:17 2018 +

PHOENIX-5070 NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with 
hbase-1.x branch in secure setup
---
 .../java/org/apache/phoenix/coprocessor/PhoenixAccessController.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
index 62c158c..ef26d2c 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
@@ -406,7 +406,7 @@ public class PhoenixAccessController extends 
BaseMetaDataEndpointObserver {
 final List userPermissions = new 
ArrayList();
 try (Connection connection = 
ConnectionFactory.createConnection(env.getConfiguration())) {
 // Merge permissions from all accessController 
coprocessors loaded in memory
-for (BaseMasterAndRegionObserver service : 
accessControllers) {
+for (BaseMasterAndRegionObserver service : 
getAccessControllers()) {
 // Use AccessControlClient API's if the 
accessController is an instance of 
org.apache.hadoop.hbase.security.access.AccessController
 if 
(service.getClass().getName().equals(org.apache.hadoop.hbase.security.access.AccessController.class.getName()))
 {
 
userPermissions.addAll(AccessControlClient.getUserPermissions(connection, 
tableName.getNameAsString()));
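The fix above iterates getAccessControllers() rather than the raw accessControllers field, which is populated lazily and can still be null when this upgrade path runs. A minimal sketch of that lazy-accessor pattern (hypothetical names, not the Phoenix classes):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch (hypothetical names) of the pattern behind the one-line
// fix above: a lazily populated field must be read through an accessor
// that initializes it on first use, or callers risk an NPE.
public class LazyControllers {

    private List<String> accessControllers; // null until first use

    private List<String> getAccessControllers() {
        if (accessControllers == null) {
            // Stand-in for discovering loaded coprocessors on demand.
            accessControllers = new ArrayList<>();
            accessControllers.add("org.apache.hadoop.hbase.security.access.AccessController");
        }
        return accessControllers;
    }

    public int countControllers() {
        int n = 0;
        // Iterating the accessor is always safe; iterating the raw field
        // before first initialization would throw NullPointerException.
        for (String controller : getAccessControllers()) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(new LazyControllers().countControllers()); // 1
    }
}
```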



[phoenix] 07/18: PHOENIX-5048 Index Rebuilder does not handle INDEX_STATE timestamp check for all index

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 4db9a6fb614a9d39130fe764adf52d92fb1ec8f7
Author: Monani Mihir 
AuthorDate: Fri Dec 14 12:45:55 2018 +

PHOENIX-5048 Index Rebuilder does not handle INDEX_STATE timestamp check 
for all index

Signed-off-by: Geoffrey Jacoby 
---
 .../coprocessor/MetaDataRegionObserver.java| 35 +-
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
index 4968525..4045d47 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
@@ -512,20 +512,27 @@ public class MetaDataRegionObserver extends 
BaseRegionObserver {
String 
indexTableFullName = SchemaUtil.getTableName(

indexPTable.getSchemaName().getString(),

indexPTable.getTableName().getString());
-   if (scanEndTime 
== latestUpperBoundTimestamp) {
-   
IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.ACTIVE, 0L, 
latestUpperBoundTimestamp);
-   
batchExecutedPerTableMap.remove(dataPTable.getName());
-LOG.info("Making Index:" + 
indexPTable.getTableName() + " active after rebuilding");
-   } else {
-   // 
Increment timestamp so that client sees updated disable timestamp
-IndexUtil.updateIndexState(conn, 
indexTableFullName, indexPTable.getIndexState(), scanEndTime * 
signOfDisableTimeStamp, latestUpperBoundTimestamp);
-   Long 
noOfBatches = batchExecutedPerTableMap.get(dataPTable.getName());
-   if 
(noOfBatches == null) {
-   
noOfBatches = 0l;
-   }
-   
batchExecutedPerTableMap.put(dataPTable.getName(), ++noOfBatches);
-   
LOG.info("During Round-robin build: Successfully updated index disabled 
timestamp  for "
-   
+ indexTableFullName + " to " + scanEndTime);
+   try {
+   if 
(scanEndTime == latestUpperBoundTimestamp) {
+   
IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.ACTIVE, 0L,
+   
latestUpperBoundTimestamp);
+   
batchExecutedPerTableMap.remove(dataPTable.getName());
+   
LOG.info("Making Index:" + indexPTable.getTableName() + " active after 
rebuilding");
+   } else {
+   // 
Increment timestamp so that client sees updated disable timestamp
+   
IndexUtil.updateIndexState(conn, indexTableFullName, 
indexPTable.getIndexState(),
+   
scanEndTime * signOfDisableTimeStamp, latestUpperBoundTimestamp);
+   Long 
noOfBatches = batchExecutedPerTableMap.get(dataPTable.getName());
+   if 
(noOfBatches == null) {
+   
noOfBatches = 0l;
+   }
+   

[phoenix] 06/18: PHOENIX-4983: Allow using a connection with a SCN set to write data to tables EXCEPT transactional tables or mutable tables with indexes or tables with ROW_TIMESTAMP column.

2019-05-28 Thread pboado

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 4de622ab30d3f5aeb755ffaf786ec9ec4cdd3ba1
Author: s.kadam 
AuthorDate: Mon Dec 10 22:40:17 2018 +

PHOENIX-4983: Allow using a connection with a SCN set to write data to 
tables EXCEPT transactional tables or mutable tables with indexes or tables 
with ROW_TIMESTAMP column.
---
 .../apache/phoenix/end2end/UpsertWithSCNIT.java| 139 +
 .../org/apache/phoenix/compile/UpsertCompiler.java |  23 +++-
 .../apache/phoenix/exception/SQLExceptionCode.java |  13 +-
 .../org/apache/phoenix/jdbc/PhoenixConnection.java |   2 +-
 4 files changed, 172 insertions(+), 5 deletions(-)
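The commit allows a connection with a `CurrentSCN` set to write to plain (non-transactional, index-free) tables. A minimal client-side sketch of how such a connection is configured; the timestamp, JDBC URL, and table name in the comments are illustrative placeholders, not part of this commit:

```java
import java.util.Properties;

public class ScnPropsSketch {
    // Build connection properties that pin reads/writes to a fixed timestamp.
    // Phoenix interprets the "CurrentSCN" property value as an HBase cell timestamp.
    static Properties scnProps(long timestamp) {
        Properties props = new Properties();
        props.setProperty("CurrentSCN", Long.toString(timestamp));
        return props;
    }

    public static void main(String[] args) {
        Properties props = scnProps(1559088000000L);
        System.out.println(props.getProperty("CurrentSCN"));
        // Against a running cluster one would then do (placeholder URL and table):
        // Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
        // conn.createStatement().executeUpdate(
        //         "UPSERT INTO METRICS (METRIC_ID, METRIC_VALUE) VALUES ('abc', 'x')");
    }
}
```

After this change, the upsert above succeeds on a plain table but still raises a SQLException on transactional tables, mutable tables with indexes, or tables with a ROW_TIMESTAMP column.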

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertWithSCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertWithSCNIT.java
new file mode 100644
index 000..6f231ff
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertWithSCNIT.java
@@ -0,0 +1,139 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Properties;
+
+import static org.hamcrest.CoreMatchers.containsString;
+import static org.hamcrest.core.Is.is;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+
+public class UpsertWithSCNIT extends ParallelStatsDisabledIT {
+
+@Rule
+public final ExpectedException exception = ExpectedException.none();
+Properties props = null;
+PreparedStatement prep = null;
+String tableName =null;
+
+    private void helpTestUpserWithSCNIT(boolean rowColumn, boolean txTable,
+            boolean mutable, boolean local, boolean global) throws SQLException {
+
+        tableName = generateUniqueName();
+        String indx;
+        String createTable = "CREATE TABLE " + tableName + " ("
+                + (rowColumn ? "CREATED_DATE DATE NOT NULL, " : "")
+                + "METRIC_ID CHAR(15) NOT NULL,METRIC_VALUE VARCHAR(50) CONSTRAINT PK PRIMARY KEY("
+                + (rowColumn ? "CREATED_DATE ROW_TIMESTAMP, " : "") + "METRIC_ID)) "
+                + (mutable ? "IMMUTABLE_ROWS=false" : "")
+                + (txTable ? "TRANSACTION_PROVIDER='TEPHRA',TRANSACTIONAL=true" : "");
+        props = new Properties();
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        conn.createStatement().execute(createTable);
+
+        if (local || global) {
+            indx = "CREATE " + (local ? "LOCAL " : "") + "INDEX " + tableName + "_idx ON "
+                    + tableName + " (METRIC_VALUE)";
+            conn.createStatement().execute(indx);
+        }
+
+        props.setProperty("CurrentSCN", Long.toString(System.currentTimeMillis()));
+        conn = DriverManager.getConnection(getUrl(), props);
+        conn.setAutoCommit(true);
+        String upsert = "UPSERT INTO " + tableName + " (METRIC_ID, METRIC_VALUE) VALUES (?,?)";
+        prep = conn.prepareStatement(upsert);
+        prep.setString(1, "abc");
+        prep.setString(2, "This is the first comment!");
+    }
+
+    @Test // See https://issues.apache.org/jira/browse/PHOENIX-4983
+    public void testUpsertOnSCNSetTxnTable() throws SQLException {
+
+        helpTestUpserWithSCNIT(false, true, false, false, false);
+        exception.expect(SQLException.class);
+        exception.expectMessage(containsString(String.valueOf(
+                SQLExceptionCode
+                        .CANNOT_SPECIFY_SCN_FOR_TXN_TABLE
+                        .getErrorCode())));
+        prep.executeUpdate();
+    }
+
+    @Test
+    public void testUpsertOnSCNSetMutTableWithoutIdx() throws Exception {
+
+        helpTestUpserWithSCNIT(false, false, true, false, false);
+        prep.executeUpdate();
+        props = new Properties();
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName);
+        assertTrue(rs.next());
+        assertEquals("abc", rs.getString(1));
+        assertEquals("This is the first comment!", rs.getString(2));
+        assertFalse(rs.next());
+    }
+
+    @Test
+    public void testUpsertOnSCNSetTable() throws Exception {
+
+        helpTestUpserWithSCNIT(false, false, false, false, false);
+        prep.executeUpdate();
+        props = new Properties();
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        ResultSet rs =

[phoenix] 14/18: PHOENIX-5084 Changes from Transactional Tables are not visible to query in different client.

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-cdh5.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 58732144d28e1af4143b6554c0f01f7e1e0f1669
Author: Lars Hofhansl 
AuthorDate: Wed Jan 2 08:52:52 2019 +

PHOENIX-5084 Changes from Transactional Tables are not visible to query in 
different client.
---
 .../org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java | 12 
 1 file changed, 12 insertions(+)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 3ff62e2..61ba0fc 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -718,6 +718,7 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 @Override
 public ResultSet getColumns(String catalog, String schemaPattern, String tableNamePattern, String columnNamePattern)
 throws SQLException {
+try {
 boolean isTenantSpecificConnection = connection.getTenantId() != null;
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
 ResultSet rs = getTables(catalog, schemaPattern, tableNamePattern, null);
@@ -893,6 +894,11 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 }
 }
 return new PhoenixResultSet(new MaterializedResultIterator(tuples), GET_COLUMNS_ROW_PROJECTOR, new StatementContext(new PhoenixStatement(connection), false));
+} finally {
+if (connection.getAutoCommit()) {
+connection.commit();
+}
+}
 }
 
 @Override
@@ -1142,6 +1148,7 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 if (tableName == null || tableName.length() == 0) {
 return emptyResultSet;
 }
+try {
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
 ResultSet rs = getTables(catalog, schemaName, tableName, null);
 while (rs.next()) {
@@ -1219,6 +1226,11 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 return new PhoenixResultSet(new MaterializedResultIterator(tuples),
 GET_PRIMARY_KEYS_ROW_PROJECTOR,
 new StatementContext(new PhoenixStatement(connection), false));
+} finally {
+if (connection.getAutoCommit()) {
+connection.commit();
+}
+}
 }
 
 @Override


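The fix wraps each metadata call in try/finally so that, when autoCommit is on, the implicit transaction opened by the internal `getTables` scan is committed before returning; otherwise a second client could miss changes to transactional tables. A schematic, self-contained model of that pattern (FakeConnection is an illustrative stand-in, not the real PhoenixConnection API):

```java
public class AutoCommitOnMetadataSketch {
    // Minimal stand-in for a connection with an autoCommit flag.
    static class FakeConnection {
        boolean autoCommit = true;
        int commits = 0;
        boolean getAutoCommit() { return autoCommit; }
        void commit() { commits++; }
    }

    // Mirrors the shape of the patched getColumns()/getPrimaryKeys():
    // do the metadata work, then flush the implicit transaction in finally.
    static String getColumnsLike(FakeConnection conn) {
        try {
            // ... build and return the metadata result set ...
            return "columns";
        } finally {
            if (conn.getAutoCommit()) {
                conn.commit();
            }
        }
    }

    public static void main(String[] args) {
        FakeConnection conn = new FakeConnection();
        getColumnsLike(conn);
        System.out.println(conn.commits); // 1
    }
}
```

The finally block runs on both the normal and exceptional paths, so the commit cannot be skipped by an early return inside the metadata scan.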

[phoenix] branch 4.14-cdh5.14 updated (7e43ebb -> 98b689e)

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a change to branch 4.14-cdh5.14
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 7e43ebb  PHOENIX-5056 Ignore failing IT
 new 0de7307  PHOENIX-4872: BulkLoad has bug when loading on 
single-cell-array-with-offsets table.
 new 79ff982  modify index state based on client version to support old 
clients
 new 481fd38  PHOENIX-5126 RegionScanner leak leading to store files not 
getting cleared
 new 94379b7  PHOENIX-4900 Modify MAX_MUTATION_SIZE_EXCEEDED and 
MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning 
autocommit on for deletes
 new 2302442  PHOENIX-5207 Create index if not exists fails incorrectly if 
table has 'maxIndexesPerTable' indexes already
 new e0a8b87  PHOENIX-5122: PHOENIX-4322 breaks client backward 
compatibility
 new 523bd30  PHOENIX-5217 Incorrect result for COUNT DISTINCT limit
 new 5c6955d  PHOENIX-5246: PhoenixAccessControllers.getAccessControllers() 
method is not correctly implementing the double-checked locking
 new cae2069  PHOENIX-5173: LIKE and ILIKE statements return empty result 
list for search without wildcard
 new de1f9b4  PHOENIX-5008: CQSI.init should not bubble up 
RetriableUpgradeException to client in case of an UpgradeRequiredException
 new 3be996d  PHOENIX-5008 (Addendum): CQSI.init should not bubble up 
RetriableUpgradeException to client in case of an UpgradeRequiredException
 new 0aa0a7e  PHOENIX-5005 Server-side delete / upsert-select potentially 
blocked after a split
 new 2b0d68d  PHOENIX-4750 Resolve server customizers and provide them to 
Avatica
 new 6d6ccea  PHOENIX-4755 Provide an option to plugin custom avatica 
server config in PQS
 new bac60e3  PHOENIX-3991 ROW_TIMESTAMP on TIMESTAMP column type throws 
ArrayOutOfBound when upserting without providing a value.
 new 151f816  PHOENIX-4834 PhoenixMetricsLog interface methods should not 
depend on specific logger
 new adde363  PHOENIX-4835 LoggingPhoenixConnection should log metrics upon 
connection close
 new 11ebb0f  PHOENIX-4853 Add sql statement to PhoenixMetricsLog interface 
for query level metrics logging
 new 685d9a0  PHOENIX-4854 Make LoggingPhoenixResultSet idempotent when 
logging metrics
 new 0ccb110  PHOENIX-4864 Fix NullPointerException while Logging some DDL 
Statements
 new db087e9  PHOENIX-4870 LoggingPhoenixConnection should log metrics when 
AutoCommit is set to True.
 new d1e234f  PHOENIX-4989 Include disruptor jar in shaded dependency
 new 06a94be  PHOENIX-4781 Create artifact jar so that shaded jar replaces 
it properly
 new 11375b9  PHOENIX-5048 Index Rebuilder does not handle INDEX_STATE 
timestamp check for all index
 new 017da22  PHOENIX-5070 NPE when upgrading Phoenix 4.13.0 to Phoenix 
4.14.1 with hbase-1.x branch in secure setup
 new 9f0616a  PHOENIX-5111: Null Pointer exception fix in index tool due to 
outputpath being null when direct option is supplied
 new 6b15799  PHOENIX-5094 increment pending disable count for index when 
rebuild starts
 new 04726ff  PHOENIX-4993 close cache connections when region server is 
going down
 new 68d956b  Add tenantId param to IndexTool
 new 7681bc1  PHOENIX-5080 Index becomes Active during Partial Index 
Rebuilder if Index Failure happens
 new dbc308e  PHOENIX-5025 Tool to clean up orphan views
 new 6dcf219  PHOENIX-5025 Tool to clean up orphan views (addendum)
 new 88e2ccf  PHOENIX-5247 DROP TABLE and DROP VIEW commands fail to drop 
second or higher level child views
 new 64437e8  PHOENIX-5137 check region close before commiting a batch for 
index rebuild
 new 5a66d58  PHOENIX-4832: Add Canary Test Tool for Phoenix Query Server.
 new 9f072eb  PHOENIX-5172: Harden the PQS canary synth test tool with 
retry mechanism and more logging
 new 654bb29  PHOENIX-5188 - IndexedKeyValue should populate KeyValue fields
 new 7c7ade4  PHOENIX-5124 PropertyPolicyProvider should not evaluate 
default hbase config properties
 new 4a32d77  PHOENIX-4822 Ensure the provided timezone is used client-side 
(Jaanai Zhang)
 new 51815e6  PHOENIX-4822 Fixed Spelling.
 new 06b7b9d  PHOENIX-5194 Thread Cache is not update for Index retries in 
for MutationState#send()#doMutation()
 new 9cb89e2  PHOENIX-5018 Index mutations created by UPSERT SELECT will 
have wrong timestamps
 new f7d3019  PHOENIX-5184: HBase and Phoenix connection leaks in Indexing 
code path, OrphanViewTool and PhoenixConfigurationUtil
 new 8ba7382  PhoenixResultSet#next() closes the result set if scanner 
returns null
 new 01e0e31  PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
 new 736e2e4  PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE 
(Addendum)
 new 1f5bffa  Add missing license
 new 8e636a1  Set version to 4.14.2-cdh5.14
 new c18da31  PHOENIX-5195 

svn commit: r1860306 - in /phoenix/site: publish/release.html source/src/site/markdown/release.md

2019-05-28 Thread tdsilva
Author: tdsilva
Date: Tue May 28 22:20:19 2019
New Revision: 1860306

URL: http://svn.apache.org/viewvc?rev=1860306&view=rev
Log:
Fix typo

Modified:
phoenix/site/publish/release.html
phoenix/site/source/src/site/markdown/release.md

Modified: phoenix/site/publish/release.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/release.html?rev=1860306&r1=1860305&r2=1860306&view=diff
==
--- phoenix/site/publish/release.html (original)
+++ phoenix/site/publish/release.html Tue May 28 22:20:19 2019
@@ -212,7 +212,7 @@ git tag -a v4.11.0-HBase-0.98 v4.11.0-HB
  
 
   Remove any obsolete releases on https://dist.apache.org/repos/dist/release/phoenix;>https://dist.apache.org/repos/dist/release/phoenix
 given the current release. 
-   Ensure you ~/.m2/settings.xml is setup correctly:  
+   Ensure your ~/.m2/settings.xml is setup correctly:  
 
 <server>
   <id>apache.releases.https</id>

Modified: phoenix/site/source/src/site/markdown/release.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/release.md?rev=1860306&r1=1860305&r2=1860306&view=diff
==
--- phoenix/site/source/src/site/markdown/release.md (original)
+++ phoenix/site/source/src/site/markdown/release.md Tue May 28 22:20:19 2019
@@ -35,7 +35,7 @@ Follow the instructions. Signed binary a
 
 3. Remove any obsolete releases on 
https://dist.apache.org/repos/dist/release/phoenix given the current release.
 
-4. Ensure you ~/.m2/settings.xml is setup correctly: 
+4. Ensure your ~/.m2/settings.xml is setup correctly: 
 
 ```





svn commit: r1860305 - in /phoenix/site: publish/release.html source/src/site/markdown/release.md

2019-05-28 Thread tdsilva
Author: tdsilva
Date: Tue May 28 22:19:29 2019
New Revision: 1860305

URL: http://svn.apache.org/viewvc?rev=1860305&view=rev
Log:
Update how to release to include instructions on ~/.m2/settings.xml

Modified:
phoenix/site/publish/release.html
phoenix/site/source/src/site/markdown/release.md

Modified: phoenix/site/publish/release.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/release.html?rev=1860305&r1=1860304&r2=1860305&view=diff
==
--- phoenix/site/publish/release.html (original)
+++ phoenix/site/publish/release.html Tue May 28 22:19:29 2019
@@ -1,7 +1,7 @@
 
 
 
 
@@ -212,6 +212,15 @@ git tag -a v4.11.0-HBase-0.98 v4.11.0-HB
  
 
   Remove any obsolete releases on https://dist.apache.org/repos/dist/release/phoenix;>https://dist.apache.org/repos/dist/release/phoenix
 given the current release. 
+   Ensure you ~/.m2/settings.xml is setup correctly:  
+
+<server>
+  <id>apache.releases.https</id>
+  <username><!-- YOUR APACHE USERNAME --></username>
+  <password><!-- YOUR APACHE PASSWORD --></password>
+</server>
+ 
+
Release to maven (remove release directory from local repro if 
present):  
 
 
@@ -480,7 +489,7 @@ mvn versions:set -DnewVersion=4.12.0-HBa


Back to 
top
-   Copyright 2018 http://www.apache.org;>Apache Software Foundation. All Rights 
Reserved.
+   Copyright 2019 http://www.apache.org;>Apache Software Foundation. All Rights 
Reserved.




Modified: phoenix/site/source/src/site/markdown/release.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/release.md?rev=1860305&r1=1860304&r2=1860305&view=diff
==
--- phoenix/site/source/src/site/markdown/release.md (original)
+++ phoenix/site/source/src/site/markdown/release.md Tue May 28 22:19:29 2019
@@ -35,15 +35,25 @@ Follow the instructions. Signed binary a
 
 3. Remove any obsolete releases on 
https://dist.apache.org/repos/dist/release/phoenix given the current release.
 
-4. Release to maven (remove release directory from local repro if present): 
+4. Ensure you ~/.m2/settings.xml is setup correctly: 
+
+```
+<server>
+  <id>apache.releases.https</id>
+  <username><!-- YOUR APACHE USERNAME --></username>
+  <password><!-- YOUR APACHE PASSWORD --></password>
+</server>
+```
+
+5. Release to maven (remove release directory from local repro if present): 
 
 
 mvn clean deploy gpg:sign -DperformRelease=true 
-Dgpg.passphrase=[your_pass_phrase_here]
 -Dgpg.keyname=[your_key_here] -DskipTests -P release -pl 
phoenix-core,phoenix-pig,phoenix-tracing-webapp,
 
phoenix-queryserver,phoenix-spark,phoenix-flume,phoenix-pherf,phoenix-queryserver-client,phoenix-hive,phoenix-client,phoenix-server
 -am
 
-5. Go to https://repository.apache.org/#stagingRepositories and 
close -> release the staged artifacts.
-6. Set version back to upcoming SNAPSHOT and commit: 
+6. Go to https://repository.apache.org/#stagingRepositories and 
close -> release the staged artifacts.
+7. Set version back to upcoming SNAPSHOT and commit: 
 
 
 mvn versions:set -DnewVersion=4.12.0-HBase-0.98-SNAPSHOT 
-DgenerateBackupPoms=false




svn commit: r1860304 - in /phoenix/site: publish/download.html source/src/site/markdown/download.md

2019-05-28 Thread tdsilva
Author: tdsilva
Date: Tue May 28 22:08:40 2019
New Revision: 1860304

URL: http://svn.apache.org/viewvc?rev=1860304&view=rev
Log:
Fix typo

Modified:
phoenix/site/publish/download.html
phoenix/site/source/src/site/markdown/download.md

Modified: phoenix/site/publish/download.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/download.html?rev=1860304&r1=1860303&r2=1860304&view=diff
==
--- phoenix/site/publish/download.html (original)
+++ phoenix/site/publish/download.html Tue May 28 22:08:40 2019
@@ -183,8 +183,8 @@
 

svn commit: r1860303 - in /phoenix/site: publish/download.html publish/language/datatypes.html publish/language/functions.html publish/language/index.html source/src/site/markdown/download.md

2019-05-28 Thread tdsilva
Author: tdsilva
Date: Tue May 28 22:07:43 2019
New Revision: 1860303

URL: http://svn.apache.org/viewvc?rev=1860303&view=rev
Log:
Update download page for 4.14.2 release

Modified:
phoenix/site/publish/download.html
phoenix/site/publish/language/datatypes.html
phoenix/site/publish/language/functions.html
phoenix/site/publish/language/index.html
phoenix/site/source/src/site/markdown/download.md

Modified: phoenix/site/publish/download.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/download.html?rev=1860303&r1=1860302&r2=1860303&view=diff
==
--- phoenix/site/publish/download.html (original)
+++ phoenix/site/publish/download.html Tue May 28 22:07:43 2019
@@ -1,7 +1,7 @@
 
 
 
 
@@ -166,7 +166,7 @@
  Phoenix Downloads
  
 The below table lists mirrored release artifacts and their associated 
hashes and signatures available ONLY at apache.org. The keys used to sign 
releases can be found in our published https://www.apache.org/dist/phoenix/KEYS;>KEYS file. See our 
installation instructions here, our release 
notes here, and a list of fixes and new 
features https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334393&projectId=12315120;>here.
 Follow https://www.apache.org/dyn/closer.cgi#verify;>Verify the Integrity of the 
Files for how to verify your mirrored downloads. 
-Current release 4.14.1 can run on Apache HBase 0.98, 1.1, 1.2, 1.3 and 1.4. 
CDH HBase 5.11, 5.12, 5.13 and 5.14 is supported by 4.14.0. Apache HBase 2.0 is 
supported by 5.0.0. Please follow the appropriate link depending on your HBase 
version.  
+Current release 4.14.2 can run on Apache HBase 1.3 and 1.4. CDH HBase 5.11, 
5.12, 5.13 and 5.14 is supported by 4.14.0. Apache HBase 2.0 is supported by 
5.0.0. Please follow the appropriate link depending on your HBase version.  
  
   

@@ -183,11 +183,8 @@
 

[phoenix] 01/04: PHOENIX-4296: reverse scan in ChunkedResultIterator

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit c98c89f2fbbc34817cef1e07d030c65e84cc1d66
Author: chfeng 
AuthorDate: Thu May 16 11:41:41 2019 +0100

PHOENIX-4296: reverse scan in ChunkedResultIterator
---
 .../phoenix/iterate/ChunkedResultIterator.java | 13 +++-
 .../phoenix/iterate/ChunkedResultIteratorTest.java | 73 ++
 2 files changed, 83 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
index acb6c04..1aab2d5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
@@ -58,6 +58,7 @@ public class ChunkedResultIterator implements PeekingResultIterator {
 
 private final ParallelIteratorFactory delegateIteratorFactory;
 private ImmutableBytesWritable lastKey = new ImmutableBytesWritable();
+private ImmutableBytesWritable prevLastKey = new ImmutableBytesWritable();
 private final StatementContext context;
 private final TableRef tableRef;
 private final long chunkSize;
@@ -96,8 +97,9 @@ public class ChunkedResultIterator implements PeekingResultIterator {
 }
 }
 
-private ChunkedResultIterator(ParallelIteratorFactory delegateIteratorFactory, MutationState mutationState,
-   StatementContext context, TableRef tableRef, Scan scan, long chunkSize, ResultIterator scanner, QueryPlan plan) throws SQLException {
+private ChunkedResultIterator(ParallelIteratorFactory delegateIteratorFactory,
+        MutationState mutationState, StatementContext context, TableRef tableRef, Scan scan,
+        long chunkSize, ResultIterator scanner, QueryPlan plan) throws SQLException {
 this.delegateIteratorFactory = delegateIteratorFactory;
 this.context = context;
 this.tableRef = tableRef;
@@ -138,8 +140,12 @@ public class ChunkedResultIterator implements PeekingResultIterator {
 if (resultIterator.peek() == null && lastKey != null) {
 resultIterator.close();
 scan = ScanUtil.newScan(scan);
-if(ScanUtil.isLocalIndex(scan)) {
+if (ScanUtil.isLocalIndex(scan)) {
 scan.setAttribute(SCAN_START_ROW_SUFFIX, ByteUtil.copyKeyBytesIfNecessary(lastKey));
+} else if (ScanUtil.isReversed(scan)) {
+// lastKey is the last row the previous iterator met but did not return.
+// For a reverse scan, use prevLastKey as the new stopRow.
+scan.setStopRow(ByteUtil.copyKeyBytesIfNecessary(prevLastKey));
 } else {
 scan.setStartRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
 }
@@ -212,6 +218,7 @@ public class ChunkedResultIterator implements PeekingResultIterator {
 byte[] currentKey = lastKey.get();
 int offset = lastKey.getOffset();
 int length = lastKey.getLength();
+prevLastKey.set(lastKey.copyBytes());
 newTuple.getKey(lastKey);

 return Bytes.compareTo(currentKey, offset, length, lastKey.get(), lastKey.getOffset(), lastKey.getLength()) != 0;
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/iterate/ChunkedResultIteratorTest.java
 
b/phoenix-core/src/test/java/org/apache/phoenix/iterate/ChunkedResultIteratorTest.java
new file mode 100644
index 000..18402f0
--- /dev/null
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/iterate/ChunkedResultIteratorTest.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.iterate;
+
+import static org.apache.phoenix.util.TestUtil.PHOENIX_JDBC_URL;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+import 

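The fix above keeps two boundary keys: the last row key handed out (`lastKey`) and the one seen just before it (`prevLastKey`). Resuming a forward scan uses `lastKey` as the new (inclusive) start row; resuming a reverse scan must instead bound with `prevLastKey` as the (exclusive) stop row. A self-contained sketch of that bookkeeping, with illustrative names rather than the actual Phoenix types:

```java
import java.util.Arrays;

public class ChunkBoundarySketch {
    byte[] lastKey;     // last row key seen by the current chunk
    byte[] prevLastKey; // row key seen just before lastKey

    // Record a row key; mirrors how ChunkedResultIterator tracks chunk boundaries.
    void observe(byte[] rowKey) {
        prevLastKey = lastKey;
        lastKey = Arrays.copyOf(rowKey, rowKey.length);
    }

    // Forward scans resume from the last key handed out (start row, inclusive).
    byte[] resumeStartRowForward() { return lastKey; }

    // Reverse scans resume by bounding with the key *before* the last one
    // (stop row, exclusive) -- the essence of the PHOENIX-4296 change.
    byte[] resumeStopRowReverse() { return prevLastKey; }

    public static void main(String[] args) {
        ChunkBoundarySketch s = new ChunkBoundarySketch();
        s.observe(new byte[] {3}); // a reverse scan sees keys in descending order
        s.observe(new byte[] {2});
        System.out.println(Arrays.toString(s.resumeStopRowReverse())); // [3]
    }
}
```

Using `lastKey` as the stop row for a reverse scan would silently drop the row at the chunk boundary, since HBase stop rows are exclusive; using the previous key keeps it in the next chunk.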
[phoenix] 04/04: PHOENIX-5112 Simplify QueryPlan selection in Phoenix.

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 68d1a04f1c80212a6fe6dd9574de9c34ad39b779
Author: Lars Hofhansl 
AuthorDate: Sat May 25 02:55:09 2019 +0100

PHOENIX-5112 Simplify QueryPlan selection in Phoenix.
---
 .../org/apache/phoenix/optimize/QueryOptimizer.java| 18 --
 1 file changed, 18 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
index 43a5950..4f0dfeb 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
@@ -324,24 +324,6 @@ public class QueryOptimizer {
 
 QueryPlan plan = compiler.compile();
 
-boolean optimizedSort =
-plan.getOrderBy().getOrderByExpressions().isEmpty()
-&& 
!dataPlan.getOrderBy().getOrderByExpressions().isEmpty()
-|| plan.getGroupBy().isOrderPreserving()
-&& 
!dataPlan.getGroupBy().isOrderPreserving();
-
-// If query doesn't have where clause, or the planner didn't 
add any (bound) scan ranges, and some of
-// columns to project/filter are missing in the index then we 
need to get missing columns from main table
-// for each row in local index. It's like full scan of both 
local index and data table which is inefficient.
-// Then we don't use the index. If all the columns to project 
are present in the index 
-// then we can use the index even the query doesn't have where 
clause.
-// We'll use the index anyway if it allowed us to avoid a sort 
operation.
-if (index.getIndexType() == IndexType.LOCAL
-&& (indexSelect.getWhere() == null
-|| 
plan.getContext().getScanRanges().getBoundRanges().size() == 1)
-&& !plan.getContext().getDataColumns().isEmpty() && 
!optimizedSort) {
-return null;
-}
 indexTableRef = plan.getTableRef();
 indexTable = indexTableRef.getTable();
 indexState = indexTable.getIndexState();



[phoenix] 02/04: PHOENIX-5291 Ensure that Phoenix coprocessor close all scanners.

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit a5f1b5c26b9d15c68244bc62faa57b70361d209b
Author: Lars Hofhansl 
AuthorDate: Thu May 23 06:40:34 2019 +0100

PHOENIX-5291 Ensure that Phoenix coprocessor close all scanners.
---
 .../coprocessor/UngroupedAggregateRegionObserver.java   | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index dc7567b..dc61a98 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -1175,7 +1175,7 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 long rowCount = 0; // in case of async, we report 0 as number of rows updated
 StatisticsCollectionRunTracker statsRunTracker =
 StatisticsCollectionRunTracker.getInstance(config);
-boolean runUpdateStats = statsRunTracker.addUpdateStatsCommandRegion(region.getRegionInfo(),scan.getFamilyMap().keySet());
+final boolean runUpdateStats = statsRunTracker.addUpdateStatsCommandRegion(region.getRegionInfo(),scan.getFamilyMap().keySet());
 if (runUpdateStats) {
 if (!async) {
 rowCount = callable.call();
@@ -1204,8 +1204,11 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver

 @Override
 public void close() throws IOException {
-// No-op because we want to manage closing of the inner scanner ourselves.
-// This happens inside StatsCollectionCallable.
+// If we ran/scheduled StatsCollectionCallable the delegate
+// scanner is closed there. Otherwise close it here.
+if (!runUpdateStats) {
+super.close();
+}
 }
 
 @Override
@@ -1442,6 +1445,14 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 + fullTableName);
 Scan scan = new Scan();
 scan.setMaxVersions();
+
+// close the passed scanner since we are returning a brand-new one
+try {
+if (s != null) {
+s.close();
+}
+} catch (IOException ignore) {}
+
 return new StoreScanner(store, store.getScanInfo(), scan, scanners,
 ScanType.COMPACT_RETAIN_DELETES, store.getSmallestReadPoint(),
 HConstants.OLDEST_TIMESTAMP);


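The close() change above makes the wrapper delegate close() only when the stats-collection task was never scheduled, since that task owns closing the inner scanner otherwise. A schematic, self-contained model of the ownership rule (class names here are illustrative, not the real coprocessor types):

```java
public class ConditionalCloseSketch {
    // Minimal stand-in for the delegate region scanner.
    static class InnerScanner {
        boolean closed = false;
        void close() { closed = true; }
    }

    // Stand-in for the stats-collecting wrapper scanner.
    static class StatsScanner {
        final InnerScanner delegate;
        final boolean runUpdateStats;

        StatsScanner(InnerScanner delegate, boolean runUpdateStats) {
            this.delegate = delegate;
            this.runUpdateStats = runUpdateStats;
        }

        void close() {
            // If the stats task ran or was scheduled, it closes the delegate
            // itself; otherwise close it here (the PHOENIX-5291 pattern).
            if (!runUpdateStats) {
                delegate.close();
            }
        }
    }

    public static void main(String[] args) {
        InnerScanner inner = new InnerScanner();
        new StatsScanner(inner, false).close();
        System.out.println(inner.closed); // true
    }
}
```

Exactly one owner closes the delegate on every path, which is what prevents the leaked RegionScanner (and the store files it pins) that the bug report describes.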

[phoenix] branch 4.x-HBase-1.2 updated (34ffbb9 -> 68d1a04)

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a change to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 34ffbb9  PHOENIX-5231 Configurable Stats Cache
 new c98c89f  PHOENIX-4296: reverse scan in ChunkedResultIterator
 new a5f1b5c  PHOENIX-5291 Ensure that Phoenix coprocessor close all 
scanners.
 new 42511fb  PHOENIX-5297 POM cleanup and de-duplication
 new 68d1a04  PHOENIX-5112 Simplify QueryPlan selection in Phoenix.

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 phoenix-core/pom.xml   | 14 -
 .../UngroupedAggregateRegionObserver.java  | 17 -
 .../phoenix/iterate/ChunkedResultIterator.java | 13 +++-
 .../apache/phoenix/optimize/QueryOptimizer.java| 18 --
 .../phoenix/iterate/ChunkedResultIteratorTest.java | 73 ++
 phoenix-pherf/pom.xml  |  7 ---
 pom.xml|  8 +--
 7 files changed, 101 insertions(+), 49 deletions(-)
 create mode 100644 
phoenix-core/src/test/java/org/apache/phoenix/iterate/ChunkedResultIteratorTest.java



[phoenix] 03/04: PHOENIX-5297 POM cleanup and de-duplication

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 42511fb880151962beb0005d2f514ac5c48acf33
Author: Josh Elser 
AuthorDate: Fri May 24 17:02:11 2019 +0100

PHOENIX-5297 POM cleanup and de-duplication

Signed-off-by: Geoffrey Jacoby 
---
 phoenix-core/pom.xml  | 14 --
 phoenix-pherf/pom.xml |  7 ---
 pom.xml   |  8 
 3 files changed, 4 insertions(+), 25 deletions(-)

diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 3aab0ed..99cab92 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -10,20 +10,6 @@
   Phoenix Core
   Core Phoenix codebase
 
-  <licenses>
-  <license>
-  <name>The Apache Software License, Version 2.0</name>
-  <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
-  <distribution>repo</distribution>
-  </license>
-  </licenses>
-
-  <organization>
-  <name>Apache Software Foundation</name>
-  <url>http://www.apache.org</url>
-  </organization>
-
   
 ${project.basedir}/..
 0.8.1
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index 8640b3a..6463c8f 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -27,13 +27,6 @@
org.apache.phoenix.shaded

 
-   <repositories>
-   <repository>
-   <id>apache release</id>
-   <url>https://repository.apache.org/content/repositories/releases/</url>
-   </repository>
-   </repositories>
-


org.apache.phoenix
diff --git a/pom.xml b/pom.xml
index 83119ce..4ed9b89 100644
--- a/pom.xml
+++ b/pom.xml
@@ -11,7 +11,7 @@
   
 
   <name>The Apache Software License, Version 2.0</name>
-  <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+  <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
   <distribution>repo</distribution>
   
 
@@ -19,7 +19,7 @@
 
   
 <name>Apache Software Foundation</name>
-<url>http://www.apache.org</url>
+<url>https://www.apache.org</url>
   
 
   
@@ -45,7 +45,7 @@
   
 
   
-<connection>scm:git:http://git-wip-us.apache.org/repos/asf/phoenix.git</connection>
+<connection>scm:git:https://git-wip-us.apache.org/repos/asf/phoenix.git</connection>
 <url>https://git-wip-us.apache.org/repos/asf/phoenix.git</url>
 <developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/phoenix.git</developerConnection>
   
@@ -409,7 +409,7 @@
 
   true
   
-<link>http://hbase.apache.org/apidocs/</link>
+<link>https://hbase.apache.org/apidocs/</link>
   
 
 



[phoenix] annotated tag v4.14.2-HBase-1.4 created (now 54d12ee)

2019-05-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a change to annotated tag v4.14.2-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


  at 54d12ee  (tag)
 tagging 473ed83771f64138fec219346fc6c214f98367b3 (tag)
  length 169 bytes
  by Thomas D'Silva
  on Tue May 28 14:50:53 2019 -0700

- Log -
Phoenix v4.14.2-HBase-1.4 release
---

No new revisions were added by this update.



[phoenix] branch 4.14-HBase-1.3 updated: Set version to 4.14.3-HBase-1.3-SNAPSHOT

2019-05-28 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 3573fc2  Set version to 4.14.3-HBase-1.3-SNAPSHOT
3573fc2 is described below

commit 3573fc2ab3ef44633902f501ae4be5d81581711f
Author: Thomas D'Silva 
AuthorDate: Tue May 28 14:48:12 2019 -0700

Set version to 4.14.3-HBase-1.3-SNAPSHOT
---
 phoenix-assembly/pom.xml   | 2 +-
 phoenix-client/pom.xml | 2 +-
 phoenix-core/pom.xml   | 2 +-
 phoenix-flume/pom.xml  | 2 +-
 phoenix-hive/pom.xml   | 2 +-
 phoenix-kafka/pom.xml  | 2 +-
 phoenix-load-balancer/pom.xml  | 2 +-
 phoenix-pherf/pom.xml  | 2 +-
 phoenix-pig/pom.xml| 2 +-
 phoenix-queryserver-client/pom.xml | 2 +-
 phoenix-queryserver/pom.xml| 2 +-
 phoenix-server/pom.xml | 2 +-
 phoenix-spark/pom.xml  | 2 +-
 phoenix-tracing-webapp/pom.xml | 2 +-
 pom.xml| 2 +-
 15 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 7f7d78f..c3f7689 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-assembly</artifactId>
   <name>Phoenix Assembly</name>
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index d0ec982..40e178f 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-client</artifactId>
   <name>Phoenix Client</name>
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 2734056..5267dcf 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-core</artifactId>
   <name>Phoenix Core</name>
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index dc62381..88bc4e1 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-flume</artifactId>
   <name>Phoenix - Flume</name>
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index 2162e8c..66bec76 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-hive</artifactId>
   <name>Phoenix - Hive</name>
diff --git a/phoenix-kafka/pom.xml b/phoenix-kafka/pom.xml
index da89a8a..17c6c98 100644
--- a/phoenix-kafka/pom.xml
+++ b/phoenix-kafka/pom.xml
@@ -26,7 +26,7 @@
 	<parent>
 		<groupId>org.apache.phoenix</groupId>
 		<artifactId>phoenix</artifactId>
-		<version>4.14.2-HBase-1.3</version>
+		<version>4.14.3-HBase-1.3-SNAPSHOT</version>
 	</parent>
 	<artifactId>phoenix-kafka</artifactId>
 	<name>Phoenix - Kafka</name>
diff --git a/phoenix-load-balancer/pom.xml b/phoenix-load-balancer/pom.xml
index df7a50b..4bd6c9d 100644
--- a/phoenix-load-balancer/pom.xml
+++ b/phoenix-load-balancer/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-load-balancer</artifactId>
   <name>Phoenix Load Balancer</name>
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index f5d570f..8ae2c3f 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@
 	<parent>
 		<groupId>org.apache.phoenix</groupId>
 		<artifactId>phoenix</artifactId>
-		<version>4.14.2-HBase-1.3</version>
+		<version>4.14.3-HBase-1.3-SNAPSHOT</version>
 	</parent>
 
 	<artifactId>phoenix-pherf</artifactId>
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index 80901ba..99e6c00 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-pig</artifactId>
   <name>Phoenix - Pig</name>
diff --git a/phoenix-queryserver-client/pom.xml 
b/phoenix-queryserver-client/pom.xml
index 07ec210..5ee6a1a 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-queryserver-client</artifactId>
   <name>Phoenix Query Server Client</name>
diff --git a/phoenix-queryserver/pom.xml b/phoenix-queryserver/pom.xml
index 2e3d567..a2a10da 100644
--- a/phoenix-queryserver/pom.xml
+++ b/phoenix-queryserver/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-queryserver</artifactId>
   <name>Phoenix Query Server</name>
diff --git a/phoenix-server/pom.xml b/phoenix-server/pom.xml
index 3263940..af16c55 100644
--- a/phoenix-server/pom.xml
+++ b/phoenix-server/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.14.2-HBase-1.3</version>
+    <version>4.14.3-HBase-1.3-SNAPSHOT</version>
   </parent>

Build failed in Jenkins: Phoenix | Master #2384

2019-05-28 Thread Apache Jenkins Server
See 


Changes:

[larsh] PHOENIX-5303 Fix index failures with some versions of HBase.

--
[...truncated 1.30 MB...]
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.897 s 
- in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 620.777 
s - in org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.102 s 
- in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.114 s 
- in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.369 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.001 
s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 370.597 
s - in org.apache.phoenix.end2end.join.SortMergeJoinNoSpoolingIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.439 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 289.178 
s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.204 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 943.266 
s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.508 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 927.603 
s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 405.267 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 255.76 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 446.228 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 78, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 748.921 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   HashJoinMoreIT.testBug2961:908
[ERROR] Errors: 
[ERROR]   
MutableIndexSplitForwardScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152
 » StaleRegionBoundaryCache
[ERROR]   
MutableIndexSplitForwardScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152
 » StaleRegionBoundaryCache
[INFO] 
[ERROR] Tests run: 3732, Failures: 1, Errors: 2, Skipped: 11
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.002 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.791 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] 

[phoenix] 01/02: PHOENIX-4296: reverse scan in ChunkedResultIterator

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.14-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 58083d70e1d774aa88283fb16945eb546c0e4f27
Author: chfeng 
AuthorDate: Thu May 16 11:41:41 2019 +0100

PHOENIX-4296: reverse scan in ChunkedResultIterator
---
 .../phoenix/iterate/ChunkedResultIterator.java | 13 +++-
 .../phoenix/iterate/ChunkedResultIteratorTest.java | 73 ++
 2 files changed, 83 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
index acb6c04..1aab2d5 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
@@ -58,6 +58,7 @@ public class ChunkedResultIterator implements 
PeekingResultIterator {
 
 private final ParallelIteratorFactory delegateIteratorFactory;
 private ImmutableBytesWritable lastKey = new ImmutableBytesWritable();
+private ImmutableBytesWritable prevLastKey = new ImmutableBytesWritable();
 private final StatementContext context;
 private final TableRef tableRef;
 private final long chunkSize;
@@ -96,8 +97,9 @@ public class ChunkedResultIterator implements 
PeekingResultIterator {
 }
 }
 
-private ChunkedResultIterator(ParallelIteratorFactory 
delegateIteratorFactory, MutationState mutationState,
-   StatementContext context, TableRef tableRef, Scan scan, long 
chunkSize, ResultIterator scanner, QueryPlan plan) throws SQLException {
+private ChunkedResultIterator(ParallelIteratorFactory 
delegateIteratorFactory,
+MutationState mutationState, StatementContext context, TableRef 
tableRef, Scan scan,
+long chunkSize, ResultIterator scanner, QueryPlan plan) throws 
SQLException {
 this.delegateIteratorFactory = delegateIteratorFactory;
 this.context = context;
 this.tableRef = tableRef;
@@ -138,8 +140,12 @@ public class ChunkedResultIterator implements 
PeekingResultIterator {
 if (resultIterator.peek() == null && lastKey != null) {
 resultIterator.close();
 scan = ScanUtil.newScan(scan);
-if(ScanUtil.isLocalIndex(scan)) {
+if (ScanUtil.isLocalIndex(scan)) {
 scan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.copyKeyBytesIfNecessary(lastKey));
+} else if (ScanUtil.isReversed(scan)) {
+// lastKey is the last row the previous iterator meet but not 
returned.
+// for reverse scan, use prevLastKey as the new stopRow.
+scan.setStopRow(ByteUtil.copyKeyBytesIfNecessary(prevLastKey));
 } else {
 scan.setStartRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
 }
@@ -212,6 +218,7 @@ public class ChunkedResultIterator implements 
PeekingResultIterator {
 byte[] currentKey = lastKey.get();
 int offset = lastKey.getOffset();
 int length = lastKey.getLength();
+prevLastKey.set(lastKey.copyBytes());
 newTuple.getKey(lastKey);
 
 return Bytes.compareTo(currentKey, offset, length, lastKey.get(), 
lastKey.getOffset(), lastKey.getLength()) != 0;
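[Editor's sketch] The patch above keeps two bookmarks: lastKey (the first row the previous chunk saw but did not return) and prevLastKey (the boundary one chunk earlier). Forward scans resume from lastKey as the new startRow; reverse scans instead need an upper bound below the rows already returned. A toy model of that resume logic in plain Java, with illustrative names only (this is not the Phoenix API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Toy model of chunked scanning: each "chunk" returns at most chunkSize
 * rows, and the scan resumes from the remembered boundary. Names are
 * hypothetical, chosen to mirror the roles of lastKey/prevLastKey.
 */
public class ChunkedScanSketch {

    /** Forward scan: the first unreturned row becomes the next startRow. */
    public static List<String> scanForward(List<String> sortedRows, int chunkSize) {
        List<String> out = new ArrayList<>();
        int next = 0;                           // index of first unreturned row
        while (next < sortedRows.size()) {
            int end = Math.min(next + chunkSize, sortedRows.size());
            out.addAll(sortedRows.subList(next, end));
            next = end;                         // plays the role of lastKey
        }
        return out;
    }

    /** Reverse scan: rows come back in descending order, so the previous
     *  chunk's lowest returned key becomes the new exclusive upper bound
     *  (the role prevLastKey plays as the new stopRow in the patch). */
    public static List<String> scanReverse(List<String> sortedRows, int chunkSize) {
        List<String> out = new ArrayList<>();
        int upper = sortedRows.size();          // exclusive upper bound
        while (upper > 0) {
            int start = Math.max(upper - chunkSize, 0);
            for (int i = upper - 1; i >= start; i--) {
                out.add(sortedRows.get(i));     // descending within the chunk
            }
            upper = start;                      // resume below returned rows
        }
        return out;
    }
}
```

The point of keeping a second bookmark is visible in scanReverse: the next chunk lies *below* everything already returned, so advancing startRow (as the forward path does) would resume on the wrong side of the boundary.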
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/iterate/ChunkedResultIteratorTest.java
 
b/phoenix-core/src/test/java/org/apache/phoenix/iterate/ChunkedResultIteratorTest.java
new file mode 100644
index 000..18402f0
--- /dev/null
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/iterate/ChunkedResultIteratorTest.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.iterate;
+
+import static org.apache.phoenix.util.TestUtil.PHOENIX_JDBC_URL;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+import 

[phoenix] 02/02: PHOENIX-5291 Ensure that Phoenix coprocessor close all scanners.

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a commit to branch 4.14-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 6349f245e29ca54d773026a5563c43a2ab9e8264
Author: Lars Hofhansl 
AuthorDate: Thu May 23 06:40:34 2019 +0100

PHOENIX-5291 Ensure that Phoenix coprocessor close all scanners.
---
 .../coprocessor/UngroupedAggregateRegionObserver.java   | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index f0ce5b2..72ee4a3 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -1158,7 +1158,7 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 long rowCount = 0; // in case of async, we report 0 as number of rows 
updated
 StatisticsCollectionRunTracker statsRunTracker =
 StatisticsCollectionRunTracker.getInstance(config);
-boolean runUpdateStats = 
statsRunTracker.addUpdateStatsCommandRegion(region.getRegionInfo(),scan.getFamilyMap().keySet());
+final boolean runUpdateStats = 
statsRunTracker.addUpdateStatsCommandRegion(region.getRegionInfo(),scan.getFamilyMap().keySet());
 if (runUpdateStats) {
 if (!async) {
 rowCount = callable.call();
@@ -1187,8 +1187,11 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 
 @Override
 public void close() throws IOException {
-// No-op because we want to manage closing of the inner 
scanner ourselves.
-// This happens inside StatsCollectionCallable.
+// If we ran/scheduled StatsCollectionCallable the delegate
+// scanner is closed there. Otherwise close it here.
+if (!runUpdateStats) {
+super.close();
+}
 }
 
 @Override
@@ -1425,6 +1428,14 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 + fullTableName);
 Scan scan = new Scan();
 scan.setMaxVersions();
+
+// close the passed scanner since we are 
returning a brand-new one
+try {
+if (s != null) {
+s.close();
+}
+} catch (IOException ignore) {}
+
 return new StoreScanner(store, 
store.getScanInfo(), scan, scanners,
 ScanType.COMPACT_RETAIN_DELETES, 
store.getSmallestReadPoint(),
 HConstants.OLDEST_TIMESTAMP);
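[Editor's sketch] The close() change above encodes an ownership rule: when the stats-collection callable has taken over the inner scanner it closes it itself, and only otherwise must the wrapper close the delegate, or the scanner leaks. A minimal standalone illustration of that pattern (hypothetical types, not the Phoenix classes):

```java
/**
 * Ownership-aware close: a wrapper closes its delegate only when no
 * background task has taken responsibility for it. The boolean mirrors
 * the role of runUpdateStats in the patch.
 */
public class ConditionalCloseSketch {

    /** Stand-in for a region scanner; records whether close() was called. */
    static class TrackingScanner implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static class Wrapper implements AutoCloseable {
        private final TrackingScanner delegate;
        private final boolean ownedByBackgroundTask;

        Wrapper(TrackingScanner delegate, boolean ownedByBackgroundTask) {
            this.delegate = delegate;
            this.ownedByBackgroundTask = ownedByBackgroundTask;
        }

        @Override public void close() {
            // If a background task owns the delegate, it closes it there.
            // Otherwise we must close it here or the scanner leaks.
            if (!ownedByBackgroundTask) {
                delegate.close();
            }
        }
    }
}
```

The bug the patch fixes is exactly the unconditional version of this: a close() that is always a no-op leaks the delegate whenever the background task never runs.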



[phoenix] branch 4.14-HBase-1.2 updated (0a7e93d -> 6349f24)

2019-05-28 Thread pboado
This is an automated email from the ASF dual-hosted git repository.

pboado pushed a change to branch 4.14-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 0a7e93d  PHOENIX-5055 Split mutations batches probably affects 
correctness of index data
 new 58083d7  PHOENIX-4296: reverse scan in ChunkedResultIterator
 new 6349f24  PHOENIX-5291 Ensure that Phoenix coprocessor close all 
scanners.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../UngroupedAggregateRegionObserver.java  | 17 -
 .../phoenix/iterate/ChunkedResultIterator.java | 13 +++-
 .../phoenix/iterate/ChunkedResultIteratorTest.java | 73 ++
 3 files changed, 97 insertions(+), 6 deletions(-)
 create mode 100644 
phoenix-core/src/test/java/org/apache/phoenix/iterate/ChunkedResultIteratorTest.java



Build failed in Jenkins: Phoenix-4.x-HBase-1.4 #150

2019-05-28 Thread Apache Jenkins Server
See 


Changes:

[larsh] PHOENIX-5303 Fix index failures with some versions of HBase.

--
[...truncated 556.29 KB...]
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 589.154 
s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.001 
s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.036 s 
- in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.385 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.877 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 163.021 
s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.643 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.838 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 581.464 
s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 165.243 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 117, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
950.819 s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 336.061 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 78, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 614.66 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   
UpgradeIT.testMapTableToNamespaceDuringUpgrade:189->BaseTest.verifySequenceValue:1769->BaseTest.verifySequence:1791
 expected:<-9223372036854775805> but was:<-9223372036854774707>
[ERROR]   HashJoinMoreIT.testBug2961:908
[INFO] 
[ERROR] Tests run: 3723, Failures: 2, Errors: 0, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.557 s 
- in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.357 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.495 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.44 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.529 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.828 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT

svn commit: r34295 - /release/phoenix/KEYS

2019-05-28 Thread tdsilva
Author: tdsilva
Date: Tue May 28 19:26:08 2019
New Revision: 34295

Log:
Added key for tdsi...@apache.org


Modified:
release/phoenix/KEYS

Modified: release/phoenix/KEYS
==
--- release/phoenix/KEYS (original)
+++ release/phoenix/KEYS Tue May 28 19:26:08 2019
@@ -706,3 +706,61 @@ Zlf0WWO0W8ULaPmNd4XK/oHzxdyR7OPc0LM22VaL
 Fx6bhHulGA==
 =PgYX
 -END PGP PUBLIC KEY BLOCK-
+pub   4096R/30E0F400 2019-05-28
+uid  Thomas D'Silva (CODE SIGNING KEY) 
+sig 330E0F400 2019-05-28  Thomas D'Silva (CODE SIGNING KEY) 

+sub   4096R/7C9B246A 2019-05-28
+sig  30E0F400 2019-05-28  Thomas D'Silva (CODE SIGNING KEY) 

+
+-BEGIN PGP PUBLIC KEY BLOCK-
+Version: GnuPG v1
+
+mQINBFztfiEBEADHubvHSks7I7vTZy12GjnMagYJy/j89xE6n4g/OlU5qq8euzus
+N8slciKOr/zPCOzmhPBCmv6WUdBvI9+dZl6ZYq6C7cLTsTIvkzCZ+hxkZR4zE0r2
+aY8KL3SbzQapEppuqCZUE/sfqkBTypE8Gk0S3QRPb7LF52q69ukfnixlIB7to3jo
+nca+5GfkieNMO+dZc8/kB1CkWXkbymOKQVSBMWpD89aBF85wHwahmJLKIj+Qy3eZ
+GKgY4HG8Edv9BqwutEtIdk/wmuLUpZ4uhKzpzv1/rhiHENmXxBisples7DoruUT3
+pVmu7/0NznH2MeGGmocJOdUoxWLQ3odCu/qdc5GDJyEaM7Znn58Jhhb5/UMgJlIZ
+Gu/+imGp90nEWkhhc8WEVGdyvCsNKjOM9qFAjhWtO+RTMuVqtiW2hwaKXCyfnE0L
+0YSYF7qkyfVAfsAbCNtZ49p69QRYQdoxc2t5wa5NaF5CDWcmY5WEgMvBBijI3uca
+xSvD1Rn89da9F2AjcIbrnO7ErgltVA904G6fnGpTDQTZE7lRX/XV8JOb1iJjjyZh
+nPiC5vDd19CcFPpfvUqL7hMeNER/ENf2Cqyike9SwYUEgDQysUTELJ8EWwBKToLC
+72uz87zNv7irWcpxu+kWcOAno5JOYdfc4s5IpRNIeuNnGDa3ZC2t7invIQARAQAB
+tDZUaG9tYXMgRCdTaWx2YSAoQ09ERSBTSUdOSU5HIEtFWSkgPHRkc2lsdmFAYXBh
+Y2hlLm9yZz6JAjgEEwECACIFAlztfiECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4B
+AheAAAoJEAjEhQsw4PQAzRMQAJZ0NfRIcfMX5ixWqoEUS3w8KK+NkAinqW3x8UZL
+K25afx37nT4ThVzrh9kXO3dXdbeolf9TtU1Y75qdj5N0wQQ8V/AlF2Dl7O1LtBva
+mEZMh5xlAEtf/nmNC7HqRCrm3Qyos1wCnYGyymsqdqcrs40BEmGoTGsF4FEAkzJj
+p/AQ01bxdI8/Uq6alQ7qZysz82uZdsOrfrpkV5kQw4MOMd0wanmuCVIcTFnTe/Z7
+bUogouZUoXrvrUooCLoKT1F4eMUUPmUP2QixElshPHXNVf7+Oq7ARL8znjVNvu0H
+EgeuyngUD6DnKZvctURsLcNC+dX2nzES8+BHqORWq7Mf3C/l7SyRBtSF1AnoCBsX
+50r5Ow3XmdouivqoH4RkPpSkanckG2Bu+Aqe1Wqut+APkEJMh7haWVQmsgEc2v/u
+zCRCOPMLBsUcsa4rwxVNAzcq0B4y+XZcJ3SZCZNEC7acVuZlbSw1BpVM/OaTQ4gr
+SKbJwCM64rJgEqh1HmV0nNJThpylFn7CEcwHVGOeSlsBtEcuWW/dtc7IpLk2ErnA
+1j9SKV1ZlhTYu+V/Cg0FIedrXpLzfpey8mQmpuLo7Y569rI0/hY7MttGeYHT3bIc
+TV7GwdwN4JXE8ssh/2vVaLeewvkxQ7pzA2kY+caNZDJRTbW1LZqAugXse5KdByii
+LYFguQINBFztfiEBEACrLPVgbMxpDuhhogGfw/PQmCZYj9R1MtAxLFZ+VZek9c8L
+QpmtBV59Vk+JSA/nKjlJ2ilPmK0ZuJU8zjWtlGXWJa4U11jwLDk9VeOex19jkUbL
+a+N5YzZY9LWlkG2o5srFQGyaUBI4uBNaAgJYeIteqg/p4vPs9v9iT/fxhBE1ZH6e
+P2/qpKqEC/YAO1c1bV+qDyB7jscfBbQpH2ooU0BCS4QDLKzO1y1pCtE+FTYgRqaF
+3MxEISfzODd8SLSUjFdVfqKoCBFEkylToTfaZ9UIUPF5NQdAC8joUEGyZQEIOW6y
+b/4/vkQXU6o3EGxG2AJUpOymKnsVhlFj8cRYcYi1UQmldH7TJjqDFN9MYsUoHVM4
+naVLaL9f3ZyVowktmNKdmt1HWN9erFGxraWtAEeL7WreuWOR1mOub1a3NlWmCxWZ
+CO9Whw68tWuLn78b8rlehtMw6/2YSOi4q232REGI8ePyFB0iAY4AYcEjQ/1Bu5HX
+XcI1LSQnTZFkOoWKC6ungdbZCKK8LX+CrZeT2W45GAOgrsyV6YFWpEOroAg/kQyW
+FC7TeIZ5cRBHkFyoQQ+YrSk05Bvh/itAI6oO4MCrLGDuDdFcQPOheBkPItlJz1rn
+Xjr03ry9oTDu0UHE2pXFIzvLJxey8u024G8MEgij4xVVDUf/RCJPbjmDEYXNXQAR
+AQABiQIfBBgBAgAJBQJc7X4hAhsMAAoJEAjEhQsw4PQAg7cQALyBjuoe652bVKO/
+kUYCAZS97K9/sRXbEUUuUksjfOpqvLFNitGxPtXe6i9ChYL8fvrRhR220soAoirT
+OVIKqMVLx45GMBJtUX2jpuq8vLcPzcar9dxL6S3gEXYPOkwiUeddZi0KBm6GyhU0
+o1V3Q5AZKNttZYSfv/jn4E3rrAQezySVUFj4Jag6hAwUUzS/nYxRql8HVRIEmpnH
+Fcu1N8ZLB0kwjisRh5xdCsTWjpHjast/Ybku9zGYIlQ15aFLtDk96bYj5tBUQ/lG
+LClJFAJFxyM/I64WO3B5cp5Rx+9AltMwr0E2TVQvOJKvlnpZAtbdHtS6EZPnQzz3
+5320J+IMn3//G/qpTcWOCbxdOLUvPqFUBpztwjMc+towpacgKxbKHSHXFGHI2PhR
+HMKD40aHYhGr/rM709GKUtXzdYNZVdN+QXxkcky/MUaK7yWAXVZYYyqpvlp2ohJK
+V35yHJDNvvLe/7gg2hAmqOYdHHCo1lQvCC0OIfDk2SqCcO5vrMFCLc5OPsa08TJF
+LsUWDlzZOzRxSIIYfxGBlu6JbTgsmVBvPOTDwZxMu5LNsfOP0+cyYbXIK2i/rZIO
+fZdnmTL0dHP4Grd0UV/LOn4xeDZo5V8cE1No6iD0hTZ1NBV5pks1t2YWomGN1/f5
+ZMxGKYlvAPKDQdShT3q0B1Z6F8bB
+=pnAa
+-END PGP PUBLIC KEY BLOCK-




svn commit: r34293 - /dev/phoenix/KEYS /release/phoenix/KEYS

2019-05-28 Thread tdsilva
Author: tdsilva
Date: Tue May 28 18:21:35 2019
New Revision: 34293

Log:
Move dev KEYS to release KEYS

Added:
release/phoenix/KEYS
  - copied unchanged from r34292, dev/phoenix/KEYS
Removed:
dev/phoenix/KEYS



svn commit: r34292 - /release/phoenix/KEYS

2019-05-28 Thread tdsilva
Author: tdsilva
Date: Tue May 28 18:20:53 2019
New Revision: 34292

Log:
Replace release KEYS with dev KEYS

Removed:
release/phoenix/KEYS



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5304 LocalIndexSplitMergeIT fails with HBase 1.5.x.

2019-05-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 892fa91  PHOENIX-5304 LocalIndexSplitMergeIT fails with HBase 1.5.x.
892fa91 is described below

commit 892fa9117205ddf9584704c0833c936a08295158
Author: Lars Hofhansl 
AuthorDate: Tue May 28 11:13:00 2019 -0700

PHOENIX-5304 LocalIndexSplitMergeIT fails with HBase 1.5.x.
---
 .../hbase/regionserver/LocalIndexStoreFileScanner.java  | 17 -
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexStoreFileScanner.java
 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexStoreFileScanner.java
index 19c868d..df279d7 100644
--- 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexStoreFileScanner.java
+++ 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexStoreFileScanner.java
@@ -125,7 +125,7 @@ public class LocalIndexStoreFileScanner extends 
StoreFileScanner{
 if (getComparator().compare(kv.getBuffer(), kv.getKeyOffset(), 
kv.getKeyLength(), fk, 0, fk.length) <= 0) {
 return super.seekToPreviousRow(key);
 }
-KeyValue replacedKey = getKeyPresentInHFiles(kv.getBuffer());
+KeyValue replacedKey = getKeyPresentInHFiles(kv);
 boolean seekToPreviousRow = super.seekToPreviousRow(replacedKey);
 while(super.peek()!=null && 
!isSatisfiedMidKeyCondition(super.peek())) {
 seekToPreviousRow = super.seekToPreviousRow(super.peek());
@@ -194,22 +194,21 @@ public class LocalIndexStoreFileScanner extends 
StoreFileScanner{
  * @param key
  *
  */
-private KeyValue getKeyPresentInHFiles(byte[] key) {
-KeyValue keyValue = new KeyValue(key);
+private KeyValue getKeyPresentInHFiles(Cell keyValue) {
 int rowLength = keyValue.getRowLength();
 int rowOffset = keyValue.getRowOffset();
 
 short length = (short) (rowLength - reader.getSplitRow().length + 
reader.getOffset());
 byte[] replacedKey =
-new byte[length + key.length - (rowOffset + rowLength) + 
ROW_LENGTH_SIZE];
+new byte[length + keyValue.getRowArray().length - (rowOffset + 
rowLength) + ROW_LENGTH_SIZE];
 System.arraycopy(Bytes.toBytes(length), 0, replacedKey, 0, 
ROW_LENGTH_SIZE);
 System.arraycopy(reader.getRegionStartKeyInHFile(), 0, replacedKey, 
ROW_LENGTH_SIZE, reader.getOffset());
 System.arraycopy(keyValue.getRowArray(), keyValue.getRowOffset() + 
reader.getSplitRow().length,
 replacedKey, reader.getOffset() + ROW_LENGTH_SIZE, rowLength
 - reader.getSplitRow().length);
-System.arraycopy(key, rowOffset + rowLength, replacedKey,
-reader.getOffset() + keyValue.getRowLength() - 
reader.getSplitRow().length
-+ ROW_LENGTH_SIZE, key.length - (rowOffset + rowLength));
+System.arraycopy(keyValue.getRowArray(), rowOffset + rowLength, 
replacedKey,
+reader.getOffset() + rowLength - reader.getSplitRow().length
++ ROW_LENGTH_SIZE, keyValue.getRowArray().length - 
(rowOffset + rowLength));
 return new KeyValue.KeyOnlyKeyValue(replacedKey);
 }
 
@@ -230,7 +229,7 @@ public class LocalIndexStoreFileScanner extends 
StoreFileScanner{
 }
 return seekOrReseekToProperKey(isSeek, keyToSeek);
 }
-keyToSeek = getKeyPresentInHFiles(kv.getBuffer());
+keyToSeek = getKeyPresentInHFiles(kv);
 return seekOrReseekToProperKey(isSeek, keyToSeek);
 } else {
 if (getComparator().compare(kv.getBuffer(), kv.getKeyOffset(), 
kv.getKeyLength(), reader.getSplitkey(), 0, reader.getSplitkey().length) >= 0) {
@@ -238,7 +237,7 @@ public class LocalIndexStoreFileScanner extends 
StoreFileScanner{
 return false;
 }
 if(!isSeek && reader.getRegionInfo().getStartKey().length == 0 && 
reader.getSplitRow().length > reader.getRegionStartKeyInHFile().length) {
-keyToSeek = getKeyPresentInHFiles(kv.getBuffer());
+keyToSeek = getKeyPresentInHFiles(kv);
 }
 }
 return seekOrReseekToProperKey(isSeek, keyToSeek);
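[Editor's sketch] getKeyPresentInHFiles above rebuilds a key with a chain of System.arraycopy calls: it swaps the daughter region's split-row prefix for the prefix actually present in the parent's HFiles, keeping the rest of the key intact. The splicing pattern itself, reduced to its essentials with pure JDK code and illustrative names (not the Phoenix method):

```java
/**
 * Illustration of the byte-splicing pattern in getKeyPresentInHFiles:
 * build a new key by replacing one prefix of a byte[] with another,
 * using System.arraycopy. Names and lengths here are illustrative only.
 */
public class KeySpliceSketch {

    /** Replace the first prefixLen bytes of 'key' with 'newPrefix'. */
    public static byte[] replacePrefix(byte[] key, int prefixLen, byte[] newPrefix) {
        byte[] out = new byte[newPrefix.length + key.length - prefixLen];
        // copy the replacement prefix into the front of the new key
        System.arraycopy(newPrefix, 0, out, 0, newPrefix.length);
        // append everything after the old prefix, unchanged
        System.arraycopy(key, prefixLen, out, newPrefix.length, key.length - prefixLen);
        return out;
    }
}
```

The actual fix is about *where* those bytes come from: the old code read from a materialized key byte[] while the offsets referred to the Cell's backing row array, so the patch switches every copy to keyValue.getRowArray() with its own offsets.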



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5303 Fix index failures with some versions of HBase.

2019-05-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new a89ad40  PHOENIX-5303 Fix index failures with some versions of HBase.
a89ad40 is described below

commit a89ad400a1da4960cf62be16d8d5abd55822b235
Author: Lars Hofhansl 
AuthorDate: Tue May 28 10:49:43 2019 -0700

PHOENIX-5303 Fix index failures with some versions of HBase.
---
 .../org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java
index 703fcd2..318517c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java
@@ -24,6 +24,7 @@ import java.util.HashSet;
 import java.util.Set;
 
 import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValueUtil;
 import org.apache.hadoop.hbase.client.Mutation;
@@ -33,6 +34,7 @@ import org.apache.hadoop.hbase.filter.FamilyFilter;
 import org.apache.hadoop.hbase.filter.Filter;
 import org.apache.hadoop.hbase.filter.FilterBase;
 import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.QualifierFilter;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.covered.KeyValueStore;
@@ -92,10 +94,13 @@ public class ScannerBuilder {
   Filter columnFilter =
   new FamilyFilter(CompareOp.EQUAL, new BinaryComparator(ref.getFamily()));
   // combine with a match for the qualifier, if the qualifier is a specific qualifier
+  // in that case we *must* let empty qualifiers through for family delete markers
   if (!Bytes.equals(ColumnReference.ALL_QUALIFIERS, ref.getQualifier())) {
 columnFilter =
-new FilterList(columnFilter, new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(
-ref.getQualifier())));
+new FilterList(columnFilter,
+new FilterList(Operator.MUST_PASS_ONE,
+new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(ref.getQualifier())),
+new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(HConstants.EMPTY_BYTE_ARRAY))));
   }
   columnFilters.addFilter(columnFilter);
 }
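The patched construction combines the FamilyFilter with a MUST_PASS_ONE FilterList so a cell passes when its qualifier either matches the referenced qualifier or is empty, the empty qualifier being how a family delete marker appears. A minimal, dependency-free model of that predicate (hypothetical class and method names, no HBase on the classpath):

```java
import java.util.Arrays;

public class ColumnFilterModel {
    // Models: FilterList(MUST_PASS_ALL: FamilyFilter(EQUAL),
    //                    FilterList(MUST_PASS_ONE: QualifierFilter(EQUAL, ref),
    //                                              QualifierFilter(EQUAL, EMPTY_BYTE_ARRAY)))
    static boolean matches(byte[] famRef, byte[] qualRef,
                           byte[] family, byte[] qualifier) {
        boolean familyOk = Arrays.equals(family, famRef);        // FamilyFilter(EQUAL)
        boolean qualifierOk = Arrays.equals(qualifier, qualRef)  // exact qualifier match
                || qualifier.length == 0;                        // or empty: family delete marker
        return familyOk && qualifierOk;                          // outer list is MUST_PASS_ALL
    }

    public static void main(String[] args) {
        byte[] fam = "0".getBytes(), qual = "COL".getBytes();
        System.out.println(matches(fam, qual, fam, qual));               // true: exact column
        System.out.println(matches(fam, qual, fam, new byte[0]));        // true: family delete marker
        System.out.println(matches(fam, qual, fam, "OTHER".getBytes())); // false: other column
    }
}
```

Before the patch, the inner MUST_PASS_ONE list was absent, so family delete markers (empty qualifier) were filtered out and index maintenance missed them.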



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5303 Fix index failures with some versions of HBase.

2019-05-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 93ddc43  PHOENIX-5303 Fix index failures with some versions of HBase.
93ddc43 is described below

commit 93ddc43390eae8b72da2730301badc35d2242a54
Author: Lars Hofhansl 
AuthorDate: Tue May 28 10:48:29 2019 -0700

PHOENIX-5303 Fix index failures with some versions of HBase.
---
 .../org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

[patch identical to the 4.x-HBase-1.3 commit above: ScannerBuilder.java, index 703fcd2..318517c]



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5303 Fix index failures with some versions of HBase.

2019-05-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 0fe00fd  PHOENIX-5303 Fix index failures with some versions of HBase.
0fe00fd is described below

commit 0fe00fd08b7e3d93c6696c2b85221682669bbbc6
Author: Lars Hofhansl 
AuthorDate: Tue May 28 10:46:52 2019 -0700

PHOENIX-5303 Fix index failures with some versions of HBase.
---
 .../org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

[patch identical to the 4.x-HBase-1.3 commit above: ScannerBuilder.java, index 703fcd2..318517c]



[phoenix] branch master updated: PHOENIX-5303 Fix index failures with some versions of HBase.

2019-05-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new d43bc67  PHOENIX-5303 Fix index failures with some versions of HBase.
d43bc67 is described below

commit d43bc67dea852b3c9d7c419680d3a1edf8d870c7
Author: Lars Hofhansl 
AuthorDate: Tue May 28 10:44:40 2019 -0700

PHOENIX-5303 Fix index failures with some versions of HBase.
---
 .../org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java
index 4c42fe4..988528f 100644
[hunk identical to the 4.x-HBase-1.3 commit above]



Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1011

2019-05-28 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins2025049544617752031.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386407
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98957636 kB
MemFree:20856392 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  938M  8.6G  10% /run
/dev/sda3   3.6T  364G  3.1T  11% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  236M  213M  53% /boot
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
tmpfs   9.5G 0  9.5G   0% /run/user/1000
/dev/loop7   90M   90M 0 100% /snap/core/6673
/dev/loop11  54M   54M 0 100% /snap/lxd/10526
/dev/loop12  54M   54M 0 100% /snap/lxd/10601
/dev/loop1   57M   57M 0 100% /snap/snapcraft/2832
/dev/loop8   90M   90M 0 100% /snap/core/6818
/dev/loop10  57M   57M 0 100% /snap/snapcraft/2900
/dev/loop13  55M   55M 0 100% /snap/lxd/10756
/dev/loop5   89M   89M 0 100% /snap/core/6964
/dev/loop3   57M   57M 0 100% /snap/snapcraft/2947
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
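The "Received fatal alert: protocol_version" failure usually means the JDK running Maven offers only TLS 1.0, which repo.maven.apache.org stopped accepting in 2018. A hedged workaround sketch (not taken from this log; common on JDK 7-era build slaves) is to force a newer protocol through MAVEN_OPTS before invoking Maven:

```shell
# Hypothetical fix: make the Maven JVM negotiate TLS 1.2 with Maven Central.
export MAVEN_OPTS="-Dhttps.protocols=TLSv1.2"
echo "$MAVEN_OPTS"
```

Upgrading the build to run on JDK 8+, where TLS 1.2 is the default, removes the need for this override.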