[phoenix] branch 4.x updated: Addendum PHOENIX-6511 Deletes fail in case of failed region split

2021-07-15 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 4215de3  Addendum PHOENIX-6511 Deletes fail in case of failed region split
4215de3 is described below

commit 4215de3dc6e8d87a26ffda3bc8b2cf82ea31994a
Author: Abhishek Singh Chouhan 
AuthorDate: Thu Jul 15 12:20:01 2021 -0700

Addendum PHOENIX-6511 Deletes fail in case of failed region split
---
 .../end2end/UngroupedAggregateRegionObserverSplitFailureIT.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java
index 454b632..33614db 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.phoenix.compat.hbase.CompatObserverContext;
 import org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver;
 import org.apache.phoenix.jdbc.PhoenixDriver;
 import org.apache.phoenix.util.PhoenixRuntime;
@@ -88,7 +89,7 @@ public class UngroupedAggregateRegionObserverSplitFailureIT extends BaseUniqueNamesOwnClusterIT
 UngroupedAggregateRegionObserver obs =
 (UngroupedAggregateRegionObserver) region.getCoprocessorHost()
 .findCoprocessor(UngroupedAggregateRegionObserver.class.getName());
-ObserverContext ctx = new ObserverContext(null);
+ObserverContext ctx = new CompatObserverContext(null);
 ctx.prepare((RegionCoprocessorEnvironment) region.getCoprocessorHost()
 .findCoprocessorEnvironment(UngroupedAggregateRegionObserver.class.getName()));
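Why the addendum swaps in CompatObserverContext: instantiating HBase's ObserverContext directly ties the test to whichever constructor a given HBase 1.x release exposes, and Phoenix isolates such differences in its phoenix-compat modules. A minimal sketch of the shim idea (hypothetical body; only the class name and package come from the import added above):

import org.apache.hadoop.hbase.coprocessor.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.security.User;

// Hypothetical sketch, not the committed source: each phoenix-compat module
// compiles the version-specific super() call against one HBase minor release.
public class CompatObserverContext<E extends CoprocessorEnvironment>
        extends ObserverContext<E> {
    public CompatObserverContext(User caller) {
        super(caller);
    }
}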


[phoenix] branch 4.x updated: Addendum PHOENIX-6511 Deletes fail in case of failed region split

2021-07-15 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 1406761  Addendum PHOENIX-6511 Deletes fail in case of failed region split
1406761 is described below

commit 14067617e18f3332ab2ebdcaee4d78c6b467e0fe
Author: Abhishek Singh Chouhan 
AuthorDate: Thu Jul 15 11:22:37 2021 -0700

Addendum PHOENIX-6511 Deletes fail in case of failed region split
---
 .../phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java
index a8a1eb7..454b632 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java
@@ -88,7 +88,7 @@ public class UngroupedAggregateRegionObserverSplitFailureIT extends BaseUniqueNamesOwnClusterIT
 UngroupedAggregateRegionObserver obs =
 (UngroupedAggregateRegionObserver) region.getCoprocessorHost()
 .findCoprocessor(UngroupedAggregateRegionObserver.class.getName());
-ObserverContext ctx = new ObserverContext<>(null);
+ObserverContext ctx = new ObserverContext(null);
 ctx.prepare((RegionCoprocessorEnvironment) region.getCoprocessorHost()
 .findCoprocessorEnvironment(UngroupedAggregateRegionObserver.class.getName()));


[phoenix] branch 4.x updated: PHOENIX-6511 Deletes fail in case of failed region split

2021-07-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new b40ad14  PHOENIX-6511 Deletes fail in case of failed region split
 new fb40ece  Merge pull request #1266 from abhishek-chouhan/PHOENIX-6511
b40ad14 is described below

commit b40ad141d8fa7e33f1eadc43f387af1c388f8b62
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Jul 14 16:56:35 2021 -0700

PHOENIX-6511 Deletes fail in case of failed region split
---
 ...oupedAggregateRegionObserverSplitFailureIT.java | 109 +
 .../UngroupedAggregateRegionObserver.java  |  14 +++
 2 files changed, 123 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java
new file mode 100644
index 0000000..a8a1eb7
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UngroupedAggregateRegionObserverSplitFailureIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.List;
+import java.util.UUID;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class UngroupedAggregateRegionObserverSplitFailureIT extends BaseUniqueNamesOwnClusterIT {
+
+private static HBaseTestingUtility hbaseTestUtil;
+private static String url;
+
+@BeforeClass
+public static synchronized void doSetup() throws Exception {
+Configuration conf = HBaseConfiguration.create();
+hbaseTestUtil = new HBaseTestingUtility(conf);
+setUpConfigForMiniCluster(conf);
+hbaseTestUtil.startMiniCluster();
+// establish url and quorum. Need to use PhoenixDriver and not PhoenixTestDriver
+String zkQuorum = "localhost:" + hbaseTestUtil.getZkCluster().getClientPort();
+url = PhoenixRuntime.JDBC_PROTOCOL + PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR + zkQuorum;
+DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+}
+
+@AfterClass
+public static synchronized void tearDownAfterClass() throws Exception {
+if (hbaseTestUtil != null) {
+hbaseTestUtil.shutdownMiniCluster();
+}
+}
+
+@Test
+public void testDeleteAfterSplitFailure() throws SQLException, IOException {
+String tableName = generateUniqueName();
+int numRecords = 100;
+try (Connection conn = DriverManager.getConnection(url)) {
+conn.createStatement().execute(
+"CREATE TABLE " + tableName + " (PK1 INTEGER NOT NULL PRIMARY KEY, KV1 VARCHAR)");
+int i = 0;
+String upsert = "UPSERT INTO " + tableName + " VALUES (?, ?)";
+PreparedStatement stmt = conn.prepareStatement(upsert);
+while (i < numRecords) {
+stmt.setInt(1, i);
+stmt.setString(2, UUID.randomUUID().toString());
+stmt.executeUpdate();
+i++;
+}
+conn.commit();
+
+List<HRegion> regions =
+hbaseTestUtil.getMiniHBaseCluster().getRegions(TableName.valueOf(tableName));
+for (HRegion 
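The archive cuts the message off here. For context, a hedged sketch of how the loop plausibly continues (hook names from the HBase 1.x RegionObserver API; the committed test may differ in detail): drive each region's observer through a split that begins and then rolls back, then check that a delete still works.

// Sketch only, not the committed test body.
for (HRegion region : regions) {
    UngroupedAggregateRegionObserver obs =
            (UngroupedAggregateRegionObserver) region.getCoprocessorHost()
                    .findCoprocessor(UngroupedAggregateRegionObserver.class.getName());
    ObserverContext ctx = new ObserverContext<>(null);
    ctx.prepare((RegionCoprocessorEnvironment) region.getCoprocessorHost()
            .findCoprocessorEnvironment(UngroupedAggregateRegionObserver.class.getName()));
    obs.preSplit(ctx);          // split starts: the observer blocks server-side writes
    obs.preRollBackSplit(ctx);  // split fails and rolls back: writes must be re-enabled
}
// Before PHOENIX-6511 a server-side delete failed after the rolled-back split:
conn.createStatement().execute("DELETE FROM " + tableName);
conn.commit();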

[phoenix] branch 4.x updated: PHOENIX-6001 Incremental rebuild/verification can result in missing rows and false positives

2020-07-11 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 43e18ce  PHOENIX-6001 Incremental rebuild/verification can result in missing rows and false positives
43e18ce is described below

commit 43e18ce761987139eec17c1647fa01fe092f4307
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jul 10 14:23:34 2020 -0700

PHOENIX-6001 Incremental rebuild/verification can result in missing rows and false positives
---
 .../end2end/IndexToolForNonTxGlobalIndexIT.java| 54 ++
 .../coprocessor/IndexRebuildRegionScanner.java |  4 +-
 2 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
index 527dc87..a2e5da0 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
@@ -1175,6 +1175,60 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 }
 }
 
+@Test
+public void testIncrementalRebuildWithPageSize() throws Exception {
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String fullDataTableName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+long minTs = EnvironmentEdgeManager.currentTimeMillis();
+conn.createStatement().execute(
+"CREATE TABLE " + fullDataTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+conn.createStatement().execute("UPSERT INTO " + fullDataTableName + " VALUES('a','aa')");
+conn.createStatement().execute("UPSERT INTO " + fullDataTableName + " VALUES('b','bb')");
+conn.commit();
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'a'");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'b'");
+conn.commit();
+conn.createStatement().execute("UPSERT INTO " + fullDataTableName + " VALUES('a','aaa')");
+conn.createStatement().execute("UPSERT INTO " + fullDataTableName + " VALUES('b','bbb')");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'c'");
+conn.commit();
+conn.createStatement().execute(String.format("CREATE INDEX %s ON %s (v) ASYNC",
+indexTableName, fullDataTableName));
+// Run the index MR job and verify that the index table is built correctly
+Configuration conf = new Configuration(getUtility().getConfiguration());
+conf.set(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(2));
+IndexTool indexTool =
+IndexToolIT.runIndexTool(conf, directApi, useSnapshot, schemaName,
+dataTableName, indexTableName, null, 0, IndexTool.IndexVerifyType.AFTER,
+"-st", String.valueOf(minTs), "-et",
+String.valueOf(EnvironmentEdgeManager.currentTimeMillis()));
+assertEquals(3, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
+assertEquals(3,
+indexTool.getJob().getCounters().findCounter(SCANNED_DATA_ROW_COUNT).getValue());
+assertEquals(4,
+indexTool.getJob().getCounters().findCounter(REBUILT_INDEX_ROW_COUNT).getValue());
+assertEquals(4, indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_VALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_EXPIRED_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_INVALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_MISSING_INDEX_ROW_COUNT).getValue());
+assertEquals(0,
+indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_BEYOND_MAXLOOKBACK_MISSING_INDEX_ROW_COUNT)
+.getValue());
+assertEquals(0,
+indexTool.getJob().getCounte
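The tail of this message is truncated by the archive. The shape of the incremental run is already visible above: pass -st/-et so the tool only scans and verifies row versions inside a time window. A hedged usage sketch built on the same test helper (argument layout copied from the diff; verify-only mode assumed):

// Sketch: verify just the rows written between two timestamps, without rebuilding.
static void verifyChangesInWindow(Configuration conf, String schemaName,
        String dataTableName, String indexTableName, long startTs, long endTs)
        throws Exception {
    IndexToolIT.runIndexTool(conf, false /* directApi */, false /* useSnapshot */,
            schemaName, dataTableName, indexTableName, null, 0,
            IndexTool.IndexVerifyType.ONLY,
            "-st", String.valueOf(startTs), "-et", String.valueOf(endTs));
}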

[phoenix] branch master updated: PHOENIX-6001 Incremental rebuild/verification can result in missing rows and false positives

2020-07-11 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new aafc481  PHOENIX-6001 Incremental rebuild/verification can result in missing rows and false positives
aafc481 is described below

commit aafc4810bec9a9176410e463aca79f46fa59f409
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jul 10 18:23:43 2020 -0700

PHOENIX-6001 Incremental rebuild/verification can result in missing rows and false positives
---
 .../end2end/IndexToolForNonTxGlobalIndexIT.java| 49 ++
 .../coprocessor/IndexRebuildRegionScanner.java |  5 ++-
 2 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
index 9347f53..f89bcb3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
@@ -323,6 +323,55 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 // TODO: Enable once we move to these versions
 @Ignore
 @Test
+public void testIncrementalRebuildWithPageSize() throws Exception {
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String fullDataTableName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+long minTs = EnvironmentEdgeManager.currentTimeMillis();
+conn.createStatement().execute(
+"CREATE TABLE " + fullDataTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+conn.createStatement().execute("UPSERT INTO " + fullDataTableName + " VALUES('a','aa')");
+conn.createStatement().execute("UPSERT INTO " + fullDataTableName + " VALUES('b','bb')");
+conn.commit();
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'a'");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'b'");
+conn.commit();
+conn.createStatement().execute("UPSERT INTO " + fullDataTableName + " VALUES('a','aaa')");
+conn.createStatement().execute("UPSERT INTO " + fullDataTableName + " VALUES('b','bbb')");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'c'");
+conn.commit();
+conn.createStatement().execute(String.format("CREATE INDEX %s ON %s (v) ASYNC",
+indexTableName, fullDataTableName));
+// Run the index MR job and verify that the index table is built correctly
+Configuration conf = new Configuration(getUtility().getConfiguration());
+conf.set(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(2));
+IndexTool indexTool =
+IndexToolIT.runIndexTool(conf, directApi, useSnapshot, schemaName,
+dataTableName, indexTableName, null, 0, IndexTool.IndexVerifyType.AFTER,
+"-st", String.valueOf(minTs), "-et",
+String.valueOf(EnvironmentEdgeManager.currentTimeMillis()));
+assertEquals(3, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
+assertEquals(3,
+indexTool.getJob().getCounters().findCounter(SCANNED_DATA_ROW_COUNT).getValue());
+assertEquals(4,
+indexTool.getJob().getCounters().findCounter(REBUILT_INDEX_ROW_COUNT).getValue());
+assertEquals(4, indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_VALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_EXPIRED_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_INVALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(AFTER_REBUILD_MISSING_INDEX_ROW_COUNT).getValue());
+}
+}
+
+// This test will only work with HBASE-22710 which is in 2.2.5+
+// TODO: Enable once we move to these versions
+@Ignore
+@Test
 public void testUpdatablePKFilterViewIndexRebuild() throws Exception {
 if (!mutabl

[phoenix] branch 4.x updated: PHOENIX-5995 Index Rebuild page size is not honored in case of point deletes

2020-07-09 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new a1dbaa8  PHOENIX-5995 Index Rebuild page size is not honored in case of point deletes
a1dbaa8 is described below

commit a1dbaa874693ac0e3bf63f79171477cf0612f109
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Jul 8 20:33:55 2020 -0700

PHOENIX-5995 Index Rebuild page size is not honored in case of point deletes
---
 .../end2end/IndexToolForNonTxGlobalIndexIT.java| 47 ++
 .../coprocessor/IndexRebuildRegionScanner.java |  3 +-
 2 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
index 11c785c..527dc87 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
@@ -1130,6 +1130,53 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 }
 
 @Test
+public void testPointDeleteRebuildWithPageSize() throws Exception {
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String fullDataTableName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(
+"CREATE TABLE " + fullDataTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'a'");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'b'");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'c'");
+conn.commit();
+conn.createStatement().execute(String.format("CREATE INDEX %s ON %s (v) ASYNC",
+indexTableName, fullDataTableName));
+// Run the index MR job and verify that the index table is built correctly
+Configuration conf = new Configuration(getUtility().getConfiguration());
+conf.set(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(1));
+IndexTool indexTool =
+IndexToolIT.runIndexTool(conf, directApi, useSnapshot, schemaName,
+dataTableName, indexTableName, null, 0, IndexTool.IndexVerifyType.BEFORE,
+new String[0]);
+assertEquals(3, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
+assertEquals(3,
+indexTool.getJob().getCounters().findCounter(SCANNED_DATA_ROW_COUNT).getValue());
+assertEquals(0,
+indexTool.getJob().getCounters().findCounter(REBUILT_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_VALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_EXPIRED_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_INVALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_MISSING_INDEX_ROW_COUNT).getValue());
+assertEquals(0,
+indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_BEYOND_MAXLOOKBACK_MISSING_INDEX_ROW_COUNT)
+.getValue());
+assertEquals(0,
+indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_BEYOND_MAXLOOKBACK_INVALID_INDEX_ROW_COUNT)
+.getValue());
+}
+}
+
+
+@Test
 public void testUpdatablePKFilterViewIndexRebuild() throws Exception {
 if (!mutable) {
 return;
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
index e2418ba..36a1426 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
@@ -1333,7 +1333,8 @@ public class IndexRebuildRegionScanner extends GlobalIndexRegionScanner
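The hunk body is cut off, but per the commit subject its one functional line makes point deletes count toward the rebuild page size. The rule it enforces, sketched under assumptions about the scanner loop (names hypothetical):

import java.util.Iterator;
import java.util.function.Consumer;

// Sketch of the paging rule, not the committed hunk: every scanned row counts
// toward the page limit, including rows that only produce delete markers;
// otherwise a run of point deletes does unbounded work inside a single page.
final class PagingSketch {
    static <R> int scanOnePage(Iterator<R> rows, long pageSizeInRows, Consumer<R> emit) {
        int rowCount = 0;
        while (rows.hasNext() && rowCount < pageSizeInRows) {
            emit.accept(rows.next()); // may generate only Deletes for this row
            rowCount++;               // the fix: count the row regardless
        }
        return rowCount;
    }
}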

[phoenix] branch master updated: PHOENIX-5995 Index Rebuild page size is not honored in case of point deletes

2020-07-09 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new e75e2f9  PHOENIX-5995 Index Rebuild page size is not honored in case of point deletes
e75e2f9 is described below

commit e75e2f9e959036f6ea602c3c085f7c850c14fb84
Author: Abhishek Singh Chouhan 
AuthorDate: Thu Jul 9 17:56:08 2020 -0700

PHOENIX-5995 Index Rebuild page size is not honored in case of point deletes
---
 .../end2end/IndexToolForNonTxGlobalIndexIT.java| 38 ++
 1 file changed, 38 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
index 4ef2b30..9347f53 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
@@ -281,6 +281,44 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 }
 }
 
+@Test
+public void testPointDeleteRebuildWithPageSize() throws Exception {
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String fullDataTableName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(
+"CREATE TABLE " + fullDataTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'a'");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'b'");
+conn.createStatement().execute("DELETE FROM " + fullDataTableName + " WHERE k = 'c'");
+conn.commit();
+conn.createStatement().execute(String.format("CREATE INDEX %s ON %s (v) ASYNC",
+indexTableName, fullDataTableName));
+// Run the index MR job and verify that the index table is built correctly
+Configuration conf = new Configuration(getUtility().getConfiguration());
+conf.set(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(1));
+IndexTool indexTool =
+IndexToolIT.runIndexTool(conf, directApi, useSnapshot, schemaName,
+dataTableName, indexTableName, null, 0, IndexTool.IndexVerifyType.BEFORE,
+new String[0]);
+assertEquals(3, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
+assertEquals(3,
+indexTool.getJob().getCounters().findCounter(SCANNED_DATA_ROW_COUNT).getValue());
+assertEquals(0,
+indexTool.getJob().getCounters().findCounter(REBUILT_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_VALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_EXPIRED_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_INVALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters()
+.findCounter(BEFORE_REBUILD_MISSING_INDEX_ROW_COUNT).getValue());
+}
+}
+
 // This test will only work with HBASE-22710 which is in 2.2.5+
 // TODO: Enable once we move to these versions
 @Ignore



[phoenix] branch 4.x updated: PHOENIX-5975 Index rebuild/verification page size should be configurable from IndexTool

2020-06-29 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 54329fa  PHOENIX-5975 Index rebuild/verification page size should be configurable from IndexTool
54329fa is described below

commit 54329fae44a699c2857af2a64602b9b79a862e05
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Jun 29 13:41:19 2020 -0700

PHOENIX-5975 Index rebuild/verification page size should be configurable from IndexTool
---
 .../end2end/IndexToolForNonTxGlobalIndexIT.java| 47 ++
 .../org/apache/phoenix/end2end/IndexToolIT.java| 10 -
 .../phoenix/compile/ServerBuildIndexCompiler.java  | 10 +
 .../coprocessor/BaseScannerRegionObserver.java |  1 +
 .../coprocessor/GlobalIndexRegionScanner.java  | 11 -
 .../PhoenixServerBuildIndexInputFormat.java|  8 
 .../index/PhoenixServerBuildIndexMapper.java   | 16 
 7 files changed, 100 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
index 0216355..07ca656 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
@@ -18,6 +18,8 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
+
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HColumnDescriptor;
@@ -1077,6 +1079,51 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 }
 
 @Test
+public void testOverrideIndexRebuildPageSizeFromIndexTool() throws Exception {
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+try(Connection conn = DriverManager.getConnection(getUrl(), props)) {
+String stmString1 =
+"CREATE TABLE " + dataTableFullName
++ " (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, ZIP INTEGER) "
++ tableDDLOptions;
+conn.createStatement().execute(stmString1);
+String upsertQuery = String.format("UPSERT INTO %s VALUES(?, ?, ?)", dataTableFullName);
+PreparedStatement stmt1 = conn.prepareStatement(upsertQuery);
+
+// Insert NROWS rows
+final int NROWS = 16;
+for (int i = 0; i < NROWS; i++) {
+IndexToolIT.upsertRow(stmt1, i);
+}
+conn.commit();
+
+String stmtString2 =
+String.format(
+"CREATE INDEX %s ON %s (NAME) INCLUDE (ZIP) ASYNC ", indexTableName, dataTableFullName);
+conn.createStatement().execute(stmtString2);
+
+// Run the index MR job and verify that the index table is built correctly
+Configuration conf = new Configuration(getUtility().getConfiguration());
+conf.set(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(2));
+IndexTool indexTool = IndexToolIT.runIndexTool(conf, directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0, IndexTool.IndexVerifyType.BEFORE, new String[0]);
+assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
+assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(SCANNED_DATA_ROW_COUNT).getValue());
+assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(REBUILT_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_VALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_EXPIRED_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_INVALID_INDEX_ROW_COUNT).getValue());
+assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_MISSING_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_BEYOND_MAXLOOKBACK_MISSING_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_BEYOND_MAXLOOKBACK_INVALID_INDEX_ROW_COUNT).getValue());
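The message truncates here. Per the diffstat, the rest of the commit threads the client-side override through ServerBuildIndexCompiler, the input format/mapper, and BaseScannerRegionObserver, i.e. the usual Phoenix pattern of stashing a setting on the rebuild Scan so the server honors it. A sketch of that handoff (attribute name assumed; the real constant lives in BaseScannerRegionObserver):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: the client/tool side writes the page size onto the scan, the
// region-server side reads it back with a default.
final class PageSizeAttrSketch {
    static final String ATTR = "_IndexRebuildPageRows"; // assumed name

    static void setPageSize(Scan scan, long rows) {
        scan.setAttribute(ATTR, Bytes.toBytes(rows));
    }

    static long getPageSize(Scan scan, long defaultRows) {
        byte[] v = scan.getAttribute(ATTR);
        return v == null ? defaultRows : Bytes.toLong(v);
    }
}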

[phoenix] branch master updated: PHOENIX-5975 Index rebuild/verification page size should be configurable from IndexTool

2020-06-29 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 81e9a19  PHOENIX-5975 Index rebuild/verification page size should be configurable from IndexTool
81e9a19 is described below

commit 81e9a1927a6cf7cf171f6d0544d657509aa8a867
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Jun 29 15:59:13 2020 -0700

PHOENIX-5975 Index rebuild/verification page size should be configurable from IndexTool
---
 .../end2end/IndexToolForNonTxGlobalIndexIT.java| 44 ++
 .../org/apache/phoenix/end2end/IndexToolIT.java| 10 -
 .../phoenix/compile/ServerBuildIndexCompiler.java  | 10 +
 .../coprocessor/BaseScannerRegionObserver.java |  1 +
 .../coprocessor/GlobalIndexRegionScanner.java  | 12 +-
 .../PhoenixServerBuildIndexInputFormat.java|  8 
 .../index/PhoenixServerBuildIndexMapper.java   | 16 
 7 files changed, 98 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
index 1f8e77f..c350c2c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.TableName;
@@ -221,6 +222,49 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 }
 }
 
+@Test
+public void testOverrideIndexRebuildPageSizeFromIndexTool() throws Exception {
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+try(Connection conn = DriverManager.getConnection(getUrl(), props)) {
+String stmString1 =
+"CREATE TABLE " + dataTableFullName
++ " (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, ZIP INTEGER) "
++ tableDDLOptions;
+conn.createStatement().execute(stmString1);
+String upsertQuery = String.format("UPSERT INTO %s VALUES(?, ?, ?)", dataTableFullName);
+PreparedStatement stmt1 = conn.prepareStatement(upsertQuery);
+
+// Insert NROWS rows
+final int NROWS = 16;
+for (int i = 0; i < NROWS; i++) {
+IndexToolIT.upsertRow(stmt1, i);
+}
+conn.commit();
+
+String stmtString2 =
+String.format(
+"CREATE INDEX %s ON %s (NAME) INCLUDE (ZIP) ASYNC ", indexTableName, dataTableFullName);
+conn.createStatement().execute(stmtString2);
+
+// Run the index MR job and verify that the index table is built correctly
+Configuration conf = new Configuration(getUtility().getConfiguration());
+conf.set(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(2));
+IndexTool indexTool = IndexToolIT.runIndexTool(conf, directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0, IndexTool.IndexVerifyType.BEFORE, new String[0]);
+assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
+assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(SCANNED_DATA_ROW_COUNT).getValue());
+assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(REBUILT_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_VALID_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_EXPIRED_INDEX_ROW_COUNT).getValue());
+assertEquals(0, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_INVALID_INDEX_ROW_COUNT).getValue());
+assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(BEFORE_REBUILD_MISSING_INDEX_ROW_COUNT).getValue());
+}
+}
+
 // This test will only work with HBASE-22710 which is in 2.2.5+
 // TODO: Enable once we move to these versions
 @Ignore
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 33a

[phoenix] branch 4.x updated: PHOENIX-5932 View Index rebuild results in surplus rows from other view indexes

2020-06-09 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 905575f  PHOENIX-5932 View Index rebuild results in surplus rows from other view indexes
905575f is described below

commit 905575f19c99cbeb4e57b20d656ccda26af90a2b
Author: Abhishek Singh Chouhan 
AuthorDate: Thu Jun 4 21:47:14 2020 -0700

PHOENIX-5932 View Index rebuild results in surplus rows from other view indexes
---
 .../end2end/IndexToolForNonTxGlobalIndexIT.java| 184 +
 .../UngroupedAggregateRegionObserver.java  |  12 +-
 .../filter/AllVersionsIndexRebuildFilter.java  |  45 +
 .../org/apache/phoenix/filter/DelegateFilter.java  | 100 +++
 4 files changed, 340 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
index f6c5d55..c93f2ca 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
@@ -20,8 +20,10 @@ package org.apache.phoenix.end2end;
 import com.google.common.collect.Maps;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HConnection;
@@ -30,6 +32,7 @@ import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.regionserver.ScanInfoUtil;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.coprocessor.IndexRebuildRegionScanner;
@@ -137,6 +140,7 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
 serverProps.put(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(8));
+serverProps.put(ScanInfoUtil.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Long.toString(3600));
 Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(2);
 clientProps.put(QueryServices.USE_STATS_FOR_PARALLELIZATION, Boolean.toString(true));
 clientProps.put(QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB, Long.toString(5));
@@ -676,6 +680,186 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 }
 }
 
+@Test
+public void testUpdatablePKFilterViewIndexRebuild() throws Exception {
+if (!mutable) {
+return;
+}
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String view1Name = generateUniqueName();
+String view1FullName = SchemaUtil.getTableName(schemaName, view1Name);
+String view2Name = generateUniqueName();
+String view2FullName = SchemaUtil.getTableName(schemaName, view2Name);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+// Create Table and Views. Note the view is on a non leading PK data table column
+String createTable =
+"CREATE TABLE IF NOT EXISTS " + dataTableFullName + " (\n"
++ "ORGANIZATION_ID VARCHAR NOT NULL,\n"
++ "KEY_PREFIX CHAR(3) NOT NULL,\n" + "CREATED_BY VARCHAR,\n"
++ "CONSTRAINT PK PRIMARY KEY (\n" + "ORGANIZATION_ID,\n"
++ "KEY_PREFIX\n" + ")\n"
++ ") VERSIONS=1, COLUMN_ENCODED_BYTES=0";
+conn.createStatement().execute(createTable);
+String createView1 =
+"CREATE VIEW IF NOT EXISTS " + view1FullName + " (\n"
++ " VIEW_COLA VARCHAR NOT NULL,\n"
++ " VIEW_COLB CHAR(1) C
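The rest of the message is truncated. The server-side half of this commit is the new AllVersionsIndexRebuildFilter layered on a generic DelegateFilter (see the diffstat above); a hedged sketch of the wrapper idea, assuming the HBase 1.x Filter API:

import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterBase;

// Sketch, not the committed classes: wrap the rebuild scan's existing filter
// but never skip the remaining versions of a column, so the rebuild sees every
// cell version instead of surplus rows resolved at the wrong version.
class AllVersionsFilterSketch extends FilterBase {
    private final Filter delegate;
    AllVersionsFilterSketch(Filter delegate) { this.delegate = delegate; }

    @Override
    public ReturnCode filterKeyValue(Cell c) throws IOException {
        ReturnCode rc = delegate.filterKeyValue(c);
        // Assumed transform: INCLUDE_AND_NEXT_COL would drop older versions.
        return rc == ReturnCode.INCLUDE_AND_NEXT_COL ? ReturnCode.INCLUDE : rc;
    }
}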

[phoenix] branch master updated: PHOENIX-5932 View Index rebuild results in surplus rows from other view indexes

2020-06-09 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 3a82346  PHOENIX-5932 View Index rebuild results in surplus rows from other view indexes
3a82346 is described below

commit 3a8234659e38902177d08f5f9efee1214c0e74a9
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Jun 9 14:24:02 2020 -0700

PHOENIX-5932 View Index rebuild results in surplus rows from other view indexes
---
 .../end2end/IndexToolForNonTxGlobalIndexIT.java| 186 +
 .../UngroupedAggregateRegionObserver.java  |  12 +-
 .../filter/AllVersionsIndexRebuildFilter.java  |  45 +
 .../org/apache/phoenix/filter/DelegateFilter.java  |  95 +++
 4 files changed, 337 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
index dfde1ee..956b8cb 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForNonTxGlobalIndexIT.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
@@ -49,6 +50,7 @@ import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Assert;
 import org.junit.BeforeClass;
+import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.ExpectedException;
@@ -202,6 +204,190 @@ public class IndexToolForNonTxGlobalIndexIT extends BaseUniqueNamesOwnClusterIT
 }
 }
 
+// This test will only work with HBASE-22710 which is in 2.2.5+
+// TODO: Enable once we move to these versions
+@Ignore
+@Test
+public void testUpdatablePKFilterViewIndexRebuild() throws Exception {
+if (!mutable) {
+return;
+}
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String view1Name = generateUniqueName();
+String view1FullName = SchemaUtil.getTableName(schemaName, view1Name);
+String view2Name = generateUniqueName();
+String view2FullName = SchemaUtil.getTableName(schemaName, view2Name);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+// Create Table and Views. Note the view is on a non leading PK data table column
+String createTable =
+"CREATE TABLE IF NOT EXISTS " + dataTableFullName + " (\n"
++ "ORGANIZATION_ID VARCHAR NOT NULL,\n"
++ "KEY_PREFIX CHAR(3) NOT NULL,\n" + "CREATED_BY VARCHAR,\n"
++ "CONSTRAINT PK PRIMARY KEY (\n" + "ORGANIZATION_ID,\n"
++ "KEY_PREFIX\n" + ")\n"
++ ") VERSIONS=1, COLUMN_ENCODED_BYTES=0";
+conn.createStatement().execute(createTable);
+String createView1 =
+"CREATE VIEW IF NOT EXISTS " + view1FullName + " (\n"
++ " VIEW_COLA VARCHAR NOT NULL,\n"
++ " VIEW_COLB CHAR(1) CONSTRAINT PKVIEW PRIMARY KEY (\n"
++ " VIEW_COLA\n" + " )) AS \n" + " SELECT * FROM " + dataTableFullName
++ " WHERE KEY_PREFIX = 'aaa'";
+conn.createStatement().execute(createView1);
+String createView2 =
+"CREATE VIEW IF NOT EXISTS " + view2FullName + " (\n"
++ " VIEW_COL1 VARCHAR NOT NULL,\n"
++ " VIEW_COL2 CHAR(1) CONSTRAINT PKVIEW PRIMARY KEY (\n"
++ " VIEW_COL1\n" + " )) AS \n" + " SELECT * FROM " + dataTableFullName
+

[phoenix] branch 4.x updated: PHOENIX-5931 PhoenixIndexFailurePolicy throws NPE if cause of IOE is null

2020-06-04 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new b2dfc71  PHOENIX-5931 PhoenixIndexFailurePolicy throws NPE if cause of IOE is null
b2dfc71 is described below

commit b2dfc71b68305d7d0d1622fdd45b6cbd17d25532
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Jun 1 19:55:04 2020 -0700

PHOENIX-5931 PhoenixIndexFailurePolicy throws NPE if cause of IOE is null
---
 .../org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java| 5 +++--
 .../java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
index a9682b0..c5aa36c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
@@ -1373,8 +1373,9 @@ public class IndexRebuildRegionScanner extends GlobalIndexRegionScanner {
 }
 }
 }
-} catch (IOException e) {
-LOGGER.error("IOException during rebuilding: " + Throwables.getStackTraceAsString(e));
+} catch (Throwable e) {
+LOGGER.error("Exception in IndexRebuildRegionScanner for region "
++ region.getRegionInfo().getRegionNameAsString(), e);
 throw e;
 } finally {
 region.closeRegionOperation();
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
index 00cbc1e..45bd7dd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
@@ -493,7 +493,7 @@ public class PhoenixIndexFailurePolicy extends DelegateIndexFailurePolicy {
 return;
 } catch (IOException e) {
 SQLException inferredE = ServerUtil.parseLocalOrRemoteServerException(e);
-if (inferredE == null || inferredE.getErrorCode() != SQLExceptionCode.INDEX_WRITE_FAILURE.getErrorCode()) {
+if (inferredE != null && inferredE.getErrorCode() != SQLExceptionCode.INDEX_WRITE_FAILURE.getErrorCode()) {
 // If this call is from phoenix client, we also need to check if SQLException
 // error is INDEX_METADATA_NOT_FOUND or not
 // if it's not an INDEX_METADATA_NOT_FOUND, throw exception,
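The flipped condition above is the entire fix: parseLocalOrRemoteServerException can return null when the IOException carries no parseable cause, and the old predicate sent that null down a path whose later handling assumed a real SQLException. A compact illustration of the two predicates (illustrative only; the error-code constant is assumed):

import java.sql.SQLException;

final class NullGuardSketch {
    static final int INDEX_WRITE_FAILURE_CODE = 1121; // assumed value, for illustration

    // Old: a null 'inferred' takes this branch, and downstream code NPEs.
    static boolean oldBranch(SQLException inferred) {
        return inferred == null || inferred.getErrorCode() != INDEX_WRITE_FAILURE_CODE;
    }

    // New: null falls through to the index-write-failure handling instead.
    static boolean newBranch(SQLException inferred) {
        return inferred != null && inferred.getErrorCode() != INDEX_WRITE_FAILURE_CODE;
    }
}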



[phoenix] branch master updated: PHOENIX-5931 PhoenixIndexFailurePolicy throws NPE if cause of IOE is null

2020-06-04 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 77c6cb3  PHOENIX-5931 PhoenixIndexFailurePolicy throws NPE if cause of IOE is null
77c6cb3 is described below

commit 77c6cb32fce04b912b7c502dc170d86af8293fe6
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Jun 1 19:55:04 2020 -0700

PHOENIX-5931 PhoenixIndexFailurePolicy throws NPE if cause of IOE is null
---
 .../org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java| 5 +++--
 .../java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
index 812a6f6..40c6d57 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
@@ -1209,8 +1209,9 @@ public class IndexRebuildRegionScanner extends GlobalIndexRegionScanner {
 }
 }
 }
-} catch (IOException e) {
-LOGGER.error("IOException during rebuilding: " + Throwables.getStackTraceAsString(e));
+} catch (Throwable e) {
+LOGGER.error("Exception in IndexRebuildRegionScanner for region "
++ region.getRegionInfo().getRegionNameAsString(), e);
 throw e;
 } finally {
 region.closeRegionOperation();
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
index a0a765c..f06df71 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
@@ -493,7 +493,7 @@ public class PhoenixIndexFailurePolicy extends DelegateIndexFailurePolicy {
 return;
 } catch (IOException e) {
 SQLException inferredE = ServerUtil.parseLocalOrRemoteServerException(e);
-if (inferredE == null || inferredE.getErrorCode() != SQLExceptionCode.INDEX_WRITE_FAILURE.getErrorCode()) {
+if (inferredE != null && inferredE.getErrorCode() != SQLExceptionCode.INDEX_WRITE_FAILURE.getErrorCode()) {
 // If this call is from phoenix client, we also need to check if SQLException
 // error is INDEX_METADATA_NOT_FOUND or not
 // if it's not an INDEX_METADATA_NOT_FOUND, throw exception,



[phoenix] branch 4.x updated: PHOENIX-5899 Index writes and verifications should contain information of underlying cause of failure

2020-05-19 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new dcc5cbc  PHOENIX-5899 Index writes and verifications should contain information of underlying cause of failure
dcc5cbc is described below

commit dcc5cbc0836eaa5fdce9fa9c66c46d2cda0db100
Author: Abhishek Singh Chouhan 
AuthorDate: Mon May 18 17:42:26 2020 -0700

PHOENIX-5899 Index writes and verifications should contain information of underlying cause of failure
---
 .../coprocessor/IndexRebuildRegionScanner.java | 12 ++--
 .../phoenix/coprocessor/IndexerRegionScanner.java  | 14 ++--
 .../phoenix/hbase/index/IndexRegionObserver.java   |  2 +-
 .../exception/MultiIndexWriteFailureException.java | 17 +++--
 .../hbase/index/parallel/BaseTaskRunner.java   | 10 ++-
 .../phoenix/hbase/index/parallel/TaskRunner.java   | 20 +++---
 .../TrackingParallelWriterIndexCommitter.java  | 47 ++---
 .../java/org/apache/phoenix/util/ServerUtil.java   | 11 +++
 .../hbase/index/parallel/TestTaskRunner.java   | 79 ++
 9 files changed, 176 insertions(+), 36 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
index 9feb27f..6b4bab1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
@@ -35,6 +35,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.NavigableSet;
 import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Lists;
@@ -855,20 +856,23 @@ public class IndexRebuildRegionScanner extends GlobalIndexRegionScanner {
 if (keys.size() > 0) {
 addVerifyTask(keys, perTaskVerificationPhaseResult);
 }
-List<Boolean> taskResultList = null;
+Pair<List<Boolean>, List<Future<Boolean>>> resultsAndFutures = null;
 try {
 LOGGER.debug("Waiting on index verify tasks to complete...");
-taskResultList = this.pool.submitUninterruptible(tasks);
+resultsAndFutures = this.pool.submitUninterruptible(tasks);
 } catch (ExecutionException e) {
 throw new RuntimeException("Should not fail on the results while using a WaitForCompletionTaskRunner", e);
 } catch (EarlyExitFailure e) {
 throw new RuntimeException("Stopped while waiting for batch, quitting!", e);
 }
-for (Boolean result : taskResultList) {
+int index = 0;
+for (Boolean result : resultsAndFutures.getFirst()) {
 if (result == null) {
+Throwable cause = ServerUtil.getExceptionFromFailedFuture(resultsAndFutures.getSecond().get(index));
 // there was a failure
-throw new IOException(exceptionMessage);
+throw new IOException(exceptionMessage, cause);
 }
+index++;
 }
 for (IndexToolVerificationResult.PhaseResult result : verificationPhaseResultList) {
 verificationPhaseResult.add(result);
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexerRegionScanner.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexerRegionScanner.java
index b493729..ad8924e 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexerRegionScanner.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexerRegionScanner.java
@@ -30,6 +30,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
 
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
@@ -48,7 +49,7 @@ import org.apache.hadoop.hbase.regionserver.Region;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.compile.ScanRanges;
 import org.apache.phoenix.filter.SkipScanFilter;
 import org.apache.phoenix.hbase.index.ValueGetter;
@@ -269,20 +270,23 @@ public class IndexerRegionScanner extends GlobalIndexRegionScanner {
 if (keys.size() > 0) {
 addVerifyTask(keys, perTaskDataKeyToDataPutMap, perTaskVerificationPhaseResult);
 }
-List<Boolean> taskResultList = null;
+Pair<List<Boolean>, List<Future<Boolean>>> resultsAndFutures = null;
 try {
 LOGGER.debug("Waiting on index verify tasks to complete...");
-
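The remainder is truncated. The helper this commit adds to ServerUtil (11 lines per the diffstat) is what turns a null task result into a meaningful cause on the thrown IOException; a hedged sketch of its likely shape:

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

// Sketch only; the real signature may differ. A failed task's Future rethrows
// its exception wrapped in ExecutionException, so unwrap and return the cause.
final class FutureCauseSketch {
    static Throwable getExceptionFromFailedFuture(Future<?> future) {
        try {
            future.get();
        } catch (ExecutionException e) {
            return e.getCause();
        } catch (Exception e) {
            return e;
        }
        return null; // the task actually succeeded
    }
}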

[phoenix] branch master updated: PHOENIX-5899 Index writes and verifications should contain information of underlying cause of failure

2020-05-19 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 09605d8  PHOENIX-5899 Index writes and verifications should contain information of underlying cause of failure
09605d8 is described below

commit 09605d8403e8393d4eccea052c0e20b867af37f6
Author: Abhishek Singh Chouhan 
AuthorDate: Tue May 19 13:21:02 2020 -0700

PHOENIX-5899 Index writes and verifications should contain information of underlying cause of failure
---
 .../coprocessor/IndexRebuildRegionScanner.java | 12 ++--
 .../phoenix/hbase/index/IndexRegionObserver.java   |  2 +-
 .../exception/MultiIndexWriteFailureException.java | 17 +++--
 .../hbase/index/parallel/BaseTaskRunner.java   | 10 ++-
 .../phoenix/hbase/index/parallel/TaskRunner.java   | 20 +++---
 .../TrackingParallelWriterIndexCommitter.java  | 47 ++---
 .../java/org/apache/phoenix/util/ServerUtil.java   | 13 +++-
 .../hbase/index/parallel/TestTaskRunner.java   | 79 ++
 8 files changed, 168 insertions(+), 32 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
index 128678a..11da487 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/IndexRebuildRegionScanner.java
@@ -52,6 +52,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.NavigableSet;
 import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Lists;
@@ -1028,20 +1029,23 @@ public class IndexRebuildRegionScanner extends BaseRegionScanner {
 if (keys.size() > 0) {
 addVerifyTask(keys, perTaskVerificationPhaseResult);
 }
-List<Boolean> taskResultList = null;
+Pair<List<Boolean>, List<Future<Boolean>>> resultsAndFutures = null;
 try {
 LOGGER.debug("Waiting on index verify tasks to complete...");
-taskResultList = this.pool.submitUninterruptible(tasks);
+resultsAndFutures = this.pool.submitUninterruptible(tasks);
 } catch (ExecutionException e) {
 throw new RuntimeException("Should not fail on the results while using a WaitForCompletionTaskRunner", e);
 } catch (EarlyExitFailure e) {
 throw new RuntimeException("Stopped while waiting for batch, quitting!", e);
 }
-for (Boolean result : taskResultList) {
+int index = 0;
+for (Boolean result : resultsAndFutures.getFirst()) {
 if (result == null) {
+Throwable cause = ServerUtil.getExceptionFromFailedFuture(resultsAndFutures.getSecond().get(index));
 // there was a failure
-throw new IOException(exceptionMessage);
+throw new IOException(exceptionMessage, cause);
 }
+index++;
 }
 for (IndexToolVerificationResult.PhaseResult result : verificationPhaseResultList) {
 verificationPhaseResult.add(result);
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index a76ba51..a567cff 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -1068,7 +1068,7 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
   removePendingRows(context);
   context.rowLocks.clear();
   if (context.rebuild) {
-  throw new IOException(String.format("%s for rebuild", e.getMessage()));
+  throw new IOException(String.format("%s for rebuild", e.getMessage()), e);
   } else {
   rethrowIndexingException(e);
   }
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/MultiIndexWriteFailureException.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/MultiIndexWriteFailureException.java
index a14e8a5..472d027 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/MultiIndexWriteFailureException.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/MultiIndexWriteFailureException.java
@@ -40,14 +40,21 @@ public class MultiIndexWriteFailureException extends 
IndexWriteException {
   /**
* @param failures the tables to which the index write did not succeed
*/
-  public MultiIndexWriteFai
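
For context on the change above: the task runner now hands back both the
results and their Futures, so a null result can be traced to the Throwable
that produced it. A minimal, self-contained sketch of that unwrapping idea,
using only standard java.util.concurrent (the real
ServerUtil.getExceptionFromFailedFuture may differ in its details):

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FailedFutureCauseSketch {
        // Unwraps the Throwable that made an already-completed Future fail,
        // or returns null if the task finished normally.
        static Throwable getCause(Future<?> future) {
            try {
                future.get(); // returns immediately: the future is done
                return null;
            } catch (ExecutionException e) {
                return e.getCause(); // the task's own exception
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return e;
            }
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Future<Boolean> f = pool.submit(() -> {
                throw new java.io.IOException("simulated index write failure");
            });
            while (!f.isDone()) {
                Thread.sleep(10);
            }
            System.out.println("cause: " + getCause(f));
            pool.shutdown();
        }
    }

The diffstat suggests TrackingParallelWriterIndexCommitter applies the same
idea so that MultiIndexWriteFailureException carries the per-table write
failure as its cause, rather than swallowing it.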

[phoenix] branch 4.x updated: PHOENIX-5863 Upsert into view against a table with index throws exception when 4.14.3 client connects to 4.16 server

2020-05-19 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 422ade0  PHOENIX-5863 Upsert into view against a table with index 
throws exception when 4.14.3 client connects to 4.16 server
422ade0 is described below

commit 422ade0883b94da758c219d6dc9633b2d901759a
Author: Sandeep Guggilam 
AuthorDate: Tue Apr 28 15:08:58 2020 -0700

PHOENIX-5863 Upsert into view against a table with index throws exception 
when 4.14.3 client connects to 4.16 server

Signed-off-by: Abhishek Singh Chouhan 
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |   2 +-
 .../java/org/apache/phoenix/util/ViewUtil.java | 103 -
 2 files changed, 59 insertions(+), 46 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 7c9dcc5..5735711 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -617,7 +617,7 @@ public class MetaDataEndpointImpl extends MetaDataProtocol 
implements Coprocesso
 && table.getViewType() != ViewType.MAPPED) {
 try (PhoenixConnection connection = 
QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class))
 {
 PTable pTable = PhoenixRuntime.getTableNoCache(connection, 
table.getParentName().getString());
-table = 
ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+table = ViewUtil.addDerivedColumnsFromParent(connection, 
table, pTable);
 }
 }
 
builder.setReturnCode(MetaDataProtos.MutationCode.TABLE_ALREADY_EXISTS);
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java
index 7e622b5..e5e9b8f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java
@@ -15,10 +15,24 @@
  */
 package org.apache.phoenix.util;
 
-import com.google.common.base.Objects;
-import com.google.common.collect.ImmutableList;
-import com.google.common.collect.Lists;
-import com.google.common.collect.Maps;
+import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SPLITTABLE_SYSTEM_CATALOG;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LINK_TYPE_BYTES;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PARENT_TENANT_ID_BYTES;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_TYPE_BYTES;
+import static org.apache.phoenix.schema.PTableImpl.getColumnsToClone;
+import static org.apache.phoenix.util.SchemaUtil.getVarChars;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
@@ -63,23 +77,10 @@ import org.apache.phoenix.schema.types.PLong;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.io.IOException;
-import java.sql.SQLException;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import java.util.Set;
-
-import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SPLITTABLE_SYSTEM_CATALOG;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LINK_TYPE_BYTES;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PARENT_TENANT_ID_BYTES;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_TYPE_BYTES;
-import static org.apache.phoenix.schema.PTableImpl.getColumnsToClone;
-import static org.apache.phoenix.util.SchemaUtil.getVarChars;
+import com.google.common.base.Objects;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
 
 public class ViewUtil {
 
@@ -344,13 +345,46 @@ public class 

[phoenix] branch 4.x updated: PHOENIX-5580 Wrong values seen when updating a view for a table that has an index

2020-05-04 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new d751e2a  PHOENIX-5580 Wrong values seen when updating a view for a 
table that has an index
d751e2a is described below

commit d751e2a4f9f335180013c01250d687b95f85f848
Author: Sandeep Guggilam 
AuthorDate: Tue Apr 21 17:39:00 2020 -0700

PHOENIX-5580 Wrong values seen when updating a view for a table that has an 
index

Signed-off-by: Abhishek Singh Chouhan 
---
 .../phoenix/end2end/index/MutableIndexIT.java  |  33 
 .../coprocessor/generated/ServerCachingProtos.java | 203 +++--
 .../org/apache/phoenix/index/IndexMaintainer.java  |  29 ++-
 .../src/main/ServerCachingService.proto|   1 +
 4 files changed, 242 insertions(+), 24 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index f61f02c..7812de2 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -201,6 +201,39 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
 }
 
 @Test
+public void testUpsertIntoViewOnTableWithIndex() throws Exception {
+String baseTable = generateUniqueName();
+String view = generateUniqueName();
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String baseTableDDL = "CREATE TABLE IF NOT EXISTS " + baseTable + 
+" (ID VARCHAR PRIMARY KEY, V1 VARCHAR)";
+conn.createStatement().execute(baseTableDDL);
+
+// Create an Index on the base table
+String tableIndex = generateUniqueName() + "_IDX";
+conn.createStatement().execute("CREATE INDEX " + tableIndex + 
+" ON " + baseTable + " (V1)");
+
+// Create a view on the base table
+String viewDDL = "CREATE VIEW IF NOT EXISTS " + view 
++ " (V2 INTEGER) AS SELECT * FROM " + baseTable
++ " WHERE ID='a'";
+conn.createStatement().execute(viewDDL);
+
+String upsert = "UPSERT INTO " + view + " (ID, V1, V2) "
++ "VALUES ('a' ,'ab', 7)";
+conn.createStatement().executeUpdate(upsert);
+conn.commit();
+
+ResultSet rs = conn.createStatement().executeQuery("SELECT ID, V1 
from " + baseTable);
+assertTrue(rs.next());
+assertEquals("a", rs.getString(1));
+assertEquals("ab", rs.getString(2)); 
+}
+}
+
+@Test
 public void testCoveredColumns() throws Exception {
 String tableName = "TBL_" + generateUniqueName();
 String indexName = "IDX_" + generateUniqueName();
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/ServerCachingProtos.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/ServerCachingProtos.java
index 3fd01a2..ab1ffee 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/ServerCachingProtos.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/ServerCachingProtos.java
@@ -2177,6 +2177,21 @@ public final class ServerCachingProtos {
  * optional int32 indexDataColumnCount = 23 [default = -1];
  */
 int getIndexDataColumnCount();
+
+// optional string parentTableType = 24;
+/**
+ * optional string parentTableType = 24;
+ */
+boolean hasParentTableType();
+/**
+ * optional string parentTableType = 24;
+ */
+java.lang.String getParentTableType();
+/**
+ * optional string parentTableType = 24;
+ */
+com.google.protobuf.ByteString
+getParentTableTypeBytes();
   }
   /**
* Protobuf type {@code IndexMaintainer}
@@ -2380,6 +2395,11 @@ public final class ServerCachingProtos {
   indexDataColumnCount_ = input.readInt32();
   break;
 }
+case 194: {
+  bitField0_ |= 0x0004;
+  parentTableType_ = input.readBytes();
+  break;
+}
   }
 }
   } catch (com.google.protobuf.InvalidProtocolBufferException e) {
@@ -2896,6 +2916,49 @@ public final class ServerCachingProtos {
   return indexDataColumnCount_;
 }
 
+// optional string parentTableType = 24;
+public static final int PARENTTABLETYPE_FIELD_NUMBER = 24;
+private java.lang.Object parentTableType_;
+/**
+ * optional 
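
A side note on the generated parser above: the new case 194 label is not
arbitrary. Protobuf wire-format tags encode (fieldNumber << 3) | wireType,
and parentTableType is field 24 with wire type 2 (length-delimited). A quick
illustrative check:

    public class ProtoTagSketch {
        public static void main(String[] args) {
            int fieldNumber = 24; // optional string parentTableType = 24;
            int wireType = 2;     // length-delimited (strings, bytes, messages)
            int tag = (fieldNumber << 3) | wireType;
            System.out.println(tag); // 194, matching the generated case label
        }
    }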

[phoenix] branch master updated: PHOENIX-5580 Wrong values seen when updating a view for a table that has an index

2020-05-04 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 364b62c  PHOENIX-5580 Wrong values seen when updating a view for a 
table that has an index
364b62c is described below

commit 364b62c34547a3a4e4b5496e1e50eee1d6daa514
Author: Sandeep Guggilam 
AuthorDate: Tue Apr 21 17:39:00 2020 -0700

PHOENIX-5580 Wrong values seen when updating a view for a table that has an 
index

Signed-off-by: Abhishek Singh Chouhan 
---
 .../phoenix/end2end/index/MutableIndexIT.java  |  33 
 .../coprocessor/generated/ServerCachingProtos.java | 203 +++--
 .../org/apache/phoenix/index/IndexMaintainer.java  |  29 ++-
 .../src/main/ServerCachingService.proto|   1 +
 4 files changed, 242 insertions(+), 24 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index 23c1956..0071392 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -195,6 +195,39 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
 }
 
 @Test
+public void testUpsertIntoViewOnTableWithIndex() throws Exception {
+String baseTable = generateUniqueName();
+String view = generateUniqueName();
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String baseTableDDL = "CREATE TABLE IF NOT EXISTS " + baseTable + 
+" (ID VARCHAR PRIMARY KEY, V1 VARCHAR)";
+conn.createStatement().execute(baseTableDDL);
+
+// Create an Index on the base table
+String tableIndex = generateUniqueName() + "_IDX";
+conn.createStatement().execute("CREATE INDEX " + tableIndex + 
+" ON " + baseTable + " (V1)");
+
+// Create a view on the base table
+String viewDDL = "CREATE VIEW IF NOT EXISTS " + view 
++ " (V2 INTEGER) AS SELECT * FROM " + baseTable
++ " WHERE ID='a'";
+conn.createStatement().execute(viewDDL);
+
+String upsert = "UPSERT INTO " + view + " (ID, V1, V2) "
++ "VALUES ('a' ,'ab', 7)";
+conn.createStatement().executeUpdate(upsert);
+conn.commit();
+
+ResultSet rs = conn.createStatement().executeQuery("SELECT ID, V1 
from " + baseTable);
+assertTrue(rs.next());
+assertEquals("a", rs.getString(1));
+assertEquals("ab", rs.getString(2)); 
+}
+}
+
+@Test
 public void testCoveredColumns() throws Exception {
 String tableName = "TBL_" + generateUniqueName();
 String indexName = "IDX_" + generateUniqueName();
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/ServerCachingProtos.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/ServerCachingProtos.java
index 3fd01a2..ab1ffee 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/ServerCachingProtos.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/ServerCachingProtos.java
@@ -2177,6 +2177,21 @@ public final class ServerCachingProtos {
  * optional int32 indexDataColumnCount = 23 [default = -1];
  */
 int getIndexDataColumnCount();
+
+// optional string parentTableType = 24;
+/**
+ * optional string parentTableType = 24;
+ */
+boolean hasParentTableType();
+/**
+ * optional string parentTableType = 24;
+ */
+java.lang.String getParentTableType();
+/**
+ * optional string parentTableType = 24;
+ */
+com.google.protobuf.ByteString
+getParentTableTypeBytes();
   }
   /**
* Protobuf type {@code IndexMaintainer}
@@ -2380,6 +2395,11 @@ public final class ServerCachingProtos {
   indexDataColumnCount_ = input.readInt32();
   break;
 }
+case 194: {
+  bitField0_ |= 0x0004;
+  parentTableType_ = input.readBytes();
+  break;
+}
   }
 }
   } catch (com.google.protobuf.InvalidProtocolBufferException e) {
@@ -2896,6 +2916,49 @@ public final class ServerCachingProtos {
   return indexDataColumnCount_;
 }
 
+// optional string parentTableType = 24;
+public static final int PARENTTABLETYPE_FIELD_NUMBER = 24;
+private java.lang.Object parentTab

[phoenix] branch 4.x updated: PHOENIX-5751 Remove redundant IndexUtil#isGlobalIndexCheckEnabled() calls for immutable data tables

2020-03-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 07f26d5  PHOENIX-5751 Remove redundant 
IndexUtil#isGlobalIndexCheckEnabled() calls for immutable data tables
07f26d5 is described below

commit 07f26d53a2e1ffb99968822cd54f469dfb74699d
Author: Abhishek 
AuthorDate: Tue Mar 24 18:44:15 2020 -0700

PHOENIX-5751 Remove redundant IndexUtil#isGlobalIndexCheckEnabled() calls 
for immutable data tables
---
 .../apache/phoenix/end2end/BasePermissionsIT.java  | 75 ++
 .../org/apache/phoenix/execute/MutationState.java  |  3 +
 2 files changed, 78 insertions(+)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 4301ec8..218c6b1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -1217,4 +1217,79 @@ public abstract class BasePermissionsIT extends BaseTest 
{
 revokeAll();
 }
 }
+
+@Test
+public void testUpsertIntoImmutableTable() throws Throwable {
+final String schema = generateUniqueName();
+final String tableName = generateUniqueName();
+final String phoenixTableName = schema + "." + tableName;
+grantSystemTableAccess();
+try {
+superUser1.runAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws Exception {
+try {
+verifyAllowed(createSchema(schema), superUser1);
+
verifyAllowed(onlyCreateImmutableTable(phoenixTableName), superUser1);
+} catch (Throwable e) {
+if (e instanceof Exception) {
+throw (Exception) e;
+} else {
+throw new Exception(e);
+}
+}
+return null;
+}
+});
+
+if (isNamespaceMapped) {
+grantPermissions(unprivilegedUser.getShortName(), schema, 
Permission.Action.WRITE,
+Permission.Action.READ, Permission.Action.EXEC);
+} else {
+grantPermissions(unprivilegedUser.getShortName(),
+NamespaceDescriptor.DEFAULT_NAMESPACE.getName(), 
Permission.Action.WRITE,
+Permission.Action.READ, Permission.Action.EXEC);
+}
+verifyAllowed(upsertRowsIntoTable(phoenixTableName), 
unprivilegedUser);
+verifyAllowed(readTable(phoenixTableName), unprivilegedUser);
+} finally {
+revokeAll();
+}
+}
+
+AccessTestAction onlyCreateImmutableTable(final String tableName) throws 
SQLException {
+return new AccessTestAction() {
+@Override
+public Object run() throws Exception {
+try (Connection conn = getConnection(); Statement stmt = 
conn.createStatement()) {
+assertFalse(stmt.execute("CREATE IMMUTABLE TABLE " + 
tableName
++ "(pk INTEGER not null primary key, data VARCHAR, 
val integer)"));
+}
+return null;
+}
+};
+}
+
+AccessTestAction upsertRowsIntoTable(final String tableName) throws 
SQLException {
+return new AccessTestAction() {
+@Override
+public Object run() throws Exception {
+try (Connection conn = getConnection()) {
+try (PreparedStatement pstmt =
+conn.prepareStatement(
+"UPSERT INTO " + tableName + " values(?, ?, 
?)")) {
+for (int i = 0; i < NUM_RECORDS; i++) {
+pstmt.setInt(1, i);
+pstmt.setString(2, Integer.toString(i));
+pstmt.setInt(3, i);
+assertEquals(1, pstmt.executeUpdate());
+}
+}
+conn.commit();
+}
+return null;
+}
+};
+}
+
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java 
b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
index b7c55f3..7088a49 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
@@ -1227,6 +1227,9 @@ public class MutationState implements SQLCloseable {
 while (mapIter
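
The upsertRowsIntoTable action above follows a standard Phoenix JDBC shape:
autocommit stays off, each parameterized UPSERT reports one affected row, and
a single commit() flushes the buffered mutations. A stripped-down sketch of
that shape, where the JDBC URL and table name are placeholders, not values
from the patch:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UpsertLoopSketch {
        public static void main(String[] args) throws Exception {
            // jdbc:phoenix:localhost and MY_TABLE are hypothetical here.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 PreparedStatement pstmt =
                         conn.prepareStatement("UPSERT INTO MY_TABLE VALUES (?, ?, ?)")) {
                for (int i = 0; i < 10; i++) {
                    pstmt.setInt(1, i);
                    pstmt.setString(2, Integer.toString(i));
                    pstmt.setInt(3, i);
                    pstmt.executeUpdate(); // returns 1 per upserted row
                }
                conn.commit(); // Phoenix buffers mutations until commit
            }
        }
    }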

[phoenix] branch master updated: PHOENIX-5751 Remove redundant IndexUtil#isGlobalIndexCheckEnabled() calls for immutable data tables

2020-03-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 0f9ed7a  PHOENIX-5751 Remove redundant 
IndexUtil#isGlobalIndexCheckEnabled() calls for immutable data tables
0f9ed7a is described below

commit 0f9ed7aa13d1d2b195ab1c315265ccebce037b80
Author: Abhishek 
AuthorDate: Fri Mar 20 00:50:00 2020 -0700

PHOENIX-5751 Remove redundant IndexUtil#isGlobalIndexCheckEnabled() calls 
for immutable data tables
---
 .../apache/phoenix/end2end/BasePermissionsIT.java  | 75 ++
 .../org/apache/phoenix/execute/MutationState.java  |  3 +
 2 files changed, 78 insertions(+)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 382cacc..f722c02 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -1218,4 +1218,79 @@ public abstract class BasePermissionsIT extends BaseTest 
{
 revokeAll();
 }
 }
+
+@Test
+public void testUpsertIntoImmutableTable() throws Throwable {
+final String schema = generateUniqueName();
+final String tableName = generateUniqueName();
+final String phoenixTableName = schema + "." + tableName;
+grantSystemTableAccess();
+try {
+superUser1.runAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws Exception {
+try {
+verifyAllowed(createSchema(schema), superUser1);
+
verifyAllowed(onlyCreateImmutableTable(phoenixTableName), superUser1);
+} catch (Throwable e) {
+if (e instanceof Exception) {
+throw (Exception) e;
+} else {
+throw new Exception(e);
+}
+}
+return null;
+}
+});
+
+if (isNamespaceMapped) {
+grantPermissions(unprivilegedUser.getShortName(), schema, 
Permission.Action.WRITE,
+Permission.Action.READ, Permission.Action.EXEC);
+} else {
+grantPermissions(unprivilegedUser.getShortName(),
+NamespaceDescriptor.DEFAULT_NAMESPACE.getName(), 
Permission.Action.WRITE,
+Permission.Action.READ, Permission.Action.EXEC);
+}
+verifyAllowed(upsertRowsIntoTable(phoenixTableName), 
unprivilegedUser);
+verifyAllowed(readTable(phoenixTableName), unprivilegedUser);
+} finally {
+revokeAll();
+}
+}
+
+AccessTestAction onlyCreateImmutableTable(final String tableName) throws 
SQLException {
+return new AccessTestAction() {
+@Override
+public Object run() throws Exception {
+try (Connection conn = getConnection(); Statement stmt = 
conn.createStatement()) {
+assertFalse(stmt.execute("CREATE IMMUTABLE TABLE " + 
tableName
++ "(pk INTEGER not null primary key, data VARCHAR, 
val integer)"));
+}
+return null;
+}
+};
+}
+
+AccessTestAction upsertRowsIntoTable(final String tableName) throws 
SQLException {
+return new AccessTestAction() {
+@Override
+public Object run() throws Exception {
+try (Connection conn = getConnection()) {
+try (PreparedStatement pstmt =
+conn.prepareStatement(
+"UPSERT INTO " + tableName + " values(?, ?, 
?)")) {
+for (int i = 0; i < NUM_RECORDS; i++) {
+pstmt.setInt(1, i);
+pstmt.setString(2, Integer.toString(i));
+pstmt.setInt(3, i);
+assertEquals(1, pstmt.executeUpdate());
+}
+}
+conn.commit();
+}
+return null;
+}
+};
+}
+
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java 
b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
index dbe58e0..c72522a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
@@ -1227,6 +1227,9 @@ public class MutationState implements SQLCloseable {
 while (mapIter

[phoenix] branch 4.x updated: PHOENIX-5790 Add Apache license header to compatible_client_versions.json

2020-03-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 3cd20f4  PHOENIX-5790 Add Apache license header to 
compatible_client_versions.json
3cd20f4 is described below

commit 3cd20f4bdf38a22609404f8734314cdcb35ecb0d
Author: Sandeep Guggilam 
AuthorDate: Fri Mar 20 16:45:36 2020 -0700

PHOENIX-5790 Add Apache license header to compatible_client_versions.json

Signed-off-by: Abhishek Singh Chouhan 
---
 .../src/it/resources/compatible_client_versions.json  | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/resources/compatible_client_versions.json 
b/phoenix-core/src/it/resources/compatible_client_versions.json
index 5f28b82..49a5297 100644
--- a/phoenix-core/src/it/resources/compatible_client_versions.json
+++ b/phoenix-core/src/it/resources/compatible_client_versions.json
@@ -1,7 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 {
 "_comment": "Lists all phoenix compatible client versions against the 
current branch version for a given hbase profile \
  If hbase profile is 1.3, phoenix client versions 4.14.3 and 
4.15.0 are tested against current branch version",
 "1.3": ["4.14.3", "4.15.0"],
 "1.4": ["4.14.3", "4.15.0"],
 "1.5": ["4.15.0"]
-}
\ No newline at end of file
+}
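
Worth noting for the change above: strict JSON has no comment syntax, so a
/* ... */ license header only works if the consumer strips or tolerates it
before parsing. How the Phoenix test harness actually handles this is not
shown in the patch; the sketch below merely illustrates a regex-based
pre-pass as one possible approach:

    import java.util.regex.Pattern;

    public class JsonCommentStripSketch {
        // Removes /* ... */ blocks (DOTALL so the header may span lines).
        static String stripBlockComments(String raw) {
            return Pattern.compile("/\\*.*?\\*/", Pattern.DOTALL)
                    .matcher(raw).replaceAll("");
        }

        public static void main(String[] args) {
            String raw = "/* license header */\n{ \"1.3\": [\"4.14.3\", \"4.15.0\"] }";
            System.out.println(stripBlockComments(raw).trim());
            // prints: { "1.3": ["4.14.3", "4.15.0"] }
        }
    }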



[phoenix] branch master updated: PHOENIX-5790 Add Apache license header to compatible_client_versions.json

2020-03-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 457ae44  PHOENIX-5790 Add Apache license header to 
compatible_client_versions.json
457ae44 is described below

commit 457ae44cf09231fd0122b3932623d82c3a8b932a
Author: Sandeep Guggilam 
AuthorDate: Fri Mar 20 22:45:20 2020 -0700

PHOENIX-5790 Add Apache license header to compatible_client_versions.json

Signed-off-by: Abhishek Singh Chouhan 
---
 .../src/it/resources/compatible_client_versions.json  | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/resources/compatible_client_versions.json 
b/phoenix-core/src/it/resources/compatible_client_versions.json
index 6feabf5..1b436d5 100644
--- a/phoenix-core/src/it/resources/compatible_client_versions.json
+++ b/phoenix-core/src/it/resources/compatible_client_versions.json
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 {
 "_comment": "Lists all phoenix compatible client versions against the 
current branch version for a given hbase profile \
  If hbase profile is 1.3, phoenix client versions 4.14.3 and 
4.15.0 are tested against current branch version",
@@ -7,4 +24,4 @@
 "2.0": ["5.1.0"],
 "2.1": ["5.1.0"],
 "2.2": ["5.1.0"]
-}
\ No newline at end of file
+}



[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5745 Fix QA false negatives

2020-02-27 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 104127f  PHOENIX-5745 Fix QA false negatives
104127f is described below

commit 104127fe0a4ef730e516c78e7dfdb31e4f080a0d
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 25 14:41:34 2020 -0800

PHOENIX-5745 Fix QA false negatives
---
 dev/test-patch.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index 8e22fc4..090bf69 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -1090,8 +1090,8 @@ checkJavacWarnings
 (( RESULT = RESULT + $? ))
 # checkProtocErrors
 # (( RESULT = RESULT + $? ))
-checkJavadocWarnings
-(( RESULT = RESULT + $? ))
+#checkJavadocWarnings
+#(( RESULT = RESULT + $? ))
 # checkCheckstyleErrors
 # (( RESULT = RESULT + $? ))
 checkInterfaceAudience



[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5745 Fix QA false negatives

2020-02-27 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 3b64a23  PHOENIX-5745 Fix QA false negatives
3b64a23 is described below

commit 3b64a23da345b77db2447b86f667196f3a0dd1d9
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 25 14:41:34 2020 -0800

PHOENIX-5745 Fix QA false negatives
---
 dev/test-patch.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index 8e22fc4..090bf69 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -1090,8 +1090,8 @@ checkJavacWarnings
 (( RESULT = RESULT + $? ))
 # checkProtocErrors
 # (( RESULT = RESULT + $? ))
-checkJavadocWarnings
-(( RESULT = RESULT + $? ))
+#checkJavadocWarnings
+#(( RESULT = RESULT + $? ))
 # checkCheckstyleErrors
 # (( RESULT = RESULT + $? ))
 checkInterfaceAudience



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5745 Fix QA false negatives

2020-02-27 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 5d9e99c  PHOENIX-5745 Fix QA false negatives
5d9e99c is described below

commit 5d9e99cb111e09c53bb3ef041ff3f6de7a9a7309
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 25 14:41:34 2020 -0800

PHOENIX-5745 Fix QA false negatives
---
 dev/test-patch.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index 8e22fc4..090bf69 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -1090,8 +1090,8 @@ checkJavacWarnings
 (( RESULT = RESULT + $? ))
 # checkProtocErrors
 # (( RESULT = RESULT + $? ))
-checkJavadocWarnings
-(( RESULT = RESULT + $? ))
+#checkJavadocWarnings
+#(( RESULT = RESULT + $? ))
 # checkCheckstyleErrors
 # (( RESULT = RESULT + $? ))
 checkInterfaceAudience



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5745 Fix QA false negatives

2020-02-27 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new a690dba  PHOENIX-5745 Fix QA false negatives
a690dba is described below

commit a690dbae3bedd4e36962746beb4467740ef37913
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 25 14:41:34 2020 -0800

PHOENIX-5745 Fix QA false negatives
---
 dev/test-patch.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index 8e22fc4..090bf69 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -1090,8 +1090,8 @@ checkJavacWarnings
 (( RESULT = RESULT + $? ))
 # checkProtocErrors
 # (( RESULT = RESULT + $? ))
-checkJavadocWarnings
-(( RESULT = RESULT + $? ))
+#checkJavadocWarnings
+#(( RESULT = RESULT + $? ))
 # checkCheckstyleErrors
 # (( RESULT = RESULT + $? ))
 checkInterfaceAudience



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5745 Fix QA false negatives

2020-02-27 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 4fc6837  PHOENIX-5745 Fix QA false negatives
4fc6837 is described below

commit 4fc6837afde6fe1f5a0f762151ae7d50da6ffbc6
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 25 14:41:34 2020 -0800

PHOENIX-5745 Fix QA false negatives
---
 dev/test-patch.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index 8e22fc4..090bf69 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -1090,8 +1090,8 @@ checkJavacWarnings
 (( RESULT = RESULT + $? ))
 # checkProtocErrors
 # (( RESULT = RESULT + $? ))
-checkJavadocWarnings
-(( RESULT = RESULT + $? ))
+#checkJavadocWarnings
+#(( RESULT = RESULT + $? ))
 # checkCheckstyleErrors
 # (( RESULT = RESULT + $? ))
 checkInterfaceAudience



[phoenix] branch master updated: PHOENIX-5745 Fix QA false negatives

2020-02-27 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 4c2f481  PHOENIX-5745 Fix QA false negatives
4c2f481 is described below

commit 4c2f481f82bb1bcf5f62d009dd20854ef35b13a8
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 25 14:41:34 2020 -0800

PHOENIX-5745 Fix QA false negatives
---
 dev/test-patch.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index 8e22fc4..090bf69 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -1090,8 +1090,8 @@ checkJavacWarnings
 (( RESULT = RESULT + $? ))
 # checkProtocErrors
 # (( RESULT = RESULT + $? ))
-checkJavadocWarnings
-(( RESULT = RESULT + $? ))
+#checkJavadocWarnings
+#(( RESULT = RESULT + $? ))
 # checkCheckstyleErrors
 # (( RESULT = RESULT + $? ))
 checkInterfaceAudience



[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new ef0290a  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
ef0290a is described below

commit ef0290a7a03cf202dc6cf5972b5bbdc21333807c
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
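
The pattern $GREP -c -i -e '+ \+@Test' above is basic-regex grep for a
literal '+', one or more spaces, then '@Test' -- that is, a line the patch
adds which carries a test annotation. The same check expressed in Java, as an
illustration of what the script counts:

    import java.util.regex.Pattern;

    public class CountAddedTests {
        public static void main(String[] args) {
            Pattern added = Pattern.compile("\\+ +@Test", Pattern.CASE_INSENSITIVE);
            String[] patchLines = {
                "+    @Test",           // newly added test -> counted
                "     @Test",           // unchanged context line -> not counted
                "+    public void x()"  // added non-test line -> not counted
            };
            long count = java.util.Arrays.stream(patchLines)
                    .filter(l -> added.matcher(l).find())
                    .count();
            System.out.println(count); // prints 1
        }
    }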
 



[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 668e5ba  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
668e5ba is described below

commit 668e5ba2acac95ee6ebe05eea8e6aaee41872bed
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
 



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new dfec37e  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
dfec37e is described below

commit dfec37e1a3dbca5548f425c4b904ab2e77e92aea
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
 



[phoenix] branch 4.15-HBase-1.3 updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.3 by this push:
 new de536b6  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
de536b6 is described below

commit de536b6b2e3414fd5bc597552a70c21eb2b62769
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
 



[phoenix] branch 4.15-HBase-1.5 updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.5 by this push:
 new 4c7124a  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
4c7124a is described below

commit 4c7124af8e1cd0ccebfeb58fa621b01c2569baa9
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
 



[phoenix] branch 4.15-HBase-1.4 updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.4 by this push:
 new 1d1b119  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
1d1b119 is described below

commit 1d1b119d273c75c15c5bcab531e83b091184455d
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
 



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 081e413  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
081e413 is described below

commit 081e41317b2482e885c95a8f504faec026eff87d
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
 



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new f75cb46  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
f75cb46 is described below

commit f75cb4641641c962a536ddc758722e0ac3b4d980
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
 



[phoenix] branch master updated: PHOENIX-5737 Hadoop QA run says no tests even though there are added IT tests

2020-02-24 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new e1aee8e  PHOENIX-5737 Hadoop QA run says no tests even though there 
are added IT tests
e1aee8e is described below

commit e1aee8e5d3fe335450db583f01cf91af1e6c5c7e
Author: Sandeep Guggilam 
AuthorDate: Thu Feb 20 19:18:03 2020 -0800

PHOENIX-5737 Hadoop QA run says no tests even though there are added IT 
tests

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/test-patch.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/dev/test-patch.sh b/dev/test-patch.sh
index a1161fb..8e22fc4 100755
--- a/dev/test-patch.sh
+++ b/dev/test-patch.sh
@@ -384,7 +384,7 @@ checkTests () {
   echo "=="
   echo ""
   echo ""
-  testReferences=`$GREP -c -i '/test' $PATCH_DIR/patch`
+  testReferences=`$GREP -c -i '/test\|/it' $PATCH_DIR/patch`
   echo "There appear to be $testReferences test files referenced in the patch."
   if [[ $testReferences == 0 ]] ; then
 if [[ $JENKINS == "true" ]] ; then
@@ -414,9 +414,11 @@ checkTests () {
 Also please list what manual steps were performed to 
verify this patch."
 return 1
   fi
+  testsAdded=`$GREP -c -i -e '+ \+@Test' $PATCH_DIR/patch`
+  echo "There appear to be $testsAdded new tests added in the patch."
   JIRA_COMMENT="$JIRA_COMMENT
 
-{color:green}+1 tests included{color}.  The patch appears to include 
$testReferences new or modified tests."
+{color:green}+1 tests included{color}.  The patch appears to include 
$testsAdded new or modified tests."
   return 0
 }
 



[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 3a5926d  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
3a5926d is described below

commit 3a5926d74ff2d3d645a65fe1790a4bd6389fee28
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Feb 14 10:45:15 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 70 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |  2 +-
 2 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index a6e066b..61c809b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -47,6 +47,7 @@ import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.ColumnAlreadyExistsException;
+import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.TableNotFoundException;
 import org.apache.phoenix.util.PhoenixRuntime;
@@ -559,7 +560,74 @@ public class ViewIT extends BaseViewIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String schema1 = "S_" + generateUniqueName();
+String schema2 = "S_" + generateUniqueName();
+String fullTableName = SchemaUtil.getTableName(schema1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(schema2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(schema2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIn
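
The truncated assertions above inspect PTable.getIndexes() on each view. As a
small illustration of that client-side check, using only APIs that appear in
the diff (PhoenixRuntime.getTable and PTable.getIndexes), with a placeholder
connection URL and view name:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.apache.phoenix.schema.PTable;
    import org.apache.phoenix.util.PhoenixRuntime;

    public class ViewIndexInspectionSketch {
        public static void main(String[] args) throws Exception {
            // jdbc:phoenix:localhost and S2.MY_VIEW are hypothetical here.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                PTable view = PhoenixRuntime.getTable(conn, "S2.MY_VIEW");
                // Per the test comment: a first-level view inherits the base
                // table's index, while its child view does not -- and, with
                // this fix, creating that grand-child view no longer fails.
                System.out.println("inherited indexes: " + view.getIndexes().size());
            }
        }
    }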

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 3e8ddde  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
3e8ddde is described below

commit 3e8ddde696fb5b7ca006857e01ec7d471c163e05
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Feb 14 10:45:15 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 70 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |  2 +-
 2 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index a6e066b..61c809b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -47,6 +47,7 @@ import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.ColumnAlreadyExistsException;
+import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.TableNotFoundException;
 import org.apache.phoenix.util.PhoenixRuntime;
@@ -559,7 +560,74 @@ public class ViewIT extends BaseViewIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String schema1 = "S_" + generateUniqueName();
+String schema2 = "S_" + generateUniqueName();
+String fullTableName = SchemaUtil.getTableName(schema1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(schema2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(schema2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIndexes().size());

[phoenix] branch 4.15-HBase-1.3 updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.3 by this push:
 new 16f6a46  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
16f6a46 is described below

commit 16f6a46c19bc9b131697cc5abec73faffc62af18
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 11 11:31:06 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 67 +-
 .../java/org/apache/phoenix/util/ViewUtil.java |  2 +-
 2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index aeec7f0..c962740 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -410,7 +410,72 @@ public class ViewIT extends SplitSystemCatalogIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String fullTableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIndexes().size());
+table = PhoenixRuntime.getTable(conn, fullChildViewName);
+assertEquals(0, table.getIndexes().size());
+
+ResultSet rs =
+conn.createStatement().executeQuery("select count(*) from 
" + fullTableName);
+assertTrue(rs.next());
+assertEquals(3, rs.getInt(1));
+
+rs = conn.createStatement().executeQuery("select count(*) from " + 
fullViewName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(1));

[phoenix] branch 4.15-HBase-1.4 updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.4 by this push:
 new 23ae709  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
23ae709 is described below

commit 23ae7096d0dd49215b3281e4c531b529ecaef8c5
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 11 11:31:06 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 67 +-
 .../java/org/apache/phoenix/util/ViewUtil.java |  2 +-
 2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index aeec7f0..c962740 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -410,7 +410,72 @@ public class ViewIT extends SplitSystemCatalogIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String fullTableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIndexes().size());
+table = PhoenixRuntime.getTable(conn, fullChildViewName);
+assertEquals(0, table.getIndexes().size());
+
+ResultSet rs =
+conn.createStatement().executeQuery("select count(*) from 
" + fullTableName);
+assertTrue(rs.next());
+assertEquals(3, rs.getInt(1));
+
+rs = conn.createStatement().executeQuery("select count(*) from " + 
fullViewName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(1));

[phoenix] branch 4.15-HBase-1.5 updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.5 by this push:
 new 8fb692c  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
8fb692c is described below

commit 8fb692c04cffaedb67d4588f8eb63312743651d6
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 11 11:31:06 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 67 +-
 .../java/org/apache/phoenix/util/ViewUtil.java |  2 +-
 2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index aeec7f0..c962740 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -410,7 +410,72 @@ public class ViewIT extends SplitSystemCatalogIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String fullTableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIndexes().size());
+table = PhoenixRuntime.getTable(conn, fullChildViewName);
+assertEquals(0, table.getIndexes().size());
+
+ResultSet rs =
+conn.createStatement().executeQuery("select count(*) from 
" + fullTableName);
+assertTrue(rs.next());
+assertEquals(3, rs.getInt(1));
+
+rs = conn.createStatement().executeQuery("select count(*) from " + 
fullViewName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(1));

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 9792ed0  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
9792ed0 is described below

commit 9792ed0943dfaf1ba4bc1f518f3ab996476e9515
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 11 11:31:06 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 67 +-
 .../java/org/apache/phoenix/util/ViewUtil.java |  2 +-
 2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index aeec7f0..c962740 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -410,7 +410,72 @@ public class ViewIT extends SplitSystemCatalogIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String fullTableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIndexes().size());
+table = PhoenixRuntime.getTable(conn, fullChildViewName);
+assertEquals(0, table.getIndexes().size());
+
+ResultSet rs =
+conn.createStatement().executeQuery("select count(*) from 
" + fullTableName);
+assertTrue(rs.next());
+assertEquals(3, rs.getInt(1));
+
+rs = conn.createStatement().executeQuery("select count(*) from " + 
fullViewName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(1));
+

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new c891ca9  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
c891ca9 is described below

commit c891ca97acd3bdc4534234e9a944e60fdc1fd0d0
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 11 11:31:06 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 67 +-
 .../java/org/apache/phoenix/util/ViewUtil.java |  2 +-
 2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index aeec7f0..c962740 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -410,7 +410,72 @@ public class ViewIT extends SplitSystemCatalogIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String fullTableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIndexes().size());
+table = PhoenixRuntime.getTable(conn, fullChildViewName);
+assertEquals(0, table.getIndexes().size());
+
+ResultSet rs =
+conn.createStatement().executeQuery("select count(*) from 
" + fullTableName);
+assertTrue(rs.next());
+assertEquals(3, rs.getInt(1));
+
+rs = conn.createStatement().executeQuery("select count(*) from " + 
fullViewName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(1));
+

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 5064ff0  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
5064ff0 is described below

commit 5064ff0d1924c49c8fe3fd84fb15d55360602a4d
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 11 11:31:06 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 67 +-
 .../java/org/apache/phoenix/util/ViewUtil.java |  2 +-
 2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index aeec7f0..c962740 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -410,7 +410,72 @@ public class ViewIT extends SplitSystemCatalogIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String fullTableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIndexes().size());
+table = PhoenixRuntime.getTable(conn, fullChildViewName);
+assertEquals(0, table.getIndexes().size());
+
+ResultSet rs =
+conn.createStatement().executeQuery("select count(*) from 
" + fullTableName);
+assertTrue(rs.next());
+assertEquals(3, rs.getInt(1));
+
+rs = conn.createStatement().executeQuery("select count(*) from " + 
fullViewName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(1));
+

[phoenix] branch master updated: PHOENIX-5529 Creating a grand-child view on a table with an index fails

2020-02-14 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new cf51c60  PHOENIX-5529 Creating a grand-child view on a table with an 
index fails
cf51c60 is described below

commit cf51c60bbf2e06e22d05af5a8e66d2ce43ca21f2
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Feb 11 11:31:06 2020 -0800

PHOENIX-5529 Creating a grand-child view on a table with an index fails
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java | 67 +-
 .../java/org/apache/phoenix/util/ViewUtil.java |  2 +-
 2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 1ee3dd7..bea5aa3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -420,7 +420,72 @@ public class ViewIT extends SplitSystemCatalogIT {
 "CLIENT PARALLEL 1-WAY SKIP SCAN ON 4 KEYS OVER " + 
fullIndexName1 + " [1,100] - [2,109]\n" + 
 "SERVER FILTER BY (\"S2\" = 'bas' AND \"S1\" = 
'foo')", queryPlan);
 }
-}
+}
+
+@Test
+public void testCreateChildViewWithBaseTableLocalIndex() throws Exception {
+testCreateChildViewWithBaseTableIndex(true);
+}
+
+@Test
+public void testCreateChildViewWithBaseTableGlobalIndex() throws Exception 
{
+testCreateChildViewWithBaseTableIndex(false);
+}
+
+public void testCreateChildViewWithBaseTableIndex(boolean localIndex) 
throws Exception {
+String fullTableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String fullViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+String indexName = "I_" + generateUniqueName();
+String fullChildViewName = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String sql =
+"CREATE TABLE " + fullTableName
++ " (ID INTEGER NOT NULL PRIMARY KEY, HOST 
VARCHAR(10), FLAG BOOLEAN)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullViewName
++ " (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, 
COL4 INTEGER) AS SELECT * FROM "
++ fullTableName + " WHERE ID > 5";
+conn.createStatement().execute(sql);
+sql =
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON "
++ fullTableName + "(HOST)";
+conn.createStatement().execute(sql);
+sql =
+"CREATE VIEW " + fullChildViewName + " AS SELECT * FROM " 
+ fullViewName
++ " WHERE COL1 > 2";
+conn.createStatement().execute(sql);
+// Sanity upserts in baseTable, view, child view
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(1, 'host1', TRUE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(5, 'host5', FALSE)");
+conn.createStatement()
+.executeUpdate("upsert into " + fullTableName + " values 
(7, 'host7', TRUE)");
+conn.commit();
+// View is not updateable
+try {
+conn.createStatement().executeUpdate("upsert into " + 
fullViewName
++ " (ID, HOST, FLAG, COL1) values (7, 'host7', TRUE, 
1)");
+fail();
+} catch (Exception e) {
+}
+// Check view inherits index, but child view doesn't
+PTable table = PhoenixRuntime.getTable(conn, fullViewName);
+assertEquals(1, table.getIndexes().size());
+table = PhoenixRuntime.getTable(conn, fullChildViewName);
+assertEquals(0, table.getIndexes().size());
+
+ResultSet rs =
+conn.createStatement().executeQuery("select count(*) from 
" + fullTableName);
+assertTrue(rs.next());
+assertEquals(3, rs.getInt(1));
+
+rs = conn.createStatement().executeQuery("select count(*) from " + 
fullViewName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(1));
+}
+
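
The scenario exercised by this test can be distilled into the following DDL sequence. This is a minimal sketch with illustrative names (base_table, child_view, grandchild_view, and base_idx are not taken from the commit), assuming a standard Phoenix SQL client; before this fix, the final CREATE VIEW statement failed whenever the base table carried an index.

-- Minimal repro sketch for PHOENIX-5529; all names are illustrative.
CREATE TABLE base_table (
    id INTEGER NOT NULL PRIMARY KEY,
    host VARCHAR(10),
    flag BOOLEAN
);

-- Child view over the base table, adding its own column.
CREATE VIEW child_view (col1 INTEGER) AS
    SELECT * FROM base_table WHERE id > 5;

-- A LOCAL index reproduces the failure as well (hence the two @Test variants above).
CREATE INDEX base_idx ON base_table (host);

-- Before the fix this statement failed; after it, creation succeeds and, as the
-- test asserts, the child view inherits base_idx while the grand-child view does not.
CREATE VIEW grandchild_view AS
    SELECT * FROM child_view WHERE col1 > 2;
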

[phoenix] branch 4.15-HBase-1.3 updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.3 by this push:
 new 4102a53  PHOENIX-5704 Covered column updates are not generated for 
previously deleted data table row
4102a53 is described below

commit 4102a53b168288d2677d03b9f3706cec59a144c7
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jan 31 17:43:42 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously 
deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 53 --
 .../filter/ApplyAndFilterDeletesFilter.java|  9 
 .../index/scanner/FilteredKeyValueScanner.java |  8 +++-
 3 files changed, 55 insertions(+), 15 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index 8d42075..379ad86 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -766,11 +766,11 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   try (Connection conn = getConnection()) {
 conn.createStatement().execute(
 "create table " + fullTableName + " (id integer primary key, "
-+ (multiCf ? columnFamily1 : "") + "f float, "
-+ (multiCf ? columnFamily2 : "") + "s varchar)" + 
tableDDLOptions);
++ (multiCf ? columnFamily1 + "." : "") + "f float, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)" 
+ tableDDLOptions);
 conn.createStatement().execute(
 "create " + (localIndex ? "LOCAL" : "") + " index " + 
indexName + " on " + fullTableName + " ("
-+ (multiCf ? columnFamily1 : "") + "f) include 
("+(multiCf ? columnFamily2 : "") +"s)");
++ (multiCf ? columnFamily1 + "." : "") + "f) include 
("+(multiCf ? columnFamily2 + "." : "") +"s)");
 conn.createStatement().execute(
 "upsert into " + fullTableName + " values (1, 0.5, 'foo')");
   conn.commit();
@@ -785,9 +785,52 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   assertEquals(0.5F, rs.getFloat(1), 0.0);
   assertEquals("foo", rs.getString(3));
   }
-  }
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
 
-  /**
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) 
throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer 
primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f 
varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s 
varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index 
" + indexName
++ " on " + fullTableName + " (" + (multiCf ? 
columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." 
: "") + "s)");
+conn.createStatement()
+.execute("upsert into " + fullTableName + " values (1, 
'foo', 'bar')");
+conn.commit();
+conn.createStatement().execute("delete from  " + fullTableName + " where id = 1");

[phoenix] branch 4.15-HBase-1.5 updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.5 by this push:
 new 426078e  PHOENIX-5704 Covered column updates are not generated for 
previously deleted data table row
426078e is described below

commit 426078ea9794f0f5a5037fc579bbf1c27c02409e
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Feb 3 10:12:00 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously 
deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 53 --
 1 file changed, 48 insertions(+), 5 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index f17451d..f61f02c 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -767,11 +767,11 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   try (Connection conn = getConnection()) {
 conn.createStatement().execute(
 "create table " + fullTableName + " (id integer primary key, "
-+ (multiCf ? columnFamily1 : "") + "f float, "
-+ (multiCf ? columnFamily2 : "") + "s varchar)" + 
tableDDLOptions);
++ (multiCf ? columnFamily1 + "." : "") + "f float, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)" 
+ tableDDLOptions);
 conn.createStatement().execute(
 "create " + (localIndex ? "LOCAL" : "") + " index " + 
indexName + " on " + fullTableName + " ("
-+ (multiCf ? columnFamily1 : "") + "f) include 
("+(multiCf ? columnFamily2 : "") +"s)");
++ (multiCf ? columnFamily1 + "." : "") + "f) include 
("+(multiCf ? columnFamily2 + "." : "") +"s)");
 conn.createStatement().execute(
 "upsert into " + fullTableName + " values (1, 0.5, 'foo')");
   conn.commit();
@@ -786,9 +786,52 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   assertEquals(0.5F, rs.getFloat(1), 0.0);
   assertEquals("foo", rs.getString(3));
   }
-  }
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
 
-  /**
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) 
throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer 
primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f 
varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s 
varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index 
" + indexName
++ " on " + fullTableName + " (" + (multiCf ? 
columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." 
: "") + "s)");
+conn.createStatement()
+.execute("upsert into " + fullTableName + " values (1, 
'foo', 'bar')");
+conn.commit();
+conn.createStatement().execute("delete from  " + fullTableName + " 
where id = 1");
+conn.commit();
+conn.createStatement()
+.execute("upsert into  " + fullTableName + " values (1, 
null, 'bar')");
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("select * from 
" + fullIndexName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(2));
+assertEquals(null, rs.getString(1));
+assertEquals("bar", rs.getString(3));
+}
+}
+
+/**
* PHOENIX-4988
* Test updating only a non-indexed column after two successive deletes to 
an indexed row
*/
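
The covered-column scenario this test guards against can be summarized by the SQL sequence below, a minimal sketch with illustrative names (t and idx are not taken from the commit). A row is deleted and then re-upserted with a NULL value for the indexed column; before this fix, the update for the covered (INCLUDE) column of the re-upserted row was not generated, so the index row was missing it.

-- Minimal repro sketch for PHOENIX-5704; all names are illustrative.
CREATE TABLE t (id INTEGER PRIMARY KEY, f VARCHAR, s VARCHAR);
CREATE INDEX idx ON t (f) INCLUDE (s);

UPSERT INTO t VALUES (1, 'foo', 'bar');
DELETE FROM t WHERE id = 1;
-- Re-upsert the deleted row with a NULL indexed column.
UPSERT INTO t VALUES (1, NULL, 'bar');

-- Before the fix the covered column s was absent from the index row;
-- the test expects f = NULL, id = 1, s = 'bar' here.
SELECT * FROM idx;
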



[phoenix] branch 4.15-HBase-1.4 updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.4 by this push:
 new 329acc8  PHOENIX-5704 Covered column updates are not generated for 
previously deleted data table row
329acc8 is described below

commit 329acc8b3cfd958456464e2faddaf50e1331b45c
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Feb 3 10:12:00 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously 
deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 53 --
 1 file changed, 48 insertions(+), 5 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index f17451d..f61f02c 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -767,11 +767,11 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   try (Connection conn = getConnection()) {
 conn.createStatement().execute(
 "create table " + fullTableName + " (id integer primary key, "
-+ (multiCf ? columnFamily1 : "") + "f float, "
-+ (multiCf ? columnFamily2 : "") + "s varchar)" + 
tableDDLOptions);
++ (multiCf ? columnFamily1 + "." : "") + "f float, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)" 
+ tableDDLOptions);
 conn.createStatement().execute(
 "create " + (localIndex ? "LOCAL" : "") + " index " + 
indexName + " on " + fullTableName + " ("
-+ (multiCf ? columnFamily1 : "") + "f) include 
("+(multiCf ? columnFamily2 : "") +"s)");
++ (multiCf ? columnFamily1 + "." : "") + "f) include 
("+(multiCf ? columnFamily2 + "." : "") +"s)");
 conn.createStatement().execute(
 "upsert into " + fullTableName + " values (1, 0.5, 'foo')");
   conn.commit();
@@ -786,9 +786,52 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   assertEquals(0.5F, rs.getFloat(1), 0.0);
   assertEquals("foo", rs.getString(3));
   }
-  }
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
 
-  /**
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) 
throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer 
primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f 
varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s 
varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index 
" + indexName
++ " on " + fullTableName + " (" + (multiCf ? 
columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." 
: "") + "s)");
+conn.createStatement()
+.execute("upsert into " + fullTableName + " values (1, 
'foo', 'bar')");
+conn.commit();
+conn.createStatement().execute("delete from  " + fullTableName + " 
where id = 1");
+conn.commit();
+conn.createStatement()
+.execute("upsert into  " + fullTableName + " values (1, 
null, 'bar')");
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("select * from 
" + fullIndexName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(2));
+assertEquals(null, rs.getString(1));
+assertEquals("bar", rs.getString(3));
+}
+}
+
+/**
* PHOENIX-4988
* Test updating only a non-indexed column after two successive deletes to 
an indexed row
*/



[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new ff32ea1  PHOENIX-5704 Covered column updates are not generated for 
previously deleted data table row
ff32ea1 is described below

commit ff32ea1959763df72e81182a6c8cb5f4fe83ac52
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Feb 3 16:16:57 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously 
deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 45 +-
 .../filter/ApplyAndFilterDeletesFilter.java|  9 -
 .../index/scanner/FilteredKeyValueScanner.java |  8 +++-
 3 files changed, 51 insertions(+), 11 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index 9a9fa91..e47fe4c 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -908,7 +908,50 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   store.triggerMajorCompaction();
   store.compactRecentForTestingAssumingDefaultPolicy(1);
   }
-  }
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) 
throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer 
primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f 
varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s 
varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index 
" + indexName
++ " on " + fullTableName + " (" + (multiCf ? 
columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." 
: "") + "s)");
+conn.createStatement()
+.execute("upsert into " + fullTableName + " values (1, 
'foo', 'bar')");
+conn.commit();
+conn.createStatement().execute("delete from  " + fullTableName + " 
where id = 1");
+conn.commit();
+conn.createStatement()
+.execute("upsert into  " + fullTableName + " values (1, 
null, 'bar')");
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("select * from 
" + fullIndexName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(2));
+assertEquals(null, rs.getString(1));
+assertEquals("bar", rs.getString(3));
+}
+}
 
   /**
* PHOENIX-4988
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/filter/ApplyAndFilterDeletesFilter.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/filter/ApplyAndFilterDeletesFilter.java
index b5c3414..66e2818 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/filter/ApplyAndFilterDeletesFilter.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/filter/ApplyAndFilterDeletesFilter.java
@@ -53,7 +53,6 @@ import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
  */
 public class ApplyAndFilterDeletesFilter extends FilterBase {
 
-  private boolean done = false;
   List families;
   private final DeleteTracker coveringDelete = new DeleteTracker();
   private Hinter currentHint;
@@ -95,7 +94,6 @@ public class ApplyAndFilterDeletesFilter extends FilterBase {
@Override

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 0975e11  PHOENIX-5704 Covered column updates are not generated for 
previously deleted data table row
0975e11 is described below

commit 0975e11e246a7d992261aee6777ac03448554559
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Feb 3 12:05:19 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously 
deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 53 --
 1 file changed, 48 insertions(+), 5 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index a2635d7..4ebeb7b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -930,11 +930,11 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   try (Connection conn = getConnection()) {
 conn.createStatement().execute(
 "create table " + fullTableName + " (id integer primary key, "
-+ (multiCf ? columnFamily1 : "") + "f float, "
-+ (multiCf ? columnFamily2 : "") + "s varchar)" + 
tableDDLOptions);
++ (multiCf ? columnFamily1 + "." : "") + "f float, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)" 
+ tableDDLOptions);
 conn.createStatement().execute(
 "create index " + indexName + " on " + fullTableName + " ("
-+ (multiCf ? columnFamily1 : "") + "f) include 
("+(multiCf ? columnFamily2 : "") +"s)");
++ (multiCf ? columnFamily1 + "." : "") + "f) include 
("+(multiCf ? columnFamily2 + "." : "") +"s)");
 conn.createStatement().execute(
 "upsert into " + fullTableName + " values (1, 0.5, 'foo')");
   conn.commit();
@@ -949,9 +949,52 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   assertEquals(0.5F, rs.getFloat(1), 0.0);
   assertEquals("foo", rs.getString(3));
   }
-  }
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
 
-  /**
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) 
throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer 
primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f 
varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s 
varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index 
" + indexName
++ " on " + fullTableName + " (" + (multiCf ? 
columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." 
: "") + "s)");
+conn.createStatement()
+.execute("upsert into " + fullTableName + " values (1, 
'foo', 'bar')");
+conn.commit();
+conn.createStatement().execute("delete from  " + fullTableName + " 
where id = 1");
+conn.commit();
+conn.createStatement()
+.execute("upsert into  " + fullTableName + " values (1, 
null, 'bar')");
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("select * from 
" + fullIndexName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(2));
+assertEquals(null, rs.getString(1));
+assertEquals("bar", rs.getString(3));
+}
+}
+
+/**
* PHOENIX-4988
* Test updating only a non-indexed column after two successive deletes to 
an indexed row
*/



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 06533eb  PHOENIX-5704 Covered column updates are not generated for 
previously deleted data table row
06533eb is described below

commit 06533ebf24bd439af175fc76021c4c415f432851
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jan 31 17:43:42 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously 
deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 53 --
 .../filter/ApplyAndFilterDeletesFilter.java|  9 
 .../index/scanner/FilteredKeyValueScanner.java |  8 +++-
 3 files changed, 55 insertions(+), 15 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index 8d42075..379ad86 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -766,11 +766,11 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   try (Connection conn = getConnection()) {
 conn.createStatement().execute(
 "create table " + fullTableName + " (id integer primary key, "
-+ (multiCf ? columnFamily1 : "") + "f float, "
-+ (multiCf ? columnFamily2 : "") + "s varchar)" + 
tableDDLOptions);
++ (multiCf ? columnFamily1 + "." : "") + "f float, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)" 
+ tableDDLOptions);
 conn.createStatement().execute(
 "create " + (localIndex ? "LOCAL" : "") + " index " + 
indexName + " on " + fullTableName + " ("
-+ (multiCf ? columnFamily1 : "") + "f) include 
("+(multiCf ? columnFamily2 : "") +"s)");
++ (multiCf ? columnFamily1 + "." : "") + "f) include 
("+(multiCf ? columnFamily2 + "." : "") +"s)");
 conn.createStatement().execute(
 "upsert into " + fullTableName + " values (1, 0.5, 'foo')");
   conn.commit();
@@ -785,9 +785,52 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   assertEquals(0.5F, rs.getFloat(1), 0.0);
   assertEquals("foo", rs.getString(3));
   }
-  }
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
 
-  /**
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws 
Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) 
throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer 
primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f 
varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s 
varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index 
" + indexName
++ " on " + fullTableName + " (" + (multiCf ? 
columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." 
: "") + "s)");
+conn.createStatement()
+.execute("upsert into " + fullTableName + " values (1, 
'foo', 'bar')");
+conn.commit();
+conn.createStatement().execute("delete from  " + fullTableName + " where id = 1");

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 7f86943  PHOENIX-5704 Covered column updates are not generated for 
previously deleted data table row
7f86943 is described below

commit 7f86943e5eaba424c34efddd2847d61eef6e38c6
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Feb 3 10:12:00 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously 
deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 53 --
 1 file changed, 48 insertions(+), 5 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index f17451d..f61f02c 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -767,11 +767,11 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
   try (Connection conn = getConnection()) {
 conn.createStatement().execute(
 "create table " + fullTableName + " (id integer primary key, "
-+ (multiCf ? columnFamily1 : "") + "f float, "
-+ (multiCf ? columnFamily2 : "") + "s varchar)" + 
tableDDLOptions);
++ (multiCf ? columnFamily1 + "." : "") + "f float, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)" 
+ tableDDLOptions);
 conn.createStatement().execute(
 "create " + (localIndex ? "LOCAL" : "") + " index " + 
indexName + " on " + fullTableName + " ("
-+ (multiCf ? columnFamily1 : "") + "f) include 
("+(multiCf ? columnFamily2 : "") +"s)");
++ (multiCf ? columnFamily1 + "." : "") + "f) include 
("+(multiCf ? columnFamily2 + "." : "") +"s)");
 conn.createStatement().execute(
 "upsert into " + fullTableName + " values (1, 0.5, 'foo')");
   conn.commit();
@@ -786,9 +786,52 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
  assertEquals(0.5F, rs.getFloat(1), 0.0);
  assertEquals("foo", rs.getString(3));
  }
-  }
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
 
-  /**
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index " + indexName
++ " on " + fullTableName + " (" + (multiCf ? columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." : "") + "s)");
+conn.createStatement()
+.execute("upsert into " + fullTableName + " values (1, 'foo', 'bar')");
+conn.commit();
+conn.createStatement().execute("delete from  " + fullTableName + " where id = 1");
+conn.commit();
+conn.createStatement()
+.execute("upsert into  " + fullTableName + " values (1, null, 'bar')");
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("select * from " + fullIndexName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(2));
+assertEquals(null, rs.getString(1));
+assertEquals("bar", rs.getString(3));
+}
+}
+
+/**
* PHOENIX-4988
* Test updating only a non-indexed column after two successive deletes to an indexed row
*/
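
The test above captures the PHOENIX-5704 scenario end to end: delete an indexed row, upsert the same row again with a null value in the indexed column, and confirm the covered column still reaches the index row. For readers who want to reproduce it outside the test harness, here is a minimal standalone sketch of the same SQL sequence; the JDBC URL and the plain table/index names are illustrative assumptions, and the harness options (tableDDLOptions, localIndex, multiCf) are dropped.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class CoveredColumnRepro {
        public static void main(String[] args) throws Exception {
            // Assumed Phoenix JDBC URL; substitute your own quorum.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.createStatement().execute(
                        "create table T (id integer primary key, f varchar, s varchar)");
                conn.createStatement().execute("create index I on T (f) include (s)");
                conn.createStatement().execute("upsert into T values (1, 'foo', 'bar')");
                conn.commit();
                conn.createStatement().execute("delete from T where id = 1");
                conn.commit();
                // Re-upsert the deleted row with a null indexed value; before the fix
                // the covered column 's' could be missing from the rebuilt index row.
                conn.createStatement().execute("upsert into T values (1, null, 'bar')");
                conn.commit();
                try (ResultSet rs = conn.createStatement().executeQuery("select * from I")) {
                    while (rs.next()) {
                        // Index row layout: indexed column, then pk, then covered column.
                        System.out.println(rs.getString(1) + " | " + rs.getInt(2) + " | " + rs.getString(3));
                    }
                }
            }
        }
    }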



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 0e4f979  PHOENIX-5704 Covered column updates are not generated for previously deleted data table row
0e4f979 is described below

commit 0e4f9790d6380b8d57c55d3cee75dea5617d35f2
Author: Abhishek Singh Chouhan 
AuthorDate: Mon Feb 3 10:12:00 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 53 --
 1 file changed, 48 insertions(+), 5 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index f17451d..f61f02c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -767,11 +767,11 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
   try (Connection conn = getConnection()) {
 conn.createStatement().execute(
 "create table " + fullTableName + " (id integer primary key, "
-+ (multiCf ? columnFamily1 : "") + "f float, "
-+ (multiCf ? columnFamily2 : "") + "s varchar)" + tableDDLOptions);
++ (multiCf ? columnFamily1 + "." : "") + "f float, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)" + tableDDLOptions);
 conn.createStatement().execute(
 "create " + (localIndex ? "LOCAL" : "") + " index " + indexName + " on " + fullTableName + " ("
-+ (multiCf ? columnFamily1 : "") + "f) include ("+(multiCf ? columnFamily2 : "") +"s)");
++ (multiCf ? columnFamily1 + "." : "") + "f) include ("+(multiCf ? columnFamily2 + "." : "") +"s)");
 conn.createStatement().execute(
 "upsert into " + fullTableName + " values (1, 0.5, 'foo')");
   conn.commit();
@@ -786,9 +786,52 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
  assertEquals(0.5F, rs.getFloat(1), 0.0);
  assertEquals("foo", rs.getString(3));
  }
-  }
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
 
-  /**
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index " + indexName
++ " on " + fullTableName + " (" + (multiCf ? columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." : "") + "s)");
+conn.createStatement()
+.execute("upsert into " + fullTableName + " values (1, 'foo', 'bar')");
+conn.commit();
+conn.createStatement().execute("delete from  " + fullTableName + " where id = 1");
+conn.commit();
+conn.createStatement()
+.execute("upsert into  " + fullTableName + " values (1, null, 'bar')");
+conn.commit();
+ResultSet rs = conn.createStatement().executeQuery("select * from " + fullIndexName);
+assertTrue(rs.next());
+assertEquals(1, rs.getInt(2));
+assertEquals(null, rs.getString(1));
+assertEquals("bar", rs.getString(3));
+}
+}
+
+/**
* PHOENIX-4988
* Test updating only a non-indexed column after two successive deletes to an indexed row
*/



[phoenix] branch master updated: PHOENIX-5704 Covered column updates are not generated for previously deleted data table row

2020-02-03 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new c8e4990  PHOENIX-5704 Covered column updates are not generated for previously deleted data table row
c8e4990 is described below

commit c8e4990a7fcec1775e3bb252a8baec935762b826
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jan 31 18:32:06 2020 -0800

PHOENIX-5704 Covered column updates are not generated for previously deleted data table row
---
 .../phoenix/end2end/index/MutableIndexIT.java  | 51 +++---
 1 file changed, 46 insertions(+), 5 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index 0810aa3..23c1956 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -31,7 +31,6 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.List;
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.TableName;
@@ -665,7 +664,6 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
   }
   }
 
-
   @Test
  public void testUpsertingDeletedRowShouldGiveProperDataWithIndexes() throws Exception {
   testUpsertingDeletedRowShouldGiveProperDataWithIndexes(false);
@@ -686,11 +684,11 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
   try (Connection conn = getConnection()) {
 conn.createStatement().execute(
 "create table " + fullTableName + " (id integer primary key, "
-+ (multiCf ? columnFamily1 : "") + "f float, "
-+ (multiCf ? columnFamily2 : "") + "s varchar)" + tableDDLOptions);
++ (multiCf ? columnFamily1 + "." : "") + "f float, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)" + tableDDLOptions);
 conn.createStatement().execute(
 "create " + (localIndex ? "LOCAL" : "") + " index " + indexName + " on " + fullTableName + " ("
-+ (multiCf ? columnFamily1 : "") + "f) include ("+(multiCf ? columnFamily2 : "") +"s)");
++ (multiCf ? columnFamily1 + "." : "") + "f) include ("+(multiCf ? columnFamily2 + "." : "") +"s)");
 conn.createStatement().execute(
 "upsert into " + fullTableName + " values (1, 0.5, 'foo')");
   conn.commit();
@@ -707,6 +705,49 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
   } 
   }
 
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumn() throws Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(false);
+}
+
+@Test
+public void testUpsertingDeletedRowWithNullCoveredColumnMultiCfs() throws Exception {
+testUpsertingDeletedRowWithNullCoveredColumn(true);
+}
+
+public void testUpsertingDeletedRowWithNullCoveredColumn(boolean multiCf) throws Exception {
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String columnFamily1 = "cf1";
+String columnFamily2 = "cf2";
+String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+try (Connection conn = getConnection()) {
+conn.createStatement()
+.execute("create table " + fullTableName + " (id integer primary key, "
++ (multiCf ? columnFamily1 + "." : "") + "f varchar, "
++ (multiCf ? columnFamily2 + "." : "") + "s varchar)"
++ tableDDLOptions);
+conn.createStatement()
+.execute("create " + (localIndex ? "LOCAL" : "") + " index " + indexName
++ " on " + fullTableName + " (" + (multiCf ? columnFamily1 + "." : "")
++ "f) include (" + (multiCf ? columnFamily2 + "." : "") + "s)");
+conn.create

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 34d982d  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
34d982d is described below

commit 34d982db4db7de9bf1ab1f7c8050abe20837dc84
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new a78653d  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
a78653d is described below

commit a78653d8632402e311009fdb121928e2ee84c92d
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch 4.15-HBase-1.3 updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.3 by this push:
 new 4495b53  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
4495b53 is described below

commit 4495b53b2f7bcfcef635d9241977c20f155d9ae6
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch 4.15-HBase-1.4 updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.4 by this push:
 new 7fb7d24  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
7fb7d24 is described below

commit 7fb7d24714be5f059b3d35809b5fb030eb1658b9
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch 4.15-HBase-1.5 updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.5 by this push:
 new e7d6f1b  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
e7d6f1b is described below

commit e7d6f1b58e9e994c6687d10d2ba17b7f13a110f2
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 14ba32d  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
14ba32d is described below

commit 14ba32ddf9b747d7246e61975e01a275d4480abd
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new fe44018  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
fe44018 is described below

commit fe440185eeabc1a5ad9f45fbcd3238170a120846
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new efaa39b  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
efaa39b is described below

commit efaa39b75760bcd43ad1ce4fa22a17c4c5b32907
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch master updated: PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

2020-01-30 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 42bb52b  PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build
42bb52b is described below

commit 42bb52b4878d8d782a26605c854a4c9f0177e249
Author: Sandeep Guggilam 
AuthorDate: Wed Jan 29 21:46:03 2020 -0800

PHOENIX-5703 Add MAVEN_HOME toPATH in jenkins build

Signed-off-by: Abhishek Singh Chouhan 
---
 dev/jenkinsEnv.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/jenkinsEnv.sh b/dev/jenkinsEnv.sh
index d2e4f65..2717cb7 100755
--- a/dev/jenkinsEnv.sh
+++ b/dev/jenkinsEnv.sh
@@ -24,7 +24,7 @@ export FINDBUGS_HOME=/home/jenkins/tools/findbugs/latest
 export CLOVER_HOME=/home/jenkins/tools/clover/latest
 export MAVEN_HOME=/home/jenkins/tools/maven/latest
 
-export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:
+export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin
 export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData -XX:MaxPermSize=256m"}"
 
 ulimit -n



[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 3afe683  PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
3afe683 is described below

commit 3afe683f5af58fc1f7add3a053bb8db334f5a259
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Jan 15 22:50:20 2020 -0800

PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
---
 .../org/apache/phoenix/end2end/IndexToolIT.java| 71 +-
 .../coprocessor/IndexRebuildRegionScanner.java | 62 +++
 2 files changed, 118 insertions(+), 15 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 5187639..b3b1613 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
@@ -59,15 +60,15 @@ import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexImportDirectMapper;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper;
 import org.apache.phoenix.mapreduce.index.PhoenixServerBuildIndexMapper;
-
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
@@ -571,6 +572,72 @@ public class IndexToolIT extends ParallelStatsEnabledIT {
 }
 
 @Test
+public void testIndexToolVerifyWithExpiredIndexRows() throws Exception {
+if (localIndex || transactional || !directApi || useSnapshot) {
+return;
+}
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE " + dataTableFullName
++ " (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, CODE VARCHAR) COLUMN_ENCODED_BYTES=0");
+// Insert a row
+conn.createStatement()
+.execute("upsert into " + dataTableFullName + " values (1, 'Phoenix', 'A')");
+conn.commit();
+conn.createStatement()
+.execute(String.format("CREATE INDEX %s ON %s (NAME) INCLUDE (CODE) ASYNC",
+indexTableName, dataTableFullName));
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.ONLY);
+Cell cell =
+getErrorMessageFromIndexToolOutputTable(conn, dataTableFullName,
+indexTableFullName);
+byte[] expectedValueBytes = Bytes.toBytes("Missing index row");
+assertTrue(Bytes.compareTo(cell.getValueArray(), cell.getValueOffset(),
+cell.getValueLength(), expectedValueBytes, 0, expectedValueBytes.length) == 0);
+
+// Run the index tool to populate the index while verifying rows
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.AFTER);
+
+// Set ttl of index table ridiculously low so that all data is expired
+Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+TableName indexTable = TableName.valueOf(indexTableFullName);
+
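
The excerpt breaks off just as the test drops the index table's TTL so that every index row expires before verification. As a rough sketch of how that step is typically done with the HBase 1.x client API used on this branch (the helper name and the one-second TTL value are assumptions for illustration, not code from the commit):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class IndexTtlSketch {
        // Lower the TTL of every column family of an index table so existing
        // rows expire almost immediately; 'admin' and 'indexTable' would come
        // from the surrounding test context.
        static void expireIndexRows(Admin admin, TableName indexTable) throws Exception {
            HTableDescriptor desc = admin.getTableDescriptor(indexTable);
            for (HColumnDescriptor family : desc.getFamilies()) {
                family.setTimeToLive(1); // seconds (assumed value)
                admin.modifyColumn(indexTable, family); // HBase 1.x Admin API
            }
        }
    }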

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 130f5cb  PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
130f5cb is described below

commit 130f5cb3b2d04a1846a80df45d6b1a142ffa3a33
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Jan 15 22:50:20 2020 -0800

PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
---
 .../org/apache/phoenix/end2end/IndexToolIT.java| 71 +-
 .../coprocessor/IndexRebuildRegionScanner.java | 62 +++
 2 files changed, 118 insertions(+), 15 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 5187639..b3b1613 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
@@ -59,15 +60,15 @@ import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexImportDirectMapper;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper;
 import org.apache.phoenix.mapreduce.index.PhoenixServerBuildIndexMapper;
-
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
@@ -571,6 +572,72 @@ public class IndexToolIT extends ParallelStatsEnabledIT {
 }
 
 @Test
+public void testIndexToolVerifyWithExpiredIndexRows() throws Exception {
+if (localIndex || transactional || !directApi || useSnapshot) {
+return;
+}
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE " + dataTableFullName
++ " (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, CODE VARCHAR) COLUMN_ENCODED_BYTES=0");
+// Insert a row
+conn.createStatement()
+.execute("upsert into " + dataTableFullName + " values (1, 'Phoenix', 'A')");
+conn.commit();
+conn.createStatement()
+.execute(String.format("CREATE INDEX %s ON %s (NAME) INCLUDE (CODE) ASYNC",
+indexTableName, dataTableFullName));
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.ONLY);
+Cell cell =
+getErrorMessageFromIndexToolOutputTable(conn, dataTableFullName,
+indexTableFullName);
+byte[] expectedValueBytes = Bytes.toBytes("Missing index row");
+assertTrue(Bytes.compareTo(cell.getValueArray(), cell.getValueOffset(),
+cell.getValueLength(), expectedValueBytes, 0, expectedValueBytes.length) == 0);
+
+// Run the index tool to populate the index while verifying rows
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.AFTER);
+
+// Set ttl of index table ridiculously low so that all data is expired
+Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+TableName indexTable = TableName.valueOf(indexTableFullName);
+

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new a189be5  PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
a189be5 is described below

commit a189be530e53fe321b01a48cb7482aff19ec
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Jan 15 17:17:41 2020 -0800

PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
---
 .../org/apache/phoenix/end2end/IndexToolIT.java| 71 +-
 .../coprocessor/IndexRebuildRegionScanner.java | 62 +++
 2 files changed, 118 insertions(+), 15 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index ce24e6d..fc4fe69 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
@@ -59,14 +60,14 @@ import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexImportDirectMapper;
 import org.apache.phoenix.mapreduce.index.PhoenixServerBuildIndexMapper;
-
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
@@ -580,6 +581,72 @@ public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 }
 
 @Test
+public void testIndexToolVerifyWithExpiredIndexRows() throws Exception {
+if (localIndex || transactional || !directApi || useSnapshot) {
+return;
+}
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE " + dataTableFullName
++ " (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, CODE VARCHAR) COLUMN_ENCODED_BYTES=0");
+// Insert a row
+conn.createStatement()
+.execute("upsert into " + dataTableFullName + " values (1, 'Phoenix', 'A')");
+conn.commit();
+conn.createStatement()
+.execute(String.format("CREATE INDEX %s ON %s (NAME) INCLUDE (CODE) ASYNC",
+indexTableName, dataTableFullName));
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.ONLY);
+Cell cell =
+getErrorMessageFromIndexToolOutputTable(conn, dataTableFullName,
+indexTableFullName);
+byte[] expectedValueBytes = Bytes.toBytes("Missing index row");
+assertTrue(Bytes.compareTo(cell.getValueArray(), cell.getValueOffset(),
+cell.getValueLength(), expectedValueBytes, 0, expectedValueBytes.length) == 0);
+
+// Run the index tool to populate the index while verifying rows
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.AFTER);
+
+// Set ttl of index table ridiculously low so that all data is expired
+Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+TableName indexTable = TableName.valueOf(indexTableFullName);
+HColumnDescriptor desc =

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 90126c4  PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
90126c4 is described below

commit 90126c49b0877dcf6bbdd58e421e8228171f8204
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Jan 15 17:17:41 2020 -0800

PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
---
 .../org/apache/phoenix/end2end/IndexToolIT.java| 71 +-
 .../coprocessor/IndexRebuildRegionScanner.java | 62 +++
 2 files changed, 118 insertions(+), 15 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 490ecb5..d79689d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
@@ -59,14 +60,14 @@ import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexImportDirectMapper;
 import org.apache.phoenix.mapreduce.index.PhoenixServerBuildIndexMapper;
-
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
@@ -579,6 +580,72 @@ public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 }
 
 @Test
+public void testIndexToolVerifyWithExpiredIndexRows() throws Exception {
+if (localIndex || transactional || !directApi || useSnapshot) {
+return;
+}
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE " + dataTableFullName
++ " (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, CODE VARCHAR) COLUMN_ENCODED_BYTES=0");
+// Insert a row
+conn.createStatement()
+.execute("upsert into " + dataTableFullName + " values (1, 'Phoenix', 'A')");
+conn.commit();
+conn.createStatement()
+.execute(String.format("CREATE INDEX %s ON %s (NAME) INCLUDE (CODE) ASYNC",
+indexTableName, dataTableFullName));
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.ONLY);
+Cell cell =
+getErrorMessageFromIndexToolOutputTable(conn, dataTableFullName,
+indexTableFullName);
+byte[] expectedValueBytes = Bytes.toBytes("Missing index row");
+assertTrue(Bytes.compareTo(cell.getValueArray(), cell.getValueOffset(),
+cell.getValueLength(), expectedValueBytes, 0, expectedValueBytes.length) == 0);
+
+// Run the index tool to populate the index while verifying rows
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.AFTER);
+
+// Set ttl of index table ridiculously low so that all data is expired
+Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+TableName indexTable = TableName.valueOf(indexTableFullName);
+HColumnDescriptor desc =

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new c0506f1  PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
c0506f1 is described below

commit c0506f155b48a01d2a5d780758e2ca9b737bf76c
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Jan 15 17:17:41 2020 -0800

PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
---
 .../org/apache/phoenix/end2end/IndexToolIT.java| 71 +-
 .../coprocessor/IndexRebuildRegionScanner.java | 62 +++
 2 files changed, 118 insertions(+), 15 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 490ecb5..d79689d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
@@ -59,14 +60,14 @@ import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexImportDirectMapper;
 import org.apache.phoenix.mapreduce.index.PhoenixServerBuildIndexMapper;
-
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
@@ -579,6 +580,72 @@ public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 }
 
 @Test
+public void testIndexToolVerifyWithExpiredIndexRows() throws Exception {
+if (localIndex || transactional || !directApi || useSnapshot) {
+return;
+}
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE " + dataTableFullName
++ " (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, CODE VARCHAR) COLUMN_ENCODED_BYTES=0");
+// Insert a row
+conn.createStatement()
+.execute("upsert into " + dataTableFullName + " values (1, 'Phoenix', 'A')");
+conn.commit();
+conn.createStatement()
+.execute(String.format("CREATE INDEX %s ON %s (NAME) INCLUDE (CODE) ASYNC",
+indexTableName, dataTableFullName));
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.ONLY);
+Cell cell =
+getErrorMessageFromIndexToolOutputTable(conn, dataTableFullName,
+indexTableFullName);
+byte[] expectedValueBytes = Bytes.toBytes("Missing index row");
+assertTrue(Bytes.compareTo(cell.getValueArray(), cell.getValueOffset(),
+cell.getValueLength(), expectedValueBytes, 0, expectedValueBytes.length) == 0);
+
+// Run the index tool to populate the index while verifying rows
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.AFTER);
+
+// Set ttl of index table ridiculously low so that all data is expired
+Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+TableName indexTable = TableName.valueOf(indexTableFullName);
+HColumnDescriptor desc =

[phoenix] branch master updated: PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry

2020-01-15 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 05a44ad  PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
05a44ad is described below

commit 05a44ad162181cd63ba6d7ad00a259fa17a6c60c
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Jan 15 17:46:47 2020 -0800

PHOENIX-5676 Inline-verification from IndexTool does not handle TTL/row-expiry
---
 .../org/apache/phoenix/end2end/IndexToolIT.java| 69 +-
 .../coprocessor/IndexRebuildRegionScanner.java | 62 +++
 2 files changed, 115 insertions(+), 16 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 8d42020..feb855c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -38,15 +38,18 @@ import java.util.List;
 import java.util.Map;
 import java.util.Properties;
 import java.util.UUID;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HBaseIOException;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
@@ -63,10 +66,9 @@ import org.apache.hadoop.mapreduce.Job;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexImportDirectMapper;
 import org.apache.phoenix.mapreduce.index.PhoenixServerBuildIndexMapper;
-
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
@@ -579,6 +581,67 @@ public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 }
 
 @Test
+public void testIndexToolVerifyWithExpiredIndexRows() throws Exception {
+if (localIndex || transactional || !directApi || useSnapshot) {
+return;
+}
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE " + dataTableFullName
++ " (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR, CODE VARCHAR) COLUMN_ENCODED_BYTES=0");
+// Insert a row
+conn.createStatement()
+.execute("upsert into " + dataTableFullName + " values (1, 'Phoenix', 'A')");
+conn.commit();
+conn.createStatement()
+.execute(String.format("CREATE INDEX %s ON %s (NAME) INCLUDE (CODE) ASYNC",
+indexTableName, dataTableFullName));
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.ONLY);
+Cell cell =
+getErrorMessageFromIndexToolOutputTable(conn, dataTableFullName,
+indexTableFullName);
+byte[] expectedValueBytes = Bytes.toBytes("Missing index row");
+assertTrue(Bytes.compareTo(cell.getValueArray(), cell.getValueOffset(),
+cell.getValueLength(), expectedValueBytes, 0, expectedValueBytes.length) == 0);
+
+// Run the index tool to populate the index while verifying rows
+runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0,
+IndexTool.IndexVerifyType.AFTER);
+
+// Set ttl of index table ridiculously low so that all data is expired
+Admin admin = conn.unwrap(PhoenixConnection.
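
This master-branch copy of the change cuts off at the same TTL step, but its import hunk swaps HColumnDescriptor for ColumnFamilyDescriptor and ColumnFamilyDescriptorBuilder, the HBase 2.x replacements built around immutable descriptors. A hedged sketch of the TTL trick under that API (the helper name and one-second TTL are illustrative assumptions, not code from the commit):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;

    public class IndexTtlSketch2x {
        // Rebuild each column family descriptor with a very short TTL so index
        // rows expire; 'admin' and 'indexTable' would come from the test context.
        static void expireIndexRows(Admin admin, TableName indexTable) throws Exception {
            for (ColumnFamilyDescriptor family : admin.getDescriptor(indexTable).getColumnFamilies()) {
                ColumnFamilyDescriptor shortLived =
                        ColumnFamilyDescriptorBuilder.newBuilder(family).setTimeToLive(1).build();
                admin.modifyColumnFamily(indexTable, shortLived); // HBase 2.x Admin API
            }
        }
    }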

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell

2020-01-12 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 1b5b5ee  PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell
1b5b5ee is described below

commit 1b5b5eedaafacc355e05a5995d9292be6569892e
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jan 10 17:09:16 2020 -0800

PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell
---
 .../end2end/RowKeyBytesStringFunctionIT.java   | 81 ++
 .../apache/phoenix/expression/ExpressionType.java  |  1 +
 .../function/RowKeyBytesStringFunction.java| 73 +++
 3 files changed, 155 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
new file mode 100644
index 000..cde7aa2
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.types.PInteger;
+import org.junit.Test;
+
+public class RowKeyBytesStringFunctionIT extends ParallelStatsDisabledIT {
+
+@Test
+public void getRowKeyBytesAndVerify() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+int[] values = {3,7,9,158,5};
+String tableName = generateUniqueName();
+String ddl =
+"CREATE TABLE IF NOT EXISTS " + tableName + " "
++ "(id INTEGER NOT NULL, pkcol VARCHAR, page_id UNSIGNED_LONG,"
++ " \"DATE\" BIGINT, \"value\" INTEGER,"
++ " constraint pk primary key(id, pkcol)) COLUMN_ENCODED_BYTES = 0";
+conn.createStatement().execute(ddl);
+
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (1, 'a', 8, 1," + values[0] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (2, 'ab', 8, 2," + values[1] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (3, 'abc', 8, 3," + values[2] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (5, 'abcde', 8, 5," + values[4] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (4, 'abcd', 8, 4," + values[3] + ")");
+conn.commit();
+
+ResultSet rs =
+conn.createStatement().executeQuery("SELECT ROWKEY_BYTES_STRING() FROM " + tableName);
+try (org.apache.hadoop.hbase.client.Connection hconn =
+ConnectionFactory.createConnection(config)) {
+Table table = hconn.getTable(TableName.val
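
The new ROWKEY_BYTES_STRING() built-in returns the HBase row key of the current row as a string, which the test then feeds back through the HBase client for a raw lookup. A minimal standalone usage sketch follows; the JDBC URL and table name are illustrative assumptions, and the round-trip via Bytes.toBytesBinary assumes the function emits the Bytes.toStringBinary encoding, as the test's use of the HBase Bytes utility and Get suggests.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowKeyStringSketch {
        public static void main(String[] args) throws Exception {
            // Assumed Phoenix JDBC URL and an existing table named T.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                ResultSet rs = conn.createStatement()
                        .executeQuery("SELECT ROWKEY_BYTES_STRING() FROM T");
                while (rs.next()) {
                    String rowKeyString = rs.getString(1);
                    // Assumption: the string round-trips back to the raw row key,
                    // which can then drive a direct HBase Get against the table.
                    byte[] rowKey = Bytes.toBytesBinary(rowKeyString);
                    System.out.println(rowKeyString + " -> " + rowKey.length + " bytes");
                }
            }
        }
    }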

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell

2020-01-12 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new a60cb5a  PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell
a60cb5a is described below

commit a60cb5a18c3d31ae767d0a3b7c44482f1707366a
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jan 10 17:09:16 2020 -0800

PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell
---
 .../end2end/RowKeyBytesStringFunctionIT.java   | 81 ++
 .../apache/phoenix/expression/ExpressionType.java  |  1 +
 .../function/RowKeyBytesStringFunction.java| 73 +++
 3 files changed, 155 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
new file mode 100644
index 000..cde7aa2
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.types.PInteger;
+import org.junit.Test;
+
+public class RowKeyBytesStringFunctionIT extends ParallelStatsDisabledIT {
+
+@Test
+public void getRowKeyBytesAndVerify() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+int[] values = {3,7,9,158,5};
+String tableName = generateUniqueName();
+String ddl =
+"CREATE TABLE IF NOT EXISTS " + tableName + " "
++ "(id INTEGER NOT NULL, pkcol VARCHAR, page_id 
UNSIGNED_LONG,"
++ " \"DATE\" BIGINT, \"value\" INTEGER,"
++ " constraint pk primary key(id, pkcol)) 
COLUMN_ENCODED_BYTES = 0";
+conn.createStatement().execute(ddl);
+
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (1, 
'a', 8, 1," + values[0] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (2, 
'ab', 8, 2," + values[1] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (3, 
'abc', 8, 3," + values[2] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (5, 
'abcde', 8, 5," + values[4] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (4, 
'abcd', 8, 4," + values[3] + ")");
+conn.commit();
+
+ResultSet rs =
+conn.createStatement().executeQuery("SELECT 
ROWKEY_BYTES_STRING() FROM " + tableName);
+try (org.apache.hadoop.hbase.client.Connection hconn =
+ConnectionFactory.createConnection(config)) {
+                Table table = hconn.getTable(TableName.valueOf(tableName));

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell

2020-01-12 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 458f252  PHOENIX-5628 Phoenix Function to Return HBase Row Key of 
Column Cell
458f252 is described below

commit 458f252a9781f5d1ba174a5a60fef867d4dc7a09
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jan 10 17:09:16 2020 -0800

PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell
---
 .../end2end/RowKeyBytesStringFunctionIT.java   | 81 ++
 .../apache/phoenix/expression/ExpressionType.java  |  1 +
 .../function/RowKeyBytesStringFunction.java| 73 +++
 3 files changed, 155 insertions(+)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
new file mode 100644
index 0000000..cde7aa2
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.types.PInteger;
+import org.junit.Test;
+
+public class RowKeyBytesStringFunctionIT extends ParallelStatsDisabledIT {
+
+@Test
+public void getRowKeyBytesAndVerify() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+int[] values = {3,7,9,158,5};
+String tableName = generateUniqueName();
+String ddl =
+"CREATE TABLE IF NOT EXISTS " + tableName + " "
++ "(id INTEGER NOT NULL, pkcol VARCHAR, page_id 
UNSIGNED_LONG,"
++ " \"DATE\" BIGINT, \"value\" INTEGER,"
++ " constraint pk primary key(id, pkcol)) 
COLUMN_ENCODED_BYTES = 0";
+conn.createStatement().execute(ddl);
+
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (1, 
'a', 8, 1," + values[0] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (2, 
'ab', 8, 2," + values[1] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (3, 
'abc', 8, 3," + values[2] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (5, 
'abcde', 8, 5," + values[4] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (4, 
'abcd', 8, 4," + values[3] + ")");
+conn.commit();
+
+ResultSet rs =
+conn.createStatement().executeQuery("SELECT 
ROWKEY_BYTES_STRING() FROM " + tableName);
+try (org.apache.hadoop.hbase.client.Connection hconn =
+ConnectionFactory.createConnection(config)) {
+                Table table = hconn.getTable(TableName.valueOf(tableName));

[phoenix] branch master updated: PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell

2020-01-12 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 87f2cec  PHOENIX-5628 Phoenix Function to Return HBase Row Key of 
Column Cell
87f2cec is described below

commit 87f2cecf2405fd6050b003e55249ae5672319fe1
Author: Abhishek Singh Chouhan 
AuthorDate: Fri Jan 10 17:09:16 2020 -0800

PHOENIX-5628 Phoenix Function to Return HBase Row Key of Column Cell
---
 .../end2end/RowKeyBytesStringFunctionIT.java   | 81 ++
 .../apache/phoenix/expression/ExpressionType.java  |  1 +
 .../function/RowKeyBytesStringFunction.java| 73 +++
 3 files changed, 155 insertions(+)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
new file mode 100644
index 0000000..cde7aa2
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowKeyBytesStringFunctionIT.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.types.PInteger;
+import org.junit.Test;
+
+public class RowKeyBytesStringFunctionIT extends ParallelStatsDisabledIT {
+
+@Test
+public void getRowKeyBytesAndVerify() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+int[] values = {3,7,9,158,5};
+String tableName = generateUniqueName();
+String ddl =
+"CREATE TABLE IF NOT EXISTS " + tableName + " "
++ "(id INTEGER NOT NULL, pkcol VARCHAR, page_id 
UNSIGNED_LONG,"
++ " \"DATE\" BIGINT, \"value\" INTEGER,"
++ " constraint pk primary key(id, pkcol)) 
COLUMN_ENCODED_BYTES = 0";
+conn.createStatement().execute(ddl);
+
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (1, 
'a', 8, 1," + values[0] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (2, 
'ab', 8, 2," + values[1] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (3, 
'abc', 8, 3," + values[2] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (5, 
'abcde', 8, 5," + values[4] + ")");
+conn.createStatement().execute("UPSERT INTO " + tableName
++ " (id, pkcol, page_id, \"DATE\", \"value\") VALUES (4, 
'abcd', 8, 4," + values[3] + ")");
+conn.commit();
+
+ResultSet rs =
+conn.createStatement().executeQuery("SELECT 
ROWKEY_BYTES_STRING() FROM " + tableName);
+try (org.apache.hadoop.hbase.client.Connection hconn =
+ConnectionFactory.createConnection(config)) {
+Table table = hconn.getTable(TableName.valueOf(tableName));
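
For readers who want to exercise the new function outside the test, a minimal, self-contained JDBC sketch follows. The connection URL jdbc:phoenix:localhost and the table name TEST_TABLE are assumptions for illustration, and it assumes ROWKEY_BYTES_STRING() returns the Bytes.toStringBinary form of the row key (which Bytes.toBytesBinary can invert), as the test's imports suggest.

    // Hedged sketch, not part of the commit: fetch each row's HBase row key
    // via ROWKEY_BYTES_STRING() and look the row up directly in HBase.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowKeyBytesStringExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 org.apache.hadoop.hbase.client.Connection hconn =
                         ConnectionFactory.createConnection(conf)) {
                Table table = hconn.getTable(TableName.valueOf("TEST_TABLE"));
                ResultSet rs = conn.createStatement()
                        .executeQuery("SELECT ROWKEY_BYTES_STRING() FROM TEST_TABLE");
                while (rs.next()) {
                    // Invert the printable encoding back into raw row key bytes
                    byte[] rowKey = Bytes.toBytesBinary(rs.getString(1));
                    Result result = table.get(new Get(rowKey));
                    System.out.println(Bytes.toStringBinary(rowKey)
                            + " present in HBase: " + !result.isEmpty());
                }
            }
        }
    }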

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5637 Queries with SCN return expired rows

2019-12-20 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 126bfca  PHOENIX-5637 Queries with SCN return expired rows
126bfca is described below

commit 126bfca932cba95f7f306744177f3430201d81dd
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Dec 17 19:33:50 2019 -0800

PHOENIX-5637 Queries with SCN return expired rows
---
 .../it/java/org/apache/phoenix/end2end/SCNIT.java  | 109 +
 .../hadoop/hbase/regionserver/ScanInfoUtil.java|   2 +-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
new file mode 100644
index 0000000..6c45b06
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Test;
+
+public class SCNIT extends ParallelStatsDisabledIT {
+
+   @Test
+   public void testReadBeforeDelete() throws Exception {
+   String schemaName = generateUniqueName();
+   String tableName = generateUniqueName();
+   String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
+   long timeBeforeDelete;
+   long timeAfterDelete;
+   try (Connection conn = DriverManager.getConnection(getUrl())) {
+   conn.createStatement().execute("CREATE TABLE " + 
fullTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('a','aa')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('b','bb')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('c','cc')");
+   conn.commit();
+   timeBeforeDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   Thread.sleep(2);
+   conn.createStatement().execute("DELETE FROM " + 
fullTableName + " WHERE k = 'b'");
+   conn.commit();
+   timeAfterDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   }
+
+   Properties props = new Properties();
+   props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timeBeforeDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assertTrue(rs.next());
+   assertEquals("a", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("b", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("c", rs.getString(1));
+   assertFalse(rs.next());
+   rs.close();
+   }
+   props.clear();
+   props.setProperty("CurrentSCN", Long.toString(timeAfterDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assert
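
As a usage note, the point-in-time semantics exercised above boil down to one connection property: setting PhoenixRuntime.CURRENT_SCN_ATTRIB (the "CurrentSCN" key the test also uses directly) pins every read on that connection to the given HBase timestamp. A minimal sketch follows; the URL jdbc:phoenix:localhost and the table name T are assumptions for illustration.

    // Hedged sketch, not part of the commit: read a table as of a past timestamp.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.util.Properties;
    import org.apache.phoenix.util.PhoenixRuntime;

    public class ScnReadExample {
        public static void main(String[] args) throws Exception {
            long scn = System.currentTimeMillis() - 60_000L; // one minute ago
            Properties props = new Properties();
            props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(scn));
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
                 ResultSet rs = conn.createStatement()
                         .executeQuery("SELECT * FROM T")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // rows as they existed at the SCN
                }
            }
        }
    }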

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5637 Queries with SCN return expired rows

2019-12-20 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 2e31a3a  PHOENIX-5637 Queries with SCN return expired rows
2e31a3a is described below

commit 2e31a3a3c69d75dd3c2f5f17c7c5d8dc979fc14e
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Dec 17 19:33:50 2019 -0800

PHOENIX-5637 Queries with SCN return expired rows
---
 .../it/java/org/apache/phoenix/end2end/SCNIT.java  | 109 +
 .../hadoop/hbase/regionserver/ScanInfoUtil.java|   2 +-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
new file mode 100644
index 0000000..6c45b06
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Test;
+
+public class SCNIT extends ParallelStatsDisabledIT {
+
+   @Test
+   public void testReadBeforeDelete() throws Exception {
+   String schemaName = generateUniqueName();
+   String tableName = generateUniqueName();
+   String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
+   long timeBeforeDelete;
+   long timeAfterDelete;
+   try (Connection conn = DriverManager.getConnection(getUrl())) {
+   conn.createStatement().execute("CREATE TABLE " + 
fullTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('a','aa')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('b','bb')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('c','cc')");
+   conn.commit();
+   timeBeforeDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   Thread.sleep(2);
+   conn.createStatement().execute("DELETE FROM " + 
fullTableName + " WHERE k = 'b'");
+   conn.commit();
+   timeAfterDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   }
+
+   Properties props = new Properties();
+   props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timeBeforeDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assertTrue(rs.next());
+   assertEquals("a", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("b", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("c", rs.getString(1));
+   assertFalse(rs.next());
+   rs.close();
+   }
+   props.clear();
+   props.setProperty("CurrentSCN", Long.toString(timeAfterDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assert

[phoenix] branch 4.15-HBase-1.3 updated: PHOENIX-5637 Queries with SCN return expired rows

2019-12-20 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.3 by this push:
 new 2e95d2d  PHOENIX-5637 Queries with SCN return expired rows
2e95d2d is described below

commit 2e95d2db9b443c3fc6352780330297e9289f1120
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Dec 17 19:33:50 2019 -0800

PHOENIX-5637 Queries with SCN return expired rows
---
 .../it/java/org/apache/phoenix/end2end/SCNIT.java  | 109 +
 .../hadoop/hbase/regionserver/ScanInfoUtil.java|   2 +-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
new file mode 100644
index 0000000..6c45b06
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Test;
+
+public class SCNIT extends ParallelStatsDisabledIT {
+
+   @Test
+   public void testReadBeforeDelete() throws Exception {
+   String schemaName = generateUniqueName();
+   String tableName = generateUniqueName();
+   String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
+   long timeBeforeDelete;
+   long timeAfterDelete;
+   try (Connection conn = DriverManager.getConnection(getUrl())) {
+   conn.createStatement().execute("CREATE TABLE " + 
fullTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('a','aa')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('b','bb')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('c','cc')");
+   conn.commit();
+   timeBeforeDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   Thread.sleep(2);
+   conn.createStatement().execute("DELETE FROM " + 
fullTableName + " WHERE k = 'b'");
+   conn.commit();
+   timeAfterDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   }
+
+   Properties props = new Properties();
+   props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timeBeforeDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assertTrue(rs.next());
+   assertEquals("a", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("b", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("c", rs.getString(1));
+   assertFalse(rs.next());
+   rs.close();
+   }
+   props.clear();
+   props.setProperty("CurrentSCN", Long.toString(timeAfterDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assert

[phoenix] branch 4.15-HBase-1.4 updated: PHOENIX-5637 Queries with SCN return expired rows

2019-12-20 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.4 by this push:
 new 0bcd37f  PHOENIX-5637 Queries with SCN return expired rows
0bcd37f is described below

commit 0bcd37fe302df4c49dd149572b6ee91cf5286dec
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Dec 17 19:33:50 2019 -0800

PHOENIX-5637 Queries with SCN return expired rows
---
 .../it/java/org/apache/phoenix/end2end/SCNIT.java  | 109 +
 .../hadoop/hbase/regionserver/ScanInfoUtil.java|   2 +-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
new file mode 100644
index 0000000..6c45b06
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Test;
+
+public class SCNIT extends ParallelStatsDisabledIT {
+
+   @Test
+   public void testReadBeforeDelete() throws Exception {
+   String schemaName = generateUniqueName();
+   String tableName = generateUniqueName();
+   String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
+   long timeBeforeDelete;
+   long timeAfterDelete;
+   try (Connection conn = DriverManager.getConnection(getUrl())) {
+   conn.createStatement().execute("CREATE TABLE " + 
fullTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('a','aa')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('b','bb')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('c','cc')");
+   conn.commit();
+   timeBeforeDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   Thread.sleep(2);
+   conn.createStatement().execute("DELETE FROM " + 
fullTableName + " WHERE k = 'b'");
+   conn.commit();
+   timeAfterDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   }
+
+   Properties props = new Properties();
+   props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timeBeforeDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assertTrue(rs.next());
+   assertEquals("a", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("b", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("c", rs.getString(1));
+   assertFalse(rs.next());
+   rs.close();
+   }
+   props.clear();
+   props.setProperty("CurrentSCN", Long.toString(timeAfterDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assert

[phoenix] branch 4.15-HBase-1.5 updated: PHOENIX-5637 Queries with SCN return expired rows

2019-12-20 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.15-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.15-HBase-1.5 by this push:
 new b46318c  PHOENIX-5637 Queries with SCN return expired rows
b46318c is described below

commit b46318c49d80c969ad4bb745863ca3e5f87149d6
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Dec 17 19:33:50 2019 -0800

PHOENIX-5637 Queries with SCN return expired rows
---
 .../it/java/org/apache/phoenix/end2end/SCNIT.java  | 109 +
 .../hadoop/hbase/regionserver/ScanInfoUtil.java|   2 +-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
new file mode 100644
index 0000000..6c45b06
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Test;
+
+public class SCNIT extends ParallelStatsDisabledIT {
+
+   @Test
+   public void testReadBeforeDelete() throws Exception {
+   String schemaName = generateUniqueName();
+   String tableName = generateUniqueName();
+   String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
+   long timeBeforeDelete;
+   long timeAfterDelete;
+   try (Connection conn = DriverManager.getConnection(getUrl())) {
+   conn.createStatement().execute("CREATE TABLE " + 
fullTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('a','aa')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('b','bb')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('c','cc')");
+   conn.commit();
+   timeBeforeDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   Thread.sleep(2);
+   conn.createStatement().execute("DELETE FROM " + 
fullTableName + " WHERE k = 'b'");
+   conn.commit();
+   timeAfterDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   }
+
+   Properties props = new Properties();
+   props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timeBeforeDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assertTrue(rs.next());
+   assertEquals("a", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("b", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("c", rs.getString(1));
+   assertFalse(rs.next());
+   rs.close();
+   }
+   props.clear();
+   props.setProperty("CurrentSCN", Long.toString(timeAfterDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assert

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5637 Queries with SCN return expired rows

2019-12-20 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new ac1f97e  PHOENIX-5637 Queries with SCN return expired rows
ac1f97e is described below

commit ac1f97ebbb662a9dd6d5b2158402c0815eeb12dd
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Dec 17 19:33:50 2019 -0800

PHOENIX-5637 Queries with SCN return expired rows
---
 .../it/java/org/apache/phoenix/end2end/SCNIT.java  | 109 +
 .../hadoop/hbase/regionserver/ScanInfoUtil.java|   2 +-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
new file mode 100644
index 0000000..6c45b06
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Test;
+
+public class SCNIT extends ParallelStatsDisabledIT {
+
+   @Test
+   public void testReadBeforeDelete() throws Exception {
+   String schemaName = generateUniqueName();
+   String tableName = generateUniqueName();
+   String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
+   long timeBeforeDelete;
+   long timeAfterDelete;
+   try (Connection conn = DriverManager.getConnection(getUrl())) {
+   conn.createStatement().execute("CREATE TABLE " + 
fullTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('a','aa')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('b','bb')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('c','cc')");
+   conn.commit();
+   timeBeforeDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   Thread.sleep(2);
+   conn.createStatement().execute("DELETE FROM " + 
fullTableName + " WHERE k = 'b'");
+   conn.commit();
+   timeAfterDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   }
+
+   Properties props = new Properties();
+   props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timeBeforeDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assertTrue(rs.next());
+   assertEquals("a", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("b", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("c", rs.getString(1));
+   assertFalse(rs.next());
+   rs.close();
+   }
+   props.clear();
+   props.setProperty("CurrentSCN", Long.toString(timeAfterDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assert

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5637 Queries with SCN return expired rows

2019-12-20 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 953164d  PHOENIX-5637 Queries with SCN return expired rows
953164d is described below

commit 953164da2d044c47117ba06a34acff70fdeed13d
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Dec 17 19:33:50 2019 -0800

PHOENIX-5637 Queries with SCN return expired rows
---
 .../it/java/org/apache/phoenix/end2end/SCNIT.java  | 109 +
 .../hadoop/hbase/regionserver/ScanInfoUtil.java|   2 +-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
new file mode 100644
index 0000000..6c45b06
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Test;
+
+public class SCNIT extends ParallelStatsDisabledIT {
+
+   @Test
+   public void testReadBeforeDelete() throws Exception {
+   String schemaName = generateUniqueName();
+   String tableName = generateUniqueName();
+   String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
+   long timeBeforeDelete;
+   long timeAfterDelete;
+   try (Connection conn = DriverManager.getConnection(getUrl())) {
+   conn.createStatement().execute("CREATE TABLE " + 
fullTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('a','aa')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('b','bb')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('c','cc')");
+   conn.commit();
+   timeBeforeDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   Thread.sleep(2);
+   conn.createStatement().execute("DELETE FROM " + 
fullTableName + " WHERE k = 'b'");
+   conn.commit();
+   timeAfterDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   }
+
+   Properties props = new Properties();
+   props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timeBeforeDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assertTrue(rs.next());
+   assertEquals("a", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("b", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("c", rs.getString(1));
+   assertFalse(rs.next());
+   rs.close();
+   }
+   props.clear();
+   props.setProperty("CurrentSCN", Long.toString(timeAfterDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assert

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5637 Queries with SCN return expired rows

2019-12-20 Thread achouhan
This is an automated email from the ASF dual-hosted git repository.

achouhan pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 3e99416  PHOENIX-5637 Queries with SCN return expired rows
3e99416 is described below

commit 3e99416101c422a6f128d621ce2b86635782b916
Author: Abhishek Singh Chouhan 
AuthorDate: Tue Dec 17 19:33:50 2019 -0800

PHOENIX-5637 Queries with SCN return expired rows
---
 .../it/java/org/apache/phoenix/end2end/SCNIT.java  | 109 +
 .../hadoop/hbase/regionserver/ScanInfoUtil.java|   2 +-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
new file mode 100644
index 0000000..6c45b06
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SCNIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Test;
+
+public class SCNIT extends ParallelStatsDisabledIT {
+
+   @Test
+   public void testReadBeforeDelete() throws Exception {
+   String schemaName = generateUniqueName();
+   String tableName = generateUniqueName();
+   String fullTableName = SchemaUtil.getTableName(schemaName, 
tableName);
+   long timeBeforeDelete;
+   long timeAfterDelete;
+   try (Connection conn = DriverManager.getConnection(getUrl())) {
+   conn.createStatement().execute("CREATE TABLE " + 
fullTableName + "(k VARCHAR PRIMARY KEY, v VARCHAR)");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('a','aa')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('b','bb')");
+   conn.createStatement().execute("UPSERT INTO " + 
fullTableName + " VALUES('c','cc')");
+   conn.commit();
+   timeBeforeDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   Thread.sleep(2);
+   conn.createStatement().execute("DELETE FROM " + 
fullTableName + " WHERE k = 'b'");
+   conn.commit();
+   timeAfterDelete = EnvironmentEdgeManager.currentTime() 
+ 1;
+   }
+
+   Properties props = new Properties();
+   props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timeBeforeDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assertTrue(rs.next());
+   assertEquals("a", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("b", rs.getString(1));
+   assertTrue(rs.next());
+   assertEquals("c", rs.getString(1));
+   assertFalse(rs.next());
+   rs.close();
+   }
+   props.clear();
+   props.setProperty("CurrentSCN", Long.toString(timeAfterDelete));
+   try (Connection connscn = DriverManager.getConnection(getUrl(), 
props)) {
+   ResultSet rs = 
connscn.createStatement().executeQuery("select * from " + fullTableName);
+   assert

svn commit: r1858961 - in /phoenix/site: publish/team.html source/src/site/markdown/team.md

2019-05-08 Thread achouhan
Author: achouhan
Date: Thu May  9 02:28:32 2019
New Revision: 1858961

URL: http://svn.apache.org/viewvc?rev=1858961&view=rev
Log:
Add Abhishek Singh Chouhan to team page

Modified:
phoenix/site/publish/team.html
phoenix/site/source/src/site/markdown/team.md

Modified: phoenix/site/publish/team.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/team.html?rev=1858961&r1=1858960&r2=1858961&view=diff
==============================================================================
--- phoenix/site/publish/team.html (original)
+++ phoenix/site/publish/team.html Thu May  9 02:28:32 2019
@@ -1,7 +1,7 @@
 
 
 
 
@@ -358,60 +358,66 @@
PMC 


+   Abhishek Singh Chouhan  
+   Salesforce  
+   mailto:achou...@apache.org;>achou...@apache.org  
+   Committer 
+   
+   
Akshita Malhotra  
Salesforce  
mailto:akshitamalho...@apache.org;>akshitamalho...@apache.org  
Committer 

-   
+   
Chinmay Kulkarni  
Salesforce  
mailto:chinmayskulka...@apache.org;>chinmayskulka...@apache.org  
Committer 

-   
+   
Cody Marcel  
Salesforce  
mailto:codymar...@apache.org;>codymar...@apache.org  
Committer 

-   
+   
Dumindu Buddhika  
University of Moratuwa  
mailto:dumin...@apache.org;>dumin...@apache.org  
Committer 

-   
+   
Ethan Wang  
Facebook  
mailto:ew...@apache.org;>ew...@apache.org  
Committer 

-   
+   
Gerald Sangudi  
23andme  
mailto:sang...@apache.org;>sang...@apache.org  
Committer 

-   
+   
Jaanai Zhang  
Alibaba  
mailto:jaa...@apache.org;>jaa...@apache.org  
Committer 

-   
+   
Jan Fernando  
Salesforce  
mailto:jferna...@apache.org;>jferna...@apache.org  
Committer 

-   
+   
Kevin Liew  
Simba Technologies  
mailto:kl...@apache.org;>kl...@apache.org  
Committer 

-   
+   
Ohad Shacham  
Yahoo Research, Oath  
mailto:oh...@apache.org;>oh...@apache.org  

Modified: phoenix/site/source/src/site/markdown/team.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/team.md?rev=1858961&r1=1858960&r2=1858961&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/team.md (original)
+++ phoenix/site/source/src/site/markdown/team.md Thu May  9 02:28:32 2019
@@ -37,6 +37,7 @@ Simon Toens | Salesforce | stoens@apache
 Steven Noels | NGDATA | stev...@apache.org | PMC
 Thomas D'Silva | Salesforce | tdsi...@apache.org | PMC
 Vincent Poon | Salesforce | vincentp...@apache.org | PMC
+Abhishek Singh Chouhan | Salesforce | achou...@apache.org | Committer
 Akshita Malhotra | Salesforce | akshitamalho...@apache.org | Committer
 Chinmay Kulkarni | Salesforce | chinmayskulka...@apache.org | Committer
 Cody Marcel | Salesforce | codymar...@apache.org | Committer




svn commit: r1858953 - in /phoenix/site: publish/Phoenix-in-15-minutes-or-less.html source/src/site/markdown/Phoenix-in-15-minutes-or-less.md

2019-05-08 Thread achouhan
Author: achouhan
Date: Wed May  8 21:59:40 2019
New Revision: 1858953

URL: http://svn.apache.org/viewvc?rev=1858953&view=rev
Log:
PHOENIX-5254 Broken Link to Installation Section of the Download Page

Modified:
phoenix/site/publish/Phoenix-in-15-minutes-or-less.html
phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md

Modified: phoenix/site/publish/Phoenix-in-15-minutes-or-less.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/Phoenix-in-15-minutes-or-less.html?rev=1858953&r1=1858952&r2=1858953&view=diff
==============================================================================
--- phoenix/site/publish/Phoenix-in-15-minutes-or-less.html (original)
+++ phoenix/site/publish/Phoenix-in-15-minutes-or-less.html Wed May  8 21:59:40 
2019
@@ -1,7 +1,7 @@
 
 
 
 
@@ -189,7 +189,7 @@
  Opens the door for leveraging and integrating lots of existing 
tooling 
  
 But how can SQL support my favorite HBase technique of 
x,y,z Didn’t make it to the last HBase Meetup did you? SQL is 
just a way of expressing what you want to get not how you 
want to get it. Check out my http://files.meetup.com/1350427/IntelPhoenixHBaseMeetup.ppt;>presentation
 for various existing and to-be-done Phoenix features to support your favorite 
HBase trick. Have ideas of your own? We’d love to hear about them: file an issue for us and/or join our mailing list. 
-Blah, blah, blah - I just want to get started! Ok, 
great! Just follow our install 
instructions: 
+Blah, blah, blah - I just want to get started! Ok, 
great! Just follow our install 
instructions: 
  
  download and expand our installation tar 
  copy the phoenix server jar that is compatible with your HBase 
installation into the lib directory of every region server 
@@ -495,7 +495,7 @@ ORDER BY sum(population) DESC;


Back to 
top
-   Copyright 2018 http://www.apache.org;>Apache Software Foundation. All Rights 
Reserved.
+   Copyright 2019 http://www.apache.org;>Apache Software Foundation. All Rights 
Reserved.




Modified: phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md?rev=1858953&r1=1858952&r2=1858953&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md 
(original)
+++ phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md Wed 
May  8 21:59:40 2019
@@ -31,7 +31,7 @@ Well, that's kind of the point: give fol
 Didn't make it to the last HBase Meetup did you? SQL is just a way of 
expressing *what you want to get* not *how you want to 
get it*. Check out my 
[presentation](http://files.meetup.com/1350427/IntelPhoenixHBaseMeetup.ppt) for 
various existing and to-be-done Phoenix features to support your favorite HBase 
trick. Have ideas of your own? We'd love to hear about them: file an 
[issue](issues.html) for us and/or join our [mailing list](mailing_list.html).
 
 *Blah, blah, blah - I just want to get started!*
-Ok, great! Just follow our [install instructions](download.html#Installation):
+Ok, great! Just follow our [install instructions](installation.html):
 
 * [download](download.html) and expand our installation tar
 * copy the phoenix server jar that is compatible with your HBase installation 
into the lib directory of every region server




svn commit: r1858952 - in /phoenix/site: publish/contributing.html source/src/site/markdown/contributing.md

2019-05-08 Thread achouhan
Author: achouhan
Date: Wed May  8 21:43:50 2019
New Revision: 1858952

URL: http://svn.apache.org/viewvc?rev=1858952&view=rev
Log:
PHOENIX-5203 Update contributing guidelines on Phoenix website

Modified:
phoenix/site/publish/contributing.html
phoenix/site/source/src/site/markdown/contributing.md

Modified: phoenix/site/publish/contributing.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/contributing.html?rev=1858952&r1=1858951&r2=1858952&view=diff
==============================================================================
--- phoenix/site/publish/contributing.html (original)
+++ phoenix/site/publish/contributing.html Wed May  8 21:43:50 2019
@@ -1,7 +1,7 @@
 
 
 
 
@@ -192,14 +192,22 @@
   
   
   Generate a patch 
-  There are two general approaches that can be used for creating and 
submitting a patch: GitHub pull requests, or manually creating a patch with 
Git. Both of these are explained below. 
+  There are two general approaches that can be used for creating and 
submitting a patch: GitHub pull requests, or manually creating a patch with 
Git. Both of these are explained below. Please make sure that the patch applies 
cleanly on all the active branches including master: 5.x-HBase-2.0, 
4.x-HBase-1.4, 4.x-HBase-1.3, and 4.x-HBase-1.2 
   Regardless of which approach is taken, please make sure to follow the 
Phoenix code conventions (more information below). Whenever possible, unit 
tests or integration tests should be included with patches. 
-  The commit message should reference the jira ticket issue (which has the 
format PHOENIX-{NUMBER}). 
+  Please make sure that the patch contains only one commit and click on the 
‘Submit patch’ button to automatically trigger the tests on the patch. 
+  The commit message should reference the jira ticket issue (which has the 
format PHOENIX-{NUMBER}:{JIRA-TITLE}). 
+  To effectively get the patch reviewed, please raise the pull request 
against an appropriate branch. 
+   
+   Naming convention for the 
patch 
+   When you generate the patch, make sure the name of the patch has 
following format: PHOENIX-{NUMBER}.{BRANCH-NAME}.{VERSION}.patch 
+   Ex. PHOENIX-4872.master.v1.patch, PHOENIX-4872.master.v2.patch, 
PHOENIX-4872.4.x-HBase-1.3.v1.patch etc 
+   

GitHub workflow 
 
 Create a pull request in GitHub for the https://github.com/apache/phoenix;>mirror of the Phoenix Git 
repository. 
-Add a comment in the Jira issue with a link to the pull request. This 
makes it clear that the patch is ready for review. 
+Generate a patch and attach it to the jira, so that Hadoop QA runs 
automated tests. 
+If you update the PR, generate a new patch with different name from 
the previous, so that change in the patch is detected and tests are run on the 
new patch. 
 


@@ -474,7 +482,7 @@


Back to 
top
-   Copyright 2018 http://www.apache.org;>Apache Software Foundation. All Rights 
Reserved.
+   Copyright 2019 http://www.apache.org;>Apache Software Foundation. All Rights 
Reserved.




Modified: phoenix/site/source/src/site/markdown/contributing.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/contributing.md?rev=1858952&r1=1858951&r2=1858952&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/contributing.md (original)
+++ phoenix/site/source/src/site/markdown/contributing.md Wed May  8 21:43:50 
2019
@@ -30,18 +30,28 @@ To setup your development, see [these](d
 
 ### Generate a patch
 
-There are two general approaches that can be used for creating and submitting 
a patch: GitHub pull requests, or manually creating a patch with Git. Both of 
these are explained below.
+There are two general approaches that can be used for creating and submitting 
a patch: GitHub pull requests, or manually creating a patch with Git. Both of 
these are explained below. Please make sure that the patch applies cleanly on 
all the active branches including master: 5.x-HBase-2.0, 4.x-HBase-1.4, 
4.x-HBase-1.3, and 4.x-HBase-1.2
 
 Regardless of which approach is taken, please make sure to follow the Phoenix 
code conventions (more information below). Whenever possible, unit tests or 
integration tests should be included with patches.
 
+Please make sure that the patch contains only one commit and click on the 
'Submit patch' button to automatically trigger the tests on the patch.
+
 The commit message should reference the jira ticket issue (which has the format
-`PHOENIX-{NUMBER}`).
+`PHOENIX-{NUMBER}:{JIRA-TITLE}`).
+
+To effectively get the patch reviewed, please raise the pull request against 
an appropriate branch.
+
+ Naming convention for the patch
+When you generate the patch, make sure the name of the patch has following 
format:
+`PHOENIX-{NUMBER

svn commit: r1857611 - /phoenix/site/source/src/site/markdown/contributing.md

2019-04-15 Thread achouhan
Author: achouhan
Date: Tue Apr 16 01:01:48 2019
New Revision: 1857611

URL: http://svn.apache.org/viewvc?rev=1857611&view=rev
Log:
Revert PHOENIX-5203 Update contributing guidelines on Phoenix website

Modified:
phoenix/site/source/src/site/markdown/contributing.md

Modified: phoenix/site/source/src/site/markdown/contributing.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/contributing.md?rev=1857611&r1=1857610&r2=1857611&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/contributing.md (original)
+++ phoenix/site/source/src/site/markdown/contributing.md Tue Apr 16 01:01:48 
2019
@@ -30,28 +30,18 @@ To setup your development, see [these](d
 
 ### Generate a patch
 
-There are two general approaches that can be used for creating and submitting 
a patch: GitHub pull requests, or manually creating a patch with Git. Both of 
these are explained below. Please make sure that the patch applies cleanly on 
all the active branches including master: 5.x-HBase-2.0, 4.x-HBase-1.4, 
4.x-HBase-1.3, and 4.x-HBase-1.2
+There are two general approaches that can be used for creating and submitting 
a patch: GitHub pull requests, or manually creating a patch with Git. Both of 
these are explained below.
 
 Regardless of which approach is taken, please make sure to follow the Phoenix 
code conventions (more information below). Whenever possible, unit tests or 
integration tests should be included with patches.
 
-Please make sure that the patch contains only one commit and click on the 
'Submit patch' button to automatically trigger the tests on the patch.
-
 The commit message should reference the jira ticket issue (which has the format
-`PHOENIX-{NUMBER}`: ).
-
-To effectively get the patch reviewed, please raise the pull request against 
an appropriate branch.
-
- Naming convention for the patch
-When you generate the patch, make sure the name of the patch has following 
format:
-`PHOENIX-{NUMBER}.{BRANCH-NAME}.{VERSION}`
+`PHOENIX-{NUMBER}`).
 
-Ex. PHOENIX-4872.master.v1.patch, PHOENIX-4872.master.v2.patch, 
PHOENIX-4872.4.x-HBase-1.3.v1.patch etc
 
  GitHub workflow
 
 1. Create a pull request in GitHub for the [mirror of the Phoenix Git 
repository](https://github.com/apache/phoenix). 
-2. Generate a patch and attach it to the jira, so that Hadoop QA runs 
automated tests.
-3. If you update the PR, generate a new patch with different name from the 
previous, so that change in the patch is detected and tests are run on the new 
patch.
+2. Add a comment in the Jira issue with a link to the pull request. This makes 
it clear that the patch is ready for review.
 
  Local Git workflow
 




svn commit: r1857610 - /phoenix/site/source/src/site/markdown/contributing.md

2019-04-15 Thread achouhan
Author: achouhan
Date: Tue Apr 16 00:44:34 2019
New Revision: 1857610

URL: http://svn.apache.org/viewvc?rev=1857610&view=rev
Log:
PHOENIX-5203 Update contributing guidelines on Phoenix website

Modified:
phoenix/site/source/src/site/markdown/contributing.md

Modified: phoenix/site/source/src/site/markdown/contributing.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/contributing.md?rev=1857610&r1=1857609&r2=1857610&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/contributing.md (original)
+++ phoenix/site/source/src/site/markdown/contributing.md Tue Apr 16 00:44:34 
2019
@@ -30,18 +30,28 @@ To setup your development, see [these](d
 
 ### Generate a patch
 
-There are two general approaches that can be used for creating and submitting 
a patch: GitHub pull requests, or manually creating a patch with Git. Both of 
these are explained below.
+There are two general approaches that can be used for creating and submitting 
a patch: GitHub pull requests, or manually creating a patch with Git. Both of 
these are explained below. Please make sure that the patch applies cleanly on 
all the active branches including master: 5.x-HBase-2.0, 4.x-HBase-1.4, 
4.x-HBase-1.3, and 4.x-HBase-1.2
 
 Regardless of which approach is taken, please make sure to follow the Phoenix 
code conventions (more information below). Whenever possible, unit tests or 
integration tests should be included with patches.
 
+Please make sure that the patch contains only one commit and click on the 
'Submit patch' button to automatically trigger the tests on the patch.
+
 The commit message should reference the jira ticket issue (which has the format
-`PHOENIX-{NUMBER}`).
+`PHOENIX-{NUMBER}`: ).
+
+To effectively get the patch reviewed, please raise the pull request against 
an appropriate branch.
+
+ Naming convention for the patch
+When you generate the patch, make sure the name of the patch has following 
format:
+`PHOENIX-{NUMBER}.{BRANCH-NAME}.{VERSION}`
 
+Ex. PHOENIX-4872.master.v1.patch, PHOENIX-4872.master.v2.patch, 
PHOENIX-4872.4.x-HBase-1.3.v1.patch etc
 
  GitHub workflow
 
 1. Create a pull request in GitHub for the [mirror of the Phoenix Git 
repository](https://github.com/apache/phoenix). 
-2. Add a comment in the Jira issue with a link to the pull request. This makes 
it clear that the patch is ready for review.
+2. Generate a patch and attach it to the jira, so that Hadoop QA runs 
automated tests.
+3. If you update the PR, generate a new patch with different name from the 
previous, so that change in the patch is detected and tests are run on the new 
patch.
 
  Local Git workflow