[phoenix] branch 4.x updated: PHOENIX-5171 SkipScan incorrectly filters composite primary key which the key range contains all values.

2020-09-09 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new fece8e6  PHOENIX-5171 SkipScan incorrectly filters composite primary key which the key range contains all values.
fece8e6 is described below

commit fece8e69b9c03c80db7a0801d99e5de31fe15ffa
Author: Lars 
AuthorDate: Wed Sep 9 16:32:03 2020 -0700

    PHOENIX-5171 SkipScan incorrectly filters composite primary key which the key range contains all values.
---
 .../apache/phoenix/end2end/SkipScanQueryIT.java| 33 
 .../java/org/apache/phoenix/util/ScanUtil.java |  5 +--
 .../apache/phoenix/filter/SkipScanFilterTest.java  | 36 +-
 3 files changed, 69 insertions(+), 5 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
index ccba651..c06b528 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
@@ -712,4 +712,37 @@ public class SkipScanQueryIT extends ParallelStatsDisabledIT {
 assertResultSet(rs, new Object[][]{{1,3,2,10},{1,3,5,6}});
 }
 }
+
+@Test
+public void testKeyRangesContainsAllValues() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE IF NOT EXISTS " + tableName + "(" +
+ " vdate VARCHAR, " +
+ " tab VARCHAR, " +
+ " dev TINYINT NOT NULL, " +
+ " app VARCHAR, " +
+ " target VARCHAR, " +
+ " channel VARCHAR, " +
+ " one VARCHAR, " +
+ " two VARCHAR, " +
+ " count1 INTEGER, " +
+ " count2 INTEGER, " +
+ " CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two))";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2)");
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES('2018-02-14','channel_agg',2,null,null,null,null,null,2,2)");
+conn.commit();
+
+ResultSet rs = conn.createStatement().executeQuery(
+"SELECT * FROM " + tableName +
+" WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND '2019-02-19'" +
+" AND tab = 'channel_agg' AND channel='A004'");
+
+assertTrue(rs.next());
+assertEquals("2018-02-14", rs.getString(1));
+assertFalse(rs.next());
+}
+}
 }
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java
index 28ea349..be63fae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java
@@ -406,12 +406,9 @@ public class ScanUtil {
  *for the same reason. However, if the type is variable width
  *continue building the key because null values will be filtered
  *since our separator byte will be appended and incremented.
- * 3) if the range includes everything as we cannot add any more useful
- *information to the key after that.
  */
 lastUnboundUpper = false;
-if (  range.isUnbound(bound) &&
-( bound == Bound.UPPER || isFixedWidth || range == KeyRange.EVERYTHING_RANGE) ){
+if (range.isUnbound(bound) && (bound == Bound.UPPER || isFixedWidth)) {
 lastUnboundUpper = (bound == Bound.UPPER);
 break;
 }
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanFilterTest.java b/phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanFilterTest.java
index ee6e68b..396ad84 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanFilterTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanFilterTest.java
@@ -137,7 +137,8 @@ public class SkipScanFilterTest extends TestCase {
  
QueryConstants.SEPARATOR_BYTE_ARRAY,
  Bytes.toBytes("1") ), 
 
ByteU
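The ScanUtil change in this commit drops the EVERYTHING_RANGE clause from the loop's break condition, so key building no longer stops at an unconstrained variable-width PK slot, and trailing constraints (such as channel='A004' in the new IT) stay in the scan key. Below is a minimal sketch of the before/after condition using hypothetical stand-in types, not Phoenix's actual Bound/KeyRange classes:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the PHOENIX-5171 condition change with hypothetical stand-in
// types. Each PK slot has a range; a range may be unbound on either side,
// and an "everything" range is unbound on both.
public class SkipScanKeySketch {
    enum Bound { LOWER, UPPER }

    static final class Range {
        final boolean unboundLower;
        final boolean unboundUpper;
        Range(boolean unboundLower, boolean unboundUpper) {
            this.unboundLower = unboundLower;
            this.unboundUpper = unboundUpper;
        }
        boolean isUnbound(Bound b) {
            return b == Bound.LOWER ? unboundLower : unboundUpper;
        }
        boolean isEverything() {
            return unboundLower && unboundUpper;
        }
    }

    // Counts how many PK slots contribute to the scan key before the loop
    // breaks, under the old and new break conditions.
    static int slotsUsed(List<Range> slots, Bound bound, boolean isFixedWidth, boolean oldCondition) {
        int used = 0;
        for (Range range : slots) {
            boolean stop = oldCondition
                ? range.isUnbound(bound) && (bound == Bound.UPPER || isFixedWidth || range.isEverything())
                : range.isUnbound(bound) && (bound == Bound.UPPER || isFixedWidth);
            if (stop) {
                break;
            }
            used++;
        }
        return used;
    }

    // Mirrors the shape of the new IT: bounded vdate/tab/dev, unconstrained
    // app/target ("everything"), then a bounded channel slot at the end.
    static List<Range> example() {
        return Arrays.asList(
            new Range(false, false),  // vdate BETWEEN ...
            new Range(false, false),  // tab = 'channel_agg'
            new Range(false, false),  // dev = 2
            new Range(true, true),    // app: everything
            new Range(true, true),    // target: everything
            new Range(false, false)); // channel = 'A004'
    }

    public static void main(String[] args) {
        // The old condition stops at the first "everything" slot, dropping the
        // trailing channel constraint; the new one keeps building the key for
        // variable-width columns.
        System.out.println("old=" + slotsUsed(example(), Bound.LOWER, false, true)
            + " new=" + slotsUsed(example(), Bound.LOWER, false, false));
    }
}
```

With the example slots above, the old condition uses only the first three slots while the new one uses all six, which is why the trailing channel='A004' filter was previously lost.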

[phoenix] branch master updated: PHOENIX-5171 SkipScan incorrectly filters composite primary key which the key range contains all values.

2020-09-09 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new f5cc0ad  PHOENIX-5171 SkipScan incorrectly filters composite primary key which the key range contains all values.
f5cc0ad is described below

commit f5cc0ad49f13283fefccddf1b187da95eecdb423
Author: Lars 
AuthorDate: Wed Sep 9 16:32:03 2020 -0700

    PHOENIX-5171 SkipScan incorrectly filters composite primary key which the key range contains all values.
---
 .../apache/phoenix/end2end/SkipScanQueryIT.java| 33 
 .../java/org/apache/phoenix/util/ScanUtil.java |  5 +--
 .../apache/phoenix/filter/SkipScanFilterTest.java  | 36 +-
 3 files changed, 69 insertions(+), 5 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
index 64b897dd..6e269da 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
@@ -713,4 +713,37 @@ public class SkipScanQueryIT extends ParallelStatsDisabledIT {
 assertResultSet(rs, new Object[][]{{1,3,2,10},{1,3,5,6}});
 }
 }
+
+@Test
+public void testKeyRangesContainsAllValues() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE IF NOT EXISTS " + tableName + "(" +
+ " vdate VARCHAR, " +
+ " tab VARCHAR, " +
+ " dev TINYINT NOT NULL, " +
+ " app VARCHAR, " +
+ " target VARCHAR, " +
+ " channel VARCHAR, " +
+ " one VARCHAR, " +
+ " two VARCHAR, " +
+ " count1 INTEGER, " +
+ " count2 INTEGER, " +
+ " CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two))";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2)");
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES('2018-02-14','channel_agg',2,null,null,null,null,null,2,2)");
+conn.commit();
+
+ResultSet rs = conn.createStatement().executeQuery(
+"SELECT * FROM " + tableName +
+" WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND '2019-02-19'" +
+" AND tab = 'channel_agg' AND channel='A004'");
+
+assertTrue(rs.next());
+assertEquals("2018-02-14", rs.getString(1));
+assertFalse(rs.next());
+}
+}
 }
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java
index 0a37411..c892ed2 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java
@@ -406,12 +406,9 @@ public class ScanUtil {
  *for the same reason. However, if the type is variable width
  *continue building the key because null values will be filtered
  *since our separator byte will be appended and incremented.
- * 3) if the range includes everything as we cannot add any more useful
- *information to the key after that.
  */
 lastUnboundUpper = false;
-if (  range.isUnbound(bound) &&
-( bound == Bound.UPPER || isFixedWidth || range == KeyRange.EVERYTHING_RANGE) ){
+if (range.isUnbound(bound) && (bound == Bound.UPPER || isFixedWidth)) {
 lastUnboundUpper = (bound == Bound.UPPER);
 break;
 }
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanFilterTest.java b/phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanFilterTest.java
index 8f78588..641db78 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanFilterTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/filter/SkipScanFilterTest.java
@@ -138,7 +138,8 @@ public class SkipScanFilterTest extends TestCase {
  
QueryConstants.SEPARATOR_BYTE_ARRAY,
  Bytes.toBytes("1") )

[phoenix] branch 4.x updated: PHOENIX-6115 Avoid scanning prior row state for uncovered local indexes on immutable tables.

2020-09-01 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 93a85f0  PHOENIX-6115 Avoid scanning prior row state for uncovered local indexes on immutable tables.
93a85f0 is described below

commit 93a85f0e67a771408be4840906dbd854415cf207
Author: Lars 
AuthorDate: Tue Sep 1 10:47:34 2020 -0700

    PHOENIX-6115 Avoid scanning prior row state for uncovered local indexes on immutable tables.
---
 .../hbase/index/covered/data/CachedLocalTable.java| 19 ---
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/CachedLocalTable.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/CachedLocalTable.java
index 7091178..83bec4b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/CachedLocalTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/CachedLocalTable.java
@@ -21,8 +21,10 @@ import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 import org.apache.hadoop.hbase.Cell;
@@ -44,13 +46,11 @@ import org.apache.phoenix.schema.types.PVarbinary;
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Sets;
 
-import java.util.HashMap;
-
 public class CachedLocalTable implements LocalHBaseState {
 
-private final HashMap<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells;
+private final Map<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells;

-private CachedLocalTable(HashMap<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells) {
+private CachedLocalTable(Map<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells) {
 this.rowKeyPtrToCells = rowKeyPtrToCells;
 }
 
@@ -86,7 +86,7 @@ public class CachedLocalTable implements LocalHBaseState {
 }
 
 @VisibleForTesting
-public static CachedLocalTable build(HashMap<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells) {
+public static CachedLocalTable build(Map<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells) {
 return new CachedLocalTable(rowKeyPtrToCells);
 }
 
@@ -111,12 +111,17 @@ public class CachedLocalTable implements LocalHBaseState {
 Collection dataTableMutationsWithSameRowKeyAndTimestamp,
 PhoenixIndexMetaData indexMetaData,
 Region region) throws IOException {
-List<IndexMaintainer> indexTableMaintainers = indexMetaData.getIndexMaintainers();
 Set<KeyRange> keys = new HashSet<KeyRange>(dataTableMutationsWithSameRowKeyAndTimestamp.size());
 for (Mutation mutation : dataTableMutationsWithSameRowKeyAndTimestamp) {
+  if (indexMetaData.requiresPriorRowState(mutation)) {
 keys.add(PVarbinary.INSTANCE.getKeyRange(mutation.getRow()));
+  }
+}
+if (keys.isEmpty()) {
+return new CachedLocalTable(Collections.<ImmutableBytesPtr, List<Cell>>emptyMap());
 }

+List<IndexMaintainer> indexTableMaintainers = indexMetaData.getIndexMaintainers();
 Set getterColumnReferences = Sets.newHashSet();
 for (IndexMaintainer indexTableMaintainer : indexTableMaintainers) {
 getterColumnReferences.addAll(
@@ -149,7 +154,7 @@ public class CachedLocalTable implements LocalHBaseState {
 scan.setFilter(skipScanFilter);
 }
 
-HashMap<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells =
+Map<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells =
 new HashMap<ImmutableBytesPtr, List<Cell>>();
 try (RegionScanner scanner = region.getScanner(scan)) {
 boolean more = true;
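The core of this commit is visible in the hunk above: row keys are collected only for mutations that actually require prior row state, and when none do the region scan is skipped entirely. A small self-contained sketch of that filtering step, using a hypothetical Mutation stand-in rather than HBase's actual class:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-ins: this models only the filtering step, not HBase's
// Mutation or Phoenix's PhoenixIndexMetaData.
public class PriorRowStateSketch {
    interface Mutation {
        String getRowKey();
        boolean requiresPriorRowState();
    }

    static Mutation of(final String rowKey, final boolean needsPriorState) {
        return new Mutation() {
            @Override public String getRowKey() { return rowKey; }
            @Override public boolean requiresPriorRowState() { return needsPriorState; }
        };
    }

    // Collect only the row keys whose mutations need the prior row state.
    // An empty result lets the caller return an empty cache without issuing
    // a region scan at all -- the PHOENIX-6115 optimization.
    static Set<String> keysNeedingPriorState(Collection<Mutation> mutations) {
        Set<String> keys = new HashSet<String>();
        for (Mutation m : mutations) {
            if (m.requiresPriorRowState()) {
                keys.add(m.getRowKey());
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        Set<String> keys = keysNeedingPriorState(Arrays.asList(of("row1", false), of("row2", true)));
        System.out.println(keys); // only row2 needs a lookup
    }
}
```

For uncovered local indexes on immutable tables every mutation reports that it does not need prior state, so the key set is empty and the scan is avoided altogether.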



[phoenix] branch master updated: PHOENIX-6115 Avoid scanning prior row state for uncovered local indexes on immutable tables.

2020-09-01 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new b0f22e6  PHOENIX-6115 Avoid scanning prior row state for uncovered local indexes on immutable tables.
b0f22e6 is described below

commit b0f22e66874676806b12a9abaf4e72570aadfff9
Author: Lars 
AuthorDate: Tue Sep 1 10:10:39 2020 -0700

    PHOENIX-6115 Avoid scanning prior row state for uncovered local indexes on immutable tables.
---
 .../hbase/index/covered/data/CachedLocalTable.java  | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/CachedLocalTable.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/CachedLocalTable.java
index 2fd91f7..c04796d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/CachedLocalTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/CachedLocalTable.java
@@ -21,8 +21,10 @@ import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 import org.apache.hadoop.hbase.Cell;
@@ -44,14 +46,12 @@ import org.apache.phoenix.schema.types.PVarbinary;
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Sets;
 
-import java.util.HashMap;
-
 public class CachedLocalTable implements LocalHBaseState {
 
-private final HashMap<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells;
+private final Map<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells;
 private final Region region;

-private CachedLocalTable(HashMap<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells, Region region) {
+private CachedLocalTable(Map<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells, Region region) {
 this.rowKeyPtrToCells = rowKeyPtrToCells;
 this.region = region;
 }
@@ -95,7 +95,7 @@ public class CachedLocalTable implements LocalHBaseState {
 }
 
 @VisibleForTesting
-public static CachedLocalTable build(HashMap<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells) {
+public static CachedLocalTable build(Map<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells) {
 return new CachedLocalTable(rowKeyPtrToCells, null);
 }
 
@@ -105,7 +105,7 @@ public class CachedLocalTable implements LocalHBaseState {
 Region region) throws IOException {
 if(indexMetaData.getReplayWrite() != null)
 {
-return new CachedLocalTable(new HashMap<ImmutableBytesPtr, List<Cell>>(), region);
+return new CachedLocalTable(Collections.emptyMap(), region);
 }
 return preScanAllRequiredRows(dataTableMutationsWithSameRowKeyAndTimestamp, indexMetaData, region);
 }
@@ -124,12 +124,17 @@ public class CachedLocalTable implements LocalHBaseState {
 Collection dataTableMutationsWithSameRowKeyAndTimestamp,
 PhoenixIndexMetaData indexMetaData,
 Region region) throws IOException {
-List<IndexMaintainer> indexTableMaintainers = indexMetaData.getIndexMaintainers();
 Set<KeyRange> keys = new HashSet<KeyRange>(dataTableMutationsWithSameRowKeyAndTimestamp.size());
 for (Mutation mutation : dataTableMutationsWithSameRowKeyAndTimestamp) {
+  if (indexMetaData.requiresPriorRowState(mutation)) {
 keys.add(PVarbinary.INSTANCE.getKeyRange(mutation.getRow()));
+  }
+}
+if (keys.isEmpty()) {
+return new CachedLocalTable(Collections.emptyMap(), region);
 }

+List<IndexMaintainer> indexTableMaintainers = indexMetaData.getIndexMaintainers();
 Set getterColumnReferences = Sets.newHashSet();
 for (IndexMaintainer indexTableMaintainer : indexTableMaintainers) {
 getterColumnReferences.addAll(
@@ -162,7 +167,7 @@ public class CachedLocalTable implements LocalHBaseState {
 scan.setFilter(skipScanFilter);
 }
 
-HashMap<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells =
+Map<ImmutableBytesPtr, List<Cell>> rowKeyPtrToCells =
 new HashMap<ImmutableBytesPtr, List<Cell>>();
 try (RegionScanner scanner = region.getScanner(scan)) {
 boolean more = true;



[phoenix] branch 4.x updated: PHOENIX-6106 Speed up ConcurrentMutationsExtendedIT.

2020-08-26 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 3c2fa78  PHOENIX-6106 Speed up ConcurrentMutationsExtendedIT.
3c2fa78 is described below

commit 3c2fa78b52d0fd48f111383937e3266a5ec34de0
Author: Lars 
AuthorDate: Wed Aug 26 20:03:56 2020 -0700

PHOENIX-6106 Speed up ConcurrentMutationsExtendedIT.
---
 .../java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java
index 1c8f7ad..0a52c66 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java
@@ -271,7 +271,7 @@ public class ConcurrentMutationsExtendedIT extends ParallelStatsDisabledIT {
 verifyIndexTable(tableName, indexName, conn);
 }
 
-@Test @Repeat(5)
+@Test
 public void testConcurrentUpserts() throws Exception {
 int nThreads = 4;
 final int batchSize = 200;



[phoenix] branch master updated: PHOENIX-6106 Speed up ConcurrentMutationsExtendedIT.

2020-08-26 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 020dd0b  PHOENIX-6106 Speed up ConcurrentMutationsExtendedIT.
020dd0b is described below

commit 020dd0b92b7b860fdd9f8b3d62e36af8de8ec9f7
Author: Lars 
AuthorDate: Wed Aug 26 20:02:57 2020 -0700

PHOENIX-6106 Speed up ConcurrentMutationsExtendedIT.
---
 .../java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java
index d35451a..9389d0c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java
@@ -227,7 +227,7 @@ public class ConcurrentMutationsExtendedIT extends ParallelStatsDisabledIT {
 verifyIndexTable(tableName, indexName, conn);
 }
 
-@Test @Repeat(5)
+@Test
 public void testConcurrentUpserts() throws Exception {
 int nThreads = 4;
 final int batchSize = 200;



[phoenix] branch 4.x updated: PHOENIX-6101 Avoid duplicate work between local and global indexes.

2020-08-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 81d404b  PHOENIX-6101 Avoid duplicate work between local and global indexes.
81d404b is described below

commit 81d404b4b02904211d95e86b492c8b52ee1bbcff
Author: Lars 
AuthorDate: Tue Aug 25 12:50:41 2020 -0700

PHOENIX-6101 Avoid duplicate work between local and global indexes.
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 106 -
 1 file changed, 63 insertions(+), 43 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index 49b5509..bcf718c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HConstants.OperationStatusCode;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.Increment;
@@ -429,10 +430,20 @@ public class IndexRegionObserver extends BaseRegionObserver {
 return context.multiMutationMap.values();
 }
 
-public static void setTimestamp(Mutation m, long ts) throws IOException {
-for (List<Cell> cells : m.getFamilyCellMap().values()) {
-for (Cell cell : cells) {
-CellUtil.setTimestamp(cell, ts);
+public static void setTimestamps(MiniBatchOperationInProgress<Mutation> miniBatchOp, IndexBuildManager builder, long ts) throws IOException {
+for (Integer i = 0; i < miniBatchOp.size(); i++) {
+if (miniBatchOp.getOperationStatus(i) == IGNORE) {
+continue;
+}
+Mutation m = miniBatchOp.getOperation(i);
+// skip this mutation if we aren't enabling indexing
+if (!builder.isEnabled(m)) {
+continue;
+}
+for (List<Cell> cells : m.getFamilyCellMap().values()) {
+for (Cell cell : cells) {
+CellUtil.setTimestamp(cell, ts);
+}
 }
 }
 }
@@ -502,9 +513,6 @@ public class IndexRegionObserver extends BaseRegionObserver {
 if (!this.builder.isEnabled(m)) {
 continue;
 }
-// We update the time stamp of the data table to prevent overlapping time stamps (which prevents index
-// inconsistencies as this case isn't handled correctly currently).
-setTimestamp(m, now);
 if (m instanceof Put) {
 ImmutableBytesPtr rowKeyPtr = new ImmutableBytesPtr(m.getRow());
 Pair<Put, Put> dataRowState = context.dataRowStates.get(rowKeyPtr);
@@ -554,13 +562,13 @@ public class IndexRegionObserver extends BaseRegionObserver {
  * The index update generation for local indexes uses the existing index update generation code (i.e.,
  * the {@link IndexBuilder} implementation).
  */
-private void handleLocalIndexUpdates(ObserverContext c,
+private void handleLocalIndexUpdates(TableName table,
 MiniBatchOperationInProgress<Mutation> miniBatchOp,
 Collection pendingMutations,
 PhoenixIndexMetaData indexMetaData) throws Throwable {
 ListMultimap<HTableInterfaceReference, Pair<Mutation, byte[]>> indexUpdates = ArrayListMultimap.<HTableInterfaceReference, Pair<Mutation, byte[]>>create();
 this.builder.getIndexUpdates(indexUpdates, miniBatchOp, pendingMutations, indexMetaData);
-byte[] tableName = c.getEnvironment().getRegion().getTableDesc().getTableName().getName();
+byte[] tableName = table.getName();
 HTableInterfaceReference hTableInterfaceReference =
 new HTableInterfaceReference(new ImmutableBytesPtr(tableName));
 List<Pair<Mutation, byte[]>> localIndexUpdates = indexUpdates.removeAll(hTableInterfaceReference);
@@ -685,10 +693,7 @@ public class IndexRegionObserver extends BaseRegionObserver {
  * unverified status. In phase 2, data table mutations are applied. In phase 3, the status for an index table row is
  * either set to "verified" or the row is deleted.
  */
-private boolean preparePreIndexMutations(ObserverContext c,
-  MiniBatchOperationInProgress miniBatchOp,
-  BatchMutateContext context,
-  Collection pendingMutations,
+private void preparePreIndexMutations(Batc
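The new setTimestamps in the hunk above stamps the whole mini-batch at once, skipping operations that were ignored or for which indexing is disabled. A toy sketch of that skip logic, using plain arrays in place of HBase's MiniBatchOperationInProgress and Cell types (the names here are stand-ins, not HBase APIs):

```java
import java.util.Arrays;

// Toy model of batch-wide timestamping: plain arrays stand in for the
// mini-batch, per-operation status, and per-mutation cell timestamps.
public class BatchTimestampSketch {
    static final int IGNORE = -1; // stand-in for an "ignored" operation status

    // Returns a copy of cellTs with ts applied to every operation that is
    // neither ignored nor excluded from indexing; skipped mutations keep
    // their original timestamps, mirroring the two continue branches above.
    static long[] setTimestamps(long[] cellTs, int[] status, boolean[] indexingEnabled, long ts) {
        long[] out = Arrays.copyOf(cellTs, cellTs.length);
        for (int i = 0; i < out.length; i++) {
            if (status[i] == IGNORE || !indexingEnabled[i]) {
                continue;
            }
            out[i] = ts;
        }
        return out;
    }

    public static void main(String[] args) {
        long[] stamped = setTimestamps(
            new long[]{1L, 1L, 1L},
            new int[]{0, IGNORE, 0},
            new boolean[]{true, true, false},
            9L);
        System.out.println(Arrays.toString(stamped));
    }
}
```

Only the first mutation is stamped in the example: the second is ignored and the third has indexing disabled, which is exactly the pair of guards the committed loop applies before touching any cells.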

[phoenix] branch master updated: PHOENIX-6101 Avoid duplicate work between local and global indexes.

2020-08-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new c109a61  PHOENIX-6101 Avoid duplicate work between local and global indexes.
c109a61 is described below

commit c109a61890fd2ea14a7274808b43298b6e221b11
Author: Lars 
AuthorDate: Tue Aug 25 12:31:18 2020 -0700

PHOENIX-6101 Avoid duplicate work between local and global indexes.
---
 .../phoenix/hbase/index/IndexRegionObserver.java   | 107 -
 1 file changed, 63 insertions(+), 44 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index 2d0cf51..50e1f68 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HConstants.OperationStatusCode;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.Increment;
@@ -443,10 +444,20 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
 return context.multiMutationMap.values();
 }
 
-public static void setTimestamp(Mutation m, long ts) throws IOException {
-for (List<Cell> cells : m.getFamilyCellMap().values()) {
-for (Cell cell : cells) {
-CellUtil.setTimestamp(cell, ts);
+public static void setTimestamps(MiniBatchOperationInProgress<Mutation> miniBatchOp, IndexBuildManager builder, long ts) throws IOException {
+for (Integer i = 0; i < miniBatchOp.size(); i++) {
+if (miniBatchOp.getOperationStatus(i) == IGNORE) {
+continue;
+}
+Mutation m = miniBatchOp.getOperation(i);
+// skip this mutation if we aren't enabling indexing
+if (!builder.isEnabled(m)) {
+continue;
+}
+for (List<Cell> cells : m.getFamilyCellMap().values()) {
+for (Cell cell : cells) {
+CellUtil.setTimestamp(cell, ts);
+}
 }
 }
 }
@@ -516,10 +527,6 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
 if (!this.builder.isEnabled(m)) {
 continue;
 }
-// Unless we're replaying edits to rebuild the index, we update the time stamp
-// of the data table to prevent overlapping time stamps (which prevents index
-// inconsistencies as this case isn't handled correctly currently).
-setTimestamp(m, now);
 if (m instanceof Put) {
 ImmutableBytesPtr rowKeyPtr = new ImmutableBytesPtr(m.getRow());
 Pair<Put, Put> dataRowState = context.dataRowStates.get(rowKeyPtr);
@@ -569,13 +576,13 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
  * The index update generation for local indexes uses the existing index update generation code (i.e.,
  * the {@link IndexBuilder} implementation).
  */
-private void handleLocalIndexUpdates(ObserverContext c,
+private void handleLocalIndexUpdates(TableName table,
 MiniBatchOperationInProgress<Mutation> miniBatchOp,
 Collection pendingMutations,
 PhoenixIndexMetaData indexMetaData) throws Throwable {
 ListMultimap<HTableInterfaceReference, Pair<Mutation, byte[]>> indexUpdates = ArrayListMultimap.<HTableInterfaceReference, Pair<Mutation, byte[]>>create();
 this.builder.getIndexUpdates(indexUpdates, miniBatchOp, pendingMutations, indexMetaData);
-byte[] tableName = c.getEnvironment().getRegion().getTableDescriptor().getTableName().getName();
+byte[] tableName = table.getName();
 HTableInterfaceReference hTableInterfaceReference =
 new HTableInterfaceReference(new ImmutableBytesPtr(tableName));
 List<Pair<Mutation, byte[]>> localIndexUpdates = indexUpdates.removeAll(hTableInterfaceReference);
@@ -702,10 +709,7 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
  * unverified status. In phase 2, data table mutations are applied. In phase 3, the status for an index table row is
  * either set to "verified" or the row is deleted.
  */
-private boolean preparePreIndexMutations(ObserverContext c,
-  MiniBatchOperationInProgress miniBatchOp,
-  Batc

[phoenix] branch 4.x updated: PHOENIX-6097 Improve LOCAL index consistency tests.

2020-08-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new c1f3f19  PHOENIX-6097 Improve LOCAL index consistency tests.
c1f3f19 is described below

commit c1f3f194bec0efaa83930aad16dd319b44bb27b0
Author: Lars 
AuthorDate: Mon Aug 24 09:55:58 2020 -0700

PHOENIX-6097 Improve LOCAL index consistency tests.
---
 .../apache/phoenix/end2end/index/LocalIndexIT.java | 61 +-
 1 file changed, 47 insertions(+), 14 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
index 012bbca..14e85ab 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
@@ -112,25 +112,58 @@ public class LocalIndexIT extends BaseLocalIndexIT {
 Connection conn = getConnection();
 conn.setAutoCommit(true);
 
-conn.createStatement().execute("CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, v1 FLOAT) SPLIT ON (2000)");
+conn.createStatement().execute("CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, v1 FLOAT) SPLIT ON (4000)");
 conn.createStatement().execute("CREATE LOCAL INDEX " + indexName + " ON " + tableName + "(v1)");
-conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES(rand() * 4000, rand())");
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES(rand() * 8000, rand())");
 
-ResultSet rs;
-for (int i=0; i<15; i++) {
-conn.createStatement().execute("UPSERT INTO " + tableName + " SELECT rand() * 4000, rand() FROM " + tableName);
+for (int i=0; i<16; i++) {
+conn.createStatement().execute("UPSERT INTO " + tableName + " SELECT rand() * 8000, rand() FROM " + tableName);
+assertEquals(getCountViaIndex(conn, tableName, null), getCountViaIndex(conn, tableName, indexName));
+}
+}
 
-rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM " + tableName);
-rs.next();
-int indexCount = rs.getInt(1);
-rs.close();
+@Test
+public void testLocalIndexConsistencyWithGlobalMix() throws Exception {
+if (isNamespaceMapped) {
+return;
+}
+String tableName = schemaName + "." + generateUniqueName();
+String localIndexNames[] = {"L_" + generateUniqueName(), "L_" + generateUniqueName()};
+String globalIndexNames[] = {"G_" + generateUniqueName(), "G_" + generateUniqueName()};
 
-rs = conn.createStatement().executeQuery("SELECT /*+ NO_INDEX */ COUNT(*) FROM " + tableName);
-rs.next();
-int tableCount = rs.getInt(1);
-rs.close();
+Connection conn = getConnection();
+conn.setAutoCommit(true);
 
-assertEquals(indexCount, tableCount);
+conn.createStatement().execute("CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, v1 FLOAT, v2 FLOAT, v3 FLOAT, v4 FLOAT) SPLIT ON (4000)");
+
+int idx=1;
+for (String indexName : localIndexNames) {
+conn.createStatement().execute("CREATE LOCAL INDEX " + indexName + " ON " + tableName + "(v" + idx++ +")");
+}
+for (String indexName : globalIndexNames) {
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v" + idx++ +")");
+}
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES(rand() * 8000, rand())");
+
+for (int i=0; i<16; i++) {
+conn.createStatement().execute("UPSERT INTO " + tableName + " SELECT rand() * 8000, rand() FROM " + tableName);
+
+int count = getCountViaIndex(conn, tableName, null);
+for (String indexName : localIndexNames) {
+assertEquals(count, getCountViaIndex(conn, tableName, indexName));
+}
+
+for (String indexName : globalIndexNames) {
+assertEquals(count, getCountViaIndex(conn, tableName, indexName));
+}
+}
+}
+
+private int getCountViaIndex(Connection conn, String tableName, String indexName) throws SQLException {
+String hint = indexName == null ? "NO_INDEX" : "INDEX(" + tableName + " " + indexName + ")";
+try (ResultSet rs = conn.createStatement().executeQuery("SELECT /*+ " + hint + " */ COUNT(*) FROM " + tableName)) {
+rs.next();
+return rs.getInt(1);
 }
 }
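The getCountViaIndex helper in the diff above relies on Phoenix's hint syntax: NO_INDEX forces a scan of the data table, while INDEX(table index) forces a specific index, so the two counts can be compared for consistency. A sketch of just the query-string construction (string handling only; no Phoenix connection involved):

```java
// String handling only; the hint grammar (NO_INDEX, INDEX(table index)) is
// taken from the diff above, and these class/method names are stand-ins.
public class IndexHintSketch {
    static String hint(String tableName, String indexName) {
        return indexName == null ? "NO_INDEX" : "INDEX(" + tableName + " " + indexName + ")";
    }

    // Builds the hinted COUNT(*) query used to compare data-table and
    // index-table row counts.
    static String countQuery(String tableName, String indexName) {
        return "SELECT /*+ " + hint(tableName, indexName) + " */ COUNT(*) FROM " + tableName;
    }

    public static void main(String[] args) {
        System.out.println(countQuery("T1", null));
        System.out.println(countQuery("T1", "IDX"));
    }
}
```

Passing null yields the data-table query and a concrete index name yields the index-forced query, which is exactly how the test drives both sides of the consistency check.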
 



[phoenix] branch master updated: PHOENIX-6097 Improve LOCAL index consistency tests.

2020-08-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 3113307  PHOENIX-6097 Improve LOCAL index consistency tests.
3113307 is described below

commit 3113307a313c409343255c84f17e766ebdbd1d8a
Author: Lars 
AuthorDate: Mon Aug 24 09:47:20 2020 -0700

PHOENIX-6097 Improve LOCAL index consistency tests.
---
 .../apache/phoenix/end2end/index/LocalIndexIT.java | 61 +-
 1 file changed, 47 insertions(+), 14 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
index 0965ce1..3d5a323 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
@@ -113,25 +113,58 @@ public class LocalIndexIT extends BaseLocalIndexIT {
 Connection conn = getConnection();
 conn.setAutoCommit(true);
 
-conn.createStatement().execute("CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, v1 FLOAT) SPLIT ON (2000)");
+conn.createStatement().execute("CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, v1 FLOAT) SPLIT ON (4000)");
 conn.createStatement().execute("CREATE LOCAL INDEX " + indexName + " ON " + tableName + "(v1)");
-conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES(rand() * 4000, rand())");
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES(rand() * 8000, rand())");
 
-ResultSet rs;
-for (int i=0; i<15; i++) {
-conn.createStatement().execute("UPSERT INTO " + tableName + " SELECT rand() * 4000, rand() FROM " + tableName);
+for (int i=0; i<16; i++) {
+conn.createStatement().execute("UPSERT INTO " + tableName + " SELECT rand() * 8000, rand() FROM " + tableName);
+assertEquals(getCountViaIndex(conn, tableName, null), getCountViaIndex(conn, tableName, indexName));
+}
+}
 
-rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM " + tableName);
-rs.next();
-int indexCount = rs.getInt(1);
-rs.close();
+@Test
+public void testLocalIndexConsistencyWithGlobalMix() throws Exception {
+if (isNamespaceMapped) {
+return;
+}
+String tableName = schemaName + "." + generateUniqueName();
+String localIndexNames[] = {"L_" + generateUniqueName(), "L_" + generateUniqueName()};
+String globalIndexNames[] = {"G_" + generateUniqueName(), "G_" + generateUniqueName()};
 
-rs = conn.createStatement().executeQuery("SELECT /*+ NO_INDEX */ COUNT(*) FROM " + tableName);
-rs.next();
-int tableCount = rs.getInt(1);
-rs.close();
+Connection conn = getConnection();
+conn.setAutoCommit(true);
 
-assertEquals(indexCount, tableCount);
+conn.createStatement().execute("CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, v1 FLOAT, v2 FLOAT, v3 FLOAT, v4 FLOAT) SPLIT ON (4000)");
+
+int idx=1;
+for (String indexName : localIndexNames) {
+conn.createStatement().execute("CREATE LOCAL INDEX " + indexName + " ON " + tableName + "(v" + idx++ +")");
+}
+for (String indexName : globalIndexNames) {
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v" + idx++ +")");
+}
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES(rand() * 8000, rand())");
+
+for (int i=0; i<16; i++) {
+conn.createStatement().execute("UPSERT INTO " + tableName + " SELECT rand() * 8000, rand() FROM " + tableName);
+
+int count = getCountViaIndex(conn, tableName, null);
+for (String indexName : localIndexNames) {
+assertEquals(count, getCountViaIndex(conn, tableName, indexName));
+}
+
+for (String indexName : globalIndexNames) {
+assertEquals(count, getCountViaIndex(conn, tableName, indexName));
+}
+}
+}
+
+private int getCountViaIndex(Connection conn, String tableName, String indexName) throws SQLException {
+String hint = indexName == null ? "NO_INDEX" : "INDEX(" + tableName + " " + indexName + ")";
+try (ResultSet rs = conn.createStatement().executeQuery("SELECT /*+ " + hint + " */ COUNT(*) FROM " + tableName)) {
+rs.next();
+return rs.getInt(1);
 }
 }
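
The getCountViaIndex helper above drives the consistency check by forcing the optimizer with a hint: NO_INDEX pins the query to the data table, INDEX(table idx) pins it to a specific index. A standalone sketch of just that hint construction (class and method names are illustrative, not Phoenix code):

```java
public class HintSketch {
    // Mirrors the hint logic in getCountViaIndex: NO_INDEX forces the data
    // table, INDEX(table idx) forces a specific index.
    static String countQuery(String tableName, String indexName) {
        String hint = indexName == null ? "NO_INDEX"
                : "INDEX(" + tableName + " " + indexName + ")";
        return "SELECT /*+ " + hint + " */ COUNT(*) FROM " + tableName;
    }

    public static void main(String[] args) {
        System.out.println(countQuery("T", null));
        System.out.println(countQuery("T", "IDX"));
    }
}
```

Comparing the two counts after every UPSERT SELECT round is what makes the test catch a data-table/index divergence as soon as it appears.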
 



[phoenix] branch 4.x updated: Update jacoco plugin version to 0.8.5.

2020-08-23 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new 355d95a  Update jacoco plugin version to 0.8.5.
355d95a is described below

commit 355d95a4762c3ccac1be35659f3c02c385e17b3b
Author: Lars 
AuthorDate: Sun Aug 23 11:42:52 2020 -0700

Update jacoco plugin version to 0.8.5.
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index b19cd3e..b98c498 100644
--- a/pom.xml
+++ b/pom.xml
@@ -137,7 +137,7 @@
 2.9
 
1.9.1
 3.0.0-M3
-0.7.9
+0.8.5
 
 
 8



[phoenix] branch master updated: Update jacoco plugin version to 0.8.5.

2020-08-23 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 3ec4599  Update jacoco plugin version to 0.8.5.
3ec4599 is described below

commit 3ec45999dbd38d689da2d4884bb1054107e55a1b
Author: Lars 
AuthorDate: Sun Aug 23 11:32:39 2020 -0700

Update jacoco plugin version to 0.8.5.
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 7eda4ed..81bdff7 100644
--- a/pom.xml
+++ b/pom.xml
@@ -132,7 +132,7 @@
 
1.9.1
 3.0.0-M3
 
${antlr.version}
-0.7.9
+0.8.5
 
 
 8



[phoenix] branch 4.x updated: Local indexes get out of sync after changes for global consistent indexes.

2020-08-22 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new daa6816  Local indexes get out of sync after changes for global 
consistent indexes.
daa6816 is described below

commit daa6816dcb3ac035bf8553e6bf2ff8a18e80e6e4
Author: Lars 
AuthorDate: Sat Aug 22 11:55:24 2020 -0700

Local indexes get out of sync after changes for global consistent indexes.
---
 .../apache/phoenix/end2end/index/LocalIndexIT.java | 33 ++
 .../phoenix/hbase/index/IndexRegionObserver.java   | 70 --
 2 files changed, 71 insertions(+), 32 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
index 481ce1c..012bbca 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
@@ -102,6 +102,39 @@ public class LocalIndexIT extends BaseLocalIndexIT {
 }
 
 @Test
+public void testLocalIndexConsistency() throws Exception {
+if (isNamespaceMapped) {
+return;
+}
+String tableName = schemaName + "." + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+
+Connection conn = getConnection();
+conn.setAutoCommit(true);
+
+conn.createStatement().execute("CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, v1 FLOAT) SPLIT ON (2000)");
+conn.createStatement().execute("CREATE LOCAL INDEX " + indexName + " ON " + tableName + "(v1)");
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES(rand() * 4000, rand())");
+
+ResultSet rs;
+for (int i=0; i<15; i++) {
+conn.createStatement().execute("UPSERT INTO " + tableName + " SELECT rand() * 4000, rand() FROM " + tableName);
+
+rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM " + tableName);
+rs.next();
+int indexCount = rs.getInt(1);
+rs.close();
+
+rs = conn.createStatement().executeQuery("SELECT /*+ NO_INDEX */ COUNT(*) FROM " + tableName);
+rs.next();
+int tableCount = rs.getInt(1);
+rs.close();
+
+assertEquals(indexCount, tableCount);
+}
+}
+
+@Test
 public void testUseUncoveredLocalIndexWithPrefix() throws Exception {
 String tableName = schemaName + "." + generateUniqueName();
 String indexName = "IDX_" + generateUniqueName();
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index e24b8e2..49b5509 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -685,7 +685,7 @@ public class IndexRegionObserver extends BaseRegionObserver {
  * unverified status. In phase 2, data table mutations are applied. In phase 3, the status for an index table row is
  * either set to "verified" or the row is deleted.
  */
-private void preparePreIndexMutations(ObserverContext c,
+private boolean preparePreIndexMutations(ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp,
   BatchMutateContext context,
   Collection pendingMutations,
@@ -699,13 +699,6 @@ public class IndexRegionObserver extends BaseRegionObserver {
 current = NullSpan.INSTANCE;
 }
 current.addTimelineAnnotation("Built index updates, doing preStep");
-// Handle local index updates
-for (IndexMaintainer indexMaintainer : maintainers) {
-if (indexMaintainer.isLocalIndex()) {
-handleLocalIndexUpdates(c, miniBatchOp, pendingMutations, indexMetaData);
-break;
-}
-}
 // The rest of this method is for handling global index updates
 context.indexUpdates = ArrayListMultimap.>create();
 prepareIndexMutations(context, maintainers, now);
@@ -713,6 +706,9 @@ public class IndexRegionObserver extends BaseRegionObserver {
 context.preIndexUpdates = ArrayListMultimap.create();
 int updateCount = 0;
 for (IndexMaintainer indexMaintainer : maintainers) {
+if (indexMaintain

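The Javadoc fragment in the diff above describes Phoenix's three-phase global index write: index rows are first persisted with unverified status, then the data table mutations are applied, and finally each index row is either marked verified or deleted. A toy, in-memory model of that ordering (all names are illustrative; this is not the Phoenix implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class ThreePhaseSketch {
    enum Status { UNVERIFIED, VERIFIED }

    static Map<String, Status> indexRows = new HashMap<>();

    // Phase 1: write the index row first, marked unverified.
    static void phase1(String row) { indexRows.put(row, Status.UNVERIFIED); }

    // Phase 2: apply the data table mutation; returns false on failure.
    static boolean phase2(boolean dataWriteSucceeds) { return dataWriteSucceeds; }

    // Phase 3: verify the index row on success, delete it on failure, so a
    // reader never trusts an index row whose data row may be missing.
    static void phase3(String row, boolean dataWritten) {
        if (dataWritten) indexRows.put(row, Status.VERIFIED);
        else indexRows.remove(row);
    }

    public static void main(String[] args) {
        phase1("a"); phase3("a", phase2(true));   // normal write
        phase1("x"); phase3("x", phase2(false));  // simulated data-write failure
        System.out.println(indexRows);            // only "a" survives, verified
    }
}
```

The fix in this commit moves local index handling out of this global three-phase path, since local index rows live in the same region as the data and do not need the unverified/verified dance.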
[phoenix] branch master updated: Local indexes get out of sync after changes for global consistent indexes.

2020-08-22 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new dbc98ac  Local indexes get out of sync after changes for global 
consistent indexes.
dbc98ac is described below

commit dbc98acd1f09d4d8c360a84f9d126b4e03a73fe0
Author: Lars 
AuthorDate: Sat Aug 22 10:50:53 2020 -0700

Local indexes get out of sync after changes for global consistent indexes.
---
 .../apache/phoenix/end2end/index/LocalIndexIT.java | 33 ++
 .../phoenix/hbase/index/IndexRegionObserver.java   | 70 --
 2 files changed, 71 insertions(+), 32 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
index 724da6e..0965ce1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
@@ -103,6 +103,39 @@ public class LocalIndexIT extends BaseLocalIndexIT {
 }
 
 @Test
+public void testLocalIndexConsistency() throws Exception {
+if (isNamespaceMapped) {
+return;
+}
+String tableName = schemaName + "." + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+
+Connection conn = getConnection();
+conn.setAutoCommit(true);
+
+conn.createStatement().execute("CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, v1 FLOAT) SPLIT ON (2000)");
+conn.createStatement().execute("CREATE LOCAL INDEX " + indexName + " ON " + tableName + "(v1)");
+conn.createStatement().execute("UPSERT INTO " + tableName + " VALUES(rand() * 4000, rand())");
+
+ResultSet rs;
+for (int i=0; i<15; i++) {
+conn.createStatement().execute("UPSERT INTO " + tableName + " SELECT rand() * 4000, rand() FROM " + tableName);
+
+rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM " + tableName);
+rs.next();
+int indexCount = rs.getInt(1);
+rs.close();
+
+rs = conn.createStatement().executeQuery("SELECT /*+ NO_INDEX */ COUNT(*) FROM " + tableName);
+rs.next();
+int tableCount = rs.getInt(1);
+rs.close();
+
+assertEquals(indexCount, tableCount);
+}
+}
+
+@Test
 public void testUseUncoveredLocalIndexWithPrefix() throws Exception {
 String tableName = schemaName + "." + generateUniqueName();
 String indexName = "IDX_" + generateUniqueName();
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
index bfeadcb..2d0cf51 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
@@ -702,7 +702,7 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
  * unverified status. In phase 2, data table mutations are applied. In phase 3, the status for an index table row is
  * either set to "verified" or the row is deleted.
  */
-private void preparePreIndexMutations(ObserverContext c,
+private boolean preparePreIndexMutations(ObserverContext c,
   MiniBatchOperationInProgress miniBatchOp,
   BatchMutateContext context,
   Collection pendingMutations,
@@ -716,13 +716,6 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
 current = NullSpan.INSTANCE;
 }
 current.addTimelineAnnotation("Built index updates, doing preStep");
-// Handle local index updates
-for (IndexMaintainer indexMaintainer : maintainers) {
-if (indexMaintainer.isLocalIndex()) {
-handleLocalIndexUpdates(c, miniBatchOp, pendingMutations, indexMetaData);
-break;
-}
-}
 // The rest of this method is for handling global index updates
 context.indexUpdates = ArrayListMultimap.>create();
 prepareIndexMutations(context, maintainers, now);
@@ -730,6 +723,9 @@ public class IndexRegionObserver implements RegionObserver, RegionCoprocessor {
 context.preIndexUpdates = ArrayListMultimap.create();
 int updateCount = 0;
 for (IndexM

[phoenix] branch 4.x updated: PHOENIX-6000 Client side DELETEs should use local indexes for filtering.

2020-07-16 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x by this push:
 new fd22478  PHOENIX-6000 Client side DELETEs should use local indexes for 
filtering.
fd22478 is described below

commit fd22478d522ce5dcfc43c67696018a360e043d6c
Author: Lars 
AuthorDate: Thu Jul 16 10:27:18 2020 -0700

PHOENIX-6000 Client side DELETEs should use local indexes for filtering.
---
 .../end2end/index/GlobalIndexOptimizationIT.java   | 55 --
 .../org/apache/phoenix/compile/DeleteCompiler.java | 22 +
 2 files changed, 53 insertions(+), 24 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java
index 5c2558e..0d0556b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java
@@ -50,14 +50,61 @@ public class GlobalIndexOptimizationIT extends ParallelStatsDisabledIT {
 conn.close();
 }
 
-private void createIndex(String indexName, String tableName, String columns) throws SQLException {
+private void createIndex(String indexName, String tableName, String columns, String includes, boolean local) throws SQLException {
 Connection conn = DriverManager.getConnection(getUrl());
-String ddl = "CREATE INDEX " + indexName + " ON " + tableName + " (" + columns + ")";
+String ddl = "CREATE " + (local ? "LOCAL " : "") + "INDEX " + indexName + " ON " + tableName + " (" + columns + ")" + (includes != null ? " INCLUDE (" + includes + ")" : "");
 conn.createStatement().execute(ddl);
 conn.close();
 }
 
 @Test
+public void testIndexDeleteOptimizationWithLocalIndex() throws Exception {
+String dataTableName = generateUniqueName();
+String indexTableName = generateUniqueName();
+createBaseTable(dataTableName, null, null, false);
+// create a local index that only covers k3
+createIndex(indexTableName+"L", dataTableName, "k3", null, true);
+// create a global index covering v1, and k3
+createIndex(indexTableName+"G", dataTableName, "v1", "k3", false);
+
+String query = "DELETE FROM " + dataTableName + " where k3 < 100";
+try (Connection conn1 = DriverManager.getConnection(getUrl())) {
+conn1.createStatement().execute("UPSERT INTO " + dataTableName + " values(TO_CHAR(rand()*100),rand()*1,rand()*1,rand()*1,TO_CHAR(rand()*100))");
+for (int i=0; i<16; i++) {
+conn1.createStatement().execute("UPSERT INTO " + dataTableName + " SELECT TO_CHAR(rand()*100),rand()*1,rand()*1,rand()*1,TO_CHAR(rand()*100) FROM " + dataTableName);
+}
+ResultSet rs = conn1.createStatement().executeQuery("EXPLAIN "+ query);
+String expected =
+"DELETE ROWS CLIENT SELECT\n" +
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + dataTableName +" [1,*] - [1,100]\n" +
+"SERVER FILTER BY FIRST KEY ONLY\n" +
+"CLIENT MERGE SORT";
+String actual = QueryUtil.getExplainPlan(rs);
+assertEquals(expected, actual);
+rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + dataTableName);
+rs.next();
+int count = rs.getInt(1);
+int deleted = conn1.createStatement().executeUpdate(query);
+int expectedCount = count - deleted;
+
+rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + dataTableName);
+rs.next();
+count = rs.getInt(1);
+assertEquals(expectedCount, count);
+
+rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + indexTableName+"L");
+rs.next();
+count = rs.getInt(1);
+assertEquals(expectedCount, count);
+
+rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + indexTableName+"G");
+rs.next();
+count = rs.getInt(1);
+assertEquals(expectedCount, count);
+}
+}
+
+@Test
 public void testGlobalIndexOptimization() throws Exception {
 String dataTableName = generateUniqueName();
 
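The reworked createIndex helper in the diff above assembles its DDL from the LOCAL flag and an optional INCLUDE list. The same string logic, extracted as a standalone sketch (class name illustrative):

```java
public class IndexDdlSketch {
    // Same concatenation as the test helper: optional LOCAL keyword and
    // optional INCLUDE list of covered columns.
    static String createIndexDdl(String indexName, String tableName,
                                 String columns, String includes, boolean local) {
        return "CREATE " + (local ? "LOCAL " : "") + "INDEX " + indexName
                + " ON " + tableName + " (" + columns + ")"
                + (includes != null ? " INCLUDE (" + includes + ")" : "");
    }

    public static void main(String[] args) {
        System.out.println(createIndexDdl("IDX_L", "T", "k3", null, true));
        System.out.println(createIndexDdl("IDX_G", "T", "v1", "k3", false));
    }
}
```

This is how the test builds both the local index on k3 and the global index on v1 covering k3 with a single helper.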

[phoenix] branch master updated: PHOENIX-6000 Client side DELETEs should use local indexes for filtering.

2020-07-16 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 02c9d5d  PHOENIX-6000 Client side DELETEs should use local indexes for 
filtering.
02c9d5d is described below

commit 02c9d5d15be727efaba8ce922857b4ba8d6f129b
Author: Lars 
AuthorDate: Thu Jul 16 10:26:31 2020 -0700

PHOENIX-6000 Client side DELETEs should use local indexes for filtering.
---
 .../end2end/index/GlobalIndexOptimizationIT.java   | 55 --
 .../org/apache/phoenix/compile/DeleteCompiler.java | 22 +
 ...eleteCompiler.java => DeleteCompiler.java.orig} |  0
 3 files changed, 53 insertions(+), 24 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java
index 5c2558e..0d0556b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalIndexOptimizationIT.java
@@ -50,14 +50,61 @@ public class GlobalIndexOptimizationIT extends ParallelStatsDisabledIT {
 conn.close();
 }
 
-private void createIndex(String indexName, String tableName, String columns) throws SQLException {
+private void createIndex(String indexName, String tableName, String columns, String includes, boolean local) throws SQLException {
 Connection conn = DriverManager.getConnection(getUrl());
-String ddl = "CREATE INDEX " + indexName + " ON " + tableName + " (" + columns + ")";
+String ddl = "CREATE " + (local ? "LOCAL " : "") + "INDEX " + indexName + " ON " + tableName + " (" + columns + ")" + (includes != null ? " INCLUDE (" + includes + ")" : "");
 conn.createStatement().execute(ddl);
 conn.close();
 }
 
 @Test
+public void testIndexDeleteOptimizationWithLocalIndex() throws Exception {
+String dataTableName = generateUniqueName();
+String indexTableName = generateUniqueName();
+createBaseTable(dataTableName, null, null, false);
+// create a local index that only covers k3
+createIndex(indexTableName+"L", dataTableName, "k3", null, true);
+// create a global index covering v1, and k3
+createIndex(indexTableName+"G", dataTableName, "v1", "k3", false);
+
+String query = "DELETE FROM " + dataTableName + " where k3 < 100";
+try (Connection conn1 = DriverManager.getConnection(getUrl())) {
+conn1.createStatement().execute("UPSERT INTO " + dataTableName + " values(TO_CHAR(rand()*100),rand()*1,rand()*1,rand()*1,TO_CHAR(rand()*100))");
+for (int i=0; i<16; i++) {
+conn1.createStatement().execute("UPSERT INTO " + dataTableName + " SELECT TO_CHAR(rand()*100),rand()*1,rand()*1,rand()*1,TO_CHAR(rand()*100) FROM " + dataTableName);
+}
+ResultSet rs = conn1.createStatement().executeQuery("EXPLAIN "+ query);
+String expected =
+"DELETE ROWS CLIENT SELECT\n" +
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + dataTableName +" [1,*] - [1,100]\n" +
+"SERVER FILTER BY FIRST KEY ONLY\n" +
+"CLIENT MERGE SORT";
+String actual = QueryUtil.getExplainPlan(rs);
+assertEquals(expected, actual);
+rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + dataTableName);
+rs.next();
+int count = rs.getInt(1);
+int deleted = conn1.createStatement().executeUpdate(query);
+int expectedCount = count - deleted;
+
+rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + dataTableName);
+rs.next();
+count = rs.getInt(1);
+assertEquals(expectedCount, count);
+
+rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + indexTableName+"L");
+rs.next();
+count = rs.getInt(1);
+assertEquals(expectedCount, count);
+
+rs = conn1.createStatement().executeQuery("SELECT COUNT(*) FROM " + indexTableName+"G");
+rs.next();
+count = rs.getInt(1);
+assertEquals(expectedCount, count);
+}
+}
+
+@Test
 public void testGlobalIndexOptimization() throws Exception {

[phoenix] branch master updated: PHOENIX-5096 Local index region pruning is not working as expected.

2019-12-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 356f4cd  PHOENIX-5096 Local index region pruning is not working as 
expected.
356f4cd is described below

commit 356f4cd7d43bccb9538a5a2b94863b1c52cd9aad
Author: Lars Hofhansl 
AuthorDate: Tue Dec 24 06:26:39 2019 -0800

PHOENIX-5096 Local index region pruning is not working as expected.
---
 .../phoenix/iterate/BaseResultIterators.java   |  9 
 .../apache/phoenix/compile/QueryCompilerTest.java  | 60 ++
 2 files changed, 69 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
index 8fd368a..12a6b3a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
@@ -1023,6 +1023,15 @@ public abstract class BaseResultIterators extends ExplainTable implements Result
 endKey = regionBoundaries.get(regionIndex);
 }
 if (isLocalIndex) {
+if (dataPlan != null && dataPlan.getTableRef().getTable().getType() != PTableType.INDEX) { // Sanity check
+ScanRanges dataScanRanges = dataPlan.getContext().getScanRanges();
+// we can skip a region completely for local indexes if the data plan does not intersect
+if (!dataScanRanges.intersectRegion(regionInfo.getStartKey(), regionInfo.getEndKey(), false)) {
+currentKeyBytes = endKey;
+regionIndex++;
+continue;
+}
+}
 // Only attempt further pruning if the prefix range is using
 // a skip scan since we've already pruned the range of regions
 // based on the start/stop key.
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
index 31369be..c4c47e7 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
@@ -4877,6 +4877,66 @@ public class QueryCompilerTest extends BaseConnectionlessQueryTest {
 }
 
 @Test
+public void testLocalIndexRegionPruning() throws SQLException {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE T (\n" +
+"A CHAR(1) NOT NULL,\n" +
+"B CHAR(1) NOT NULL,\n" +
+"C CHAR(1) NOT NULL,\n" +
+"D CHAR(1),\n" +
+"CONSTRAINT PK PRIMARY KEY (\n" +
+"A,\n" +
+"B,\n" +
+"C\n" +
+")\n" +
+") SPLIT ON ('A','C','E','G','I')");
+
+conn.createStatement().execute("CREATE LOCAL INDEX IDX ON T(D)");
+
+// un-pruned, need to scan all six regions
+String query = "SELECT * FROM T WHERE D = 'C'";
+PhoenixStatement statement = conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
+plan.iterator();
+assertEquals(6, plan.getScans().size());
+
+// fixing first part of the key, can limit scanning to two regions
+query = "SELECT * FROM T WHERE A = 'A' AND D = 'C'";
+statement = conn.createStatement().unwrap(PhoenixStatement.class);
+plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
+plan.iterator();
+assertEquals(2, plan.getScans().size());
+
+// same with skipscan filter
+query = "SELECT * FROM T WHERE A IN ('A', 'C') AND D = 'C'";
+statement = conn.createStatement().unwrap(PhoenixStatement.class);
+plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5096 Local index region pruning is not working as expected.

2019-12-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 214b487  PHOENIX-5096 Local index region pruning is not working as 
expected.
214b487 is described below

commit 214b487cbe87ab8571b4a310953799e25568f1d5
Author: Lars Hofhansl 
AuthorDate: Tue Dec 24 06:26:39 2019 -0800

PHOENIX-5096 Local index region pruning is not working as expected.
---
 .../phoenix/iterate/BaseResultIterators.java   |  9 
 .../apache/phoenix/compile/QueryCompilerTest.java  | 60 ++
 2 files changed, 69 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
index 45b4d4d..2dcc88b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
@@ -1023,6 +1023,15 @@ public abstract class BaseResultIterators extends ExplainTable implements Result
 endKey = regionBoundaries.get(regionIndex);
 }
 if (isLocalIndex) {
+if (dataPlan != null && dataPlan.getTableRef().getTable().getType() != PTableType.INDEX) { // Sanity check
+ScanRanges dataScanRanges = dataPlan.getContext().getScanRanges();
+// we can skip a region completely for local indexes if the data plan does not intersect
+if (!dataScanRanges.intersectRegion(regionInfo.getStartKey(), regionInfo.getEndKey(), false)) {
+currentKeyBytes = endKey;
+regionIndex++;
+continue;
+}
+}
 // Only attempt further pruning if the prefix range is using
 // a skip scan since we've already pruned the range of regions
 // based on the start/stop key.
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
index 3dca5b6..f72c3f6 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
@@ -4877,6 +4877,66 @@ public class QueryCompilerTest extends BaseConnectionlessQueryTest {
 }
 
 @Test
+public void testLocalIndexRegionPruning() throws SQLException {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE T (\n" +
+"A CHAR(1) NOT NULL,\n" +
+"B CHAR(1) NOT NULL,\n" +
+"C CHAR(1) NOT NULL,\n" +
+"D CHAR(1),\n" +
+"CONSTRAINT PK PRIMARY KEY (\n" +
+"A,\n" +
+"B,\n" +
+"C\n" +
+")\n" +
+") SPLIT ON ('A','C','E','G','I')");
+
+conn.createStatement().execute("CREATE LOCAL INDEX IDX ON T(D)");
+
+// un-pruned, need to scan all six regions
+String query = "SELECT * FROM T WHERE D = 'C'";
+PhoenixStatement statement = conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
+plan.iterator();
+assertEquals(6, plan.getScans().size());
+
+// fixing first part of the key, can limit scanning to two regions
+query = "SELECT * FROM T WHERE A = 'A' AND D = 'C'";
+statement = conn.createStatement().unwrap(PhoenixStatement.class);
+plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
+plan.iterator();
+assertEquals(2, plan.getScans().size());
+
+// same with skipscan filter
+query = "SELECT * FROM T WHERE A IN ('A', 'C') AND D = 'C'";
+statement = conn.createStatement().unwrap(PhoenixStatement.class);
+plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTab

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5096 Local index region pruning is not working as expected.

2019-12-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 2f9ad87  PHOENIX-5096 Local index region pruning is not working as 
expected.
2f9ad87 is described below

commit 2f9ad87e1c7d3ad879af490f2546a596516d2ccb
Author: Lars Hofhansl 
AuthorDate: Tue Dec 24 06:26:39 2019 -0800

PHOENIX-5096 Local index region pruning is not working as expected.
---
 .../phoenix/iterate/BaseResultIterators.java   |  9 
 .../apache/phoenix/compile/QueryCompilerTest.java  | 60 ++
 2 files changed, 69 insertions(+)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
index 45b4d4d..2dcc88b 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
@@ -1023,6 +1023,15 @@ public abstract class BaseResultIterators extends ExplainTable implements Result
 endKey = regionBoundaries.get(regionIndex);
 }
 if (isLocalIndex) {
+if (dataPlan != null && dataPlan.getTableRef().getTable().getType() != PTableType.INDEX) { // Sanity check
+ScanRanges dataScanRanges = dataPlan.getContext().getScanRanges();
+// we can skip a region completely for local indexes if the data plan does not intersect
+if (!dataScanRanges.intersectRegion(regionInfo.getStartKey(), regionInfo.getEndKey(), false)) {
+currentKeyBytes = endKey;
+regionIndex++;
+continue;
+}
+}
 // Only attempt further pruning if the prefix range is using
 // a skip scan since we've already pruned the range of regions
 // based on the start/stop key.
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
index e6337fa..b49aaf8 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
@@ -4870,6 +4870,66 @@ public class QueryCompilerTest extends BaseConnectionlessQueryTest {
 }
 
 @Test
+public void testLocalIndexRegionPruning() throws SQLException {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE T (\n" + 
+"A CHAR(1) NOT NULL,\n" + 
+"B CHAR(1) NOT NULL,\n" + 
+"C CHAR(1) NOT NULL,\n" + 
+"D CHAR(1),\n" + 
+"CONSTRAINT PK PRIMARY KEY (\n" + 
+"A,\n" + 
+"B,\n" + 
+"C\n" + 
+")\n" + 
+") SPLIT ON ('A','C','E','G','I')");
+
+conn.createStatement().execute("CREATE LOCAL INDEX IDX ON T(D)");
+
+// un-pruned, need to scan all six regions
+String query = "SELECT * FROM T WHERE D = 'C'";
+PhoenixStatement statement = conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
+plan.iterator();
+assertEquals(6, plan.getScans().size());
+
+// fixing first part of the key, can limit scanning to two regions
+query = "SELECT * FROM T WHERE A = 'A' AND D = 'C'";
+statement = conn.createStatement().unwrap(PhoenixStatement.class);
+plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
+plan.iterator();
+assertEquals(2, plan.getScans().size());
+
+// same with skipscan filter
+query = "SELECT * FROM T WHERE A IN ('A', 'C') AND D = 'C'";
+statement = conn.createStatement().unwrap(PhoenixStatement.class);
+plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
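The new `dataScanRanges.intersectRegion(...)` check in the hunk above skips a region entirely when the data plan's key range cannot overlap it. A minimal sketch of that interval test, assuming half-open regions `[startKey, endKey)` with empty byte arrays meaning "unbounded" on that side (as in HBase region boundaries); `intersects` and its argument layout are illustrative, not the Phoenix API:

```java
public class RegionIntersectSketch {
    // Unsigned lexicographic byte comparison (HBase row-key order).
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Does the scan range [lower, upper) overlap the region [startKey, endKey)?
    // An empty key is unbounded on that side.
    static boolean intersects(byte[] lower, byte[] upper, byte[] startKey, byte[] endKey) {
        boolean startsBeforeRegionEnd = endKey.length == 0 || lower.length == 0
                || compare(lower, endKey) < 0;
        boolean endsAfterRegionStart = startKey.length == 0 || upper.length == 0
                || compare(upper, startKey) > 0;
        return startsBeforeRegionEnd && endsAfterRegionStart;
    }

    public static void main(String[] args) {
        byte[] lower = {'A'}, upper = {'B'}; // e.g. a data plan fixing A = 'A'
        // Region ['A','C') overlaps and must be scanned; ['C','E') cannot and is skipped.
        System.out.println(intersects(lower, upper, new byte[]{'A'}, new byte[]{'C'})); // true
        System.out.println(intersects(lower, upper, new byte[]{'C'}, new byte[]{'E'})); // false
    }
}
```

A non-intersecting region is exactly the case where the committed code advances `currentKeyBytes` and `regionIndex` and continues without emitting a scan.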

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5096 Local index region pruning is not working as expected.

2019-12-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 1620c7d  PHOENIX-5096 Local index region pruning is not working as expected.
1620c7d is described below

commit 1620c7d7f6b8ce8ccf38e27e6bff1182ec6f7985
Author: Lars Hofhansl 
AuthorDate: Tue Dec 24 06:26:39 2019 -0800

PHOENIX-5096 Local index region pruning is not working as expected.
---
 .../phoenix/iterate/BaseResultIterators.java   |  9 
 .../apache/phoenix/compile/QueryCompilerTest.java  | 60 ++
 2 files changed, 69 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
index 45b4d4d..2dcc88b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
@@ -1023,6 +1023,15 @@ public abstract class BaseResultIterators extends ExplainTable implements Result
 endKey = regionBoundaries.get(regionIndex);
 }
 if (isLocalIndex) {
+if (dataPlan != null && dataPlan.getTableRef().getTable().getType() != PTableType.INDEX) { // Sanity check
+ScanRanges dataScanRanges = dataPlan.getContext().getScanRanges();
+// we can skip a region completely for local indexes if the data plan does not intersect
+if (!dataScanRanges.intersectRegion(regionInfo.getStartKey(), regionInfo.getEndKey(), false)) {
+currentKeyBytes = endKey;
+regionIndex++;
+continue;
+}
+}
 // Only attempt further pruning if the prefix range is using
 // a skip scan since we've already pruned the range of regions
 // based on the start/stop key.
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
index e6337fa..b49aaf8 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
@@ -4870,6 +4870,66 @@ public class QueryCompilerTest extends BaseConnectionlessQueryTest {
 }
 
 @Test
+public void testLocalIndexRegionPruning() throws SQLException {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.createStatement().execute("CREATE TABLE T (\n" + 
+"A CHAR(1) NOT NULL,\n" + 
+"B CHAR(1) NOT NULL,\n" + 
+"C CHAR(1) NOT NULL,\n" + 
+"D CHAR(1),\n" + 
+"CONSTRAINT PK PRIMARY KEY (\n" + 
+"A,\n" + 
+"B,\n" + 
+"C\n" + 
+")\n" + 
+") SPLIT ON ('A','C','E','G','I')");
+
+conn.createStatement().execute("CREATE LOCAL INDEX IDX ON T(D)");
+
+// un-pruned, need to scan all six regions
+String query = "SELECT * FROM T WHERE D = 'C'";
+PhoenixStatement statement = conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
+plan.iterator();
+assertEquals(6, plan.getScans().size());
+
+// fixing first part of the key, can limit scanning to two regions
+query = "SELECT * FROM T WHERE A = 'A' AND D = 'C'";
+statement = conn.createStatement().unwrap(PhoenixStatement.class);
+plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());
+plan.iterator();
+assertEquals(2, plan.getScans().size());
+
+// same with skipscan filter
+query = "SELECT * FROM T WHERE A IN ('A', 'C') AND D = 'C'";
+statement = conn.createStatement().unwrap(PhoenixStatement.class);
+plan = statement.optimizeQuery(query);
+assertEquals("IDX", plan.getContext().getCurrentTable().getTable().getName().getString());

[phoenix] branch master updated: PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

2019-12-22 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 25cc076  PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
25cc076 is described below

commit 25cc0764bc7c69511df61e777f851b963f4798fb
Author: Lars Hofhansl 
AuthorDate: Sun Dec 22 04:27:50 2019 -0800

PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

This reverts commit 4ff6da4a00941052e5c81d79ed21b0aca9e49c44.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 ++
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 5f9dc9a..b320446 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,44 +310,40 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testToolWithIncorrectTables() throws Exception {
+public void testDryRunAndFailures() throws Exception {
 validate(true);
+
+// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
-}
 
-@Test
-public void testToolWithNoIndex() throws Exception {
-if (!upgrade || isNamespaceEnabled) {
-return;
-}
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-int status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
-
-@Test
-public void testToolWithInputFileParameter() throws Exception {
+// test with input file parameter
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
-validate(true);
-
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-iut.executeTool();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
 
 validate(true);
+
+// test table without index
+if (upgrade && !isNamespaceEnabled) {
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
 }
 
 @After
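The diff above folds `testToolWithNoIndex` and `testToolWithInputFileParameter` into `testDryRunAndFailures`, so each parameter combination of this parameterized IT pays for one expensive setup instead of three. A back-of-the-envelope cost model of that saving (the combination count of 8 is purely illustrative; the per-method setup cost reflects the JUnit pattern of running `@Before` once per `@Test` per combination):

```java
public class SetupCostSketch {
    // A parameterized JUnit IT runs its per-test setup once per
    // (parameter combination, @Test method) pair.
    static int setups(int paramCombinations, int testMethods) {
        return paramCombinations * testMethods;
    }

    public static void main(String[] args) {
        int combos = 8; // illustrative parameter-combination count
        System.out.println(setups(combos, 3)); // before: three @Test methods -> 24 setups
        System.out.println(setups(combos, 1)); // after the merge -> 8 setups
    }
}
```

The trade-off is the usual one for merged tests: fewer setups, but a failure in an early scenario now masks the later ones in the same method.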



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

2019-12-22 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 71fdb77  PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
71fdb77 is described below

commit 71fdb77d0143eb9db0f04da96576830a6e0e8e02
Author: Lars Hofhansl 
AuthorDate: Sun Dec 22 04:27:16 2019 -0800

PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

This reverts commit 0c5f0d6d308b3c9be9537e9a0915f0ee19f2271c.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 ++
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 16f99e3..5a2cef9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,44 +310,40 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testToolWithIncorrectTables() throws Exception {
+public void testDryRunAndFailures() throws Exception {
 validate(true);
+
+// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
-}
 
-@Test
-public void testToolWithNoIndex() throws Exception {
-if (!upgrade || isNamespaceEnabled) {
-return;
-}
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-int status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
-
-@Test
-public void testToolWithInputFileParameter() throws Exception {
+// test with input file parameter
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
-validate(true);
-
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-iut.executeTool();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
 
 validate(true);
+
+// test table without index
+if (upgrade && !isNamespaceEnabled) {
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
 }
 
 @After



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

2019-12-22 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 80c912e  PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
80c912e is described below

commit 80c912e1f88ca6bd39e7c2f3dcee1e2d089535dc
Author: Lars Hofhansl 
AuthorDate: Sun Dec 22 04:26:41 2019 -0800

PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

This reverts commit 95fd8e0d1abbb763f59e30d569b9c002f7253ada.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 ++
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 16f99e3..5a2cef9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,44 +310,40 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testToolWithIncorrectTables() throws Exception {
+public void testDryRunAndFailures() throws Exception {
 validate(true);
+
+// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
-}
 
-@Test
-public void testToolWithNoIndex() throws Exception {
-if (!upgrade || isNamespaceEnabled) {
-return;
-}
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-int status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
-
-@Test
-public void testToolWithInputFileParameter() throws Exception {
+// test with input file parameter
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
-validate(true);
-
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-iut.executeTool();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
 
 validate(true);
+
+// test table without index
+if (upgrade && !isNamespaceEnabled) {
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
 }
 
 @After



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

2019-12-22 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 9a948d0  PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
9a948d0 is described below

commit 9a948d0cd654045d68f102c9d3e0f527c7b77b5d
Author: Lars Hofhansl 
AuthorDate: Sun Dec 22 04:25:45 2019 -0800

PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

This reverts commit 02d5935cbbd75ad2491413042e5010bb76ed57c8.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 ++
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 16f99e3..5a2cef9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,44 +310,40 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testToolWithIncorrectTables() throws Exception {
+public void testDryRunAndFailures() throws Exception {
 validate(true);
+
+// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
-}
 
-@Test
-public void testToolWithNoIndex() throws Exception {
-if (!upgrade || isNamespaceEnabled) {
-return;
-}
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-int status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
-
-@Test
-public void testToolWithInputFileParameter() throws Exception {
+// test with input file parameter
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
-validate(true);
-
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-iut.executeTool();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
 
 validate(true);
+
+// test table without index
+if (upgrade && !isNamespaceEnabled) {
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
 }
 
 @After



[phoenix] branch master updated: Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."

2019-12-21 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 4ff6da4  Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."
4ff6da4 is described below

commit 4ff6da4a00941052e5c81d79ed21b0aca9e49c44
Author: Lars Hofhansl 
AuthorDate: Sat Dec 21 07:47:40 2019 -0800

Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."

This reverts commit 9cd873492d4047d20c09259bedcd6df91348a08a.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 --
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index b320446..5f9dc9a 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,40 +310,44 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testDryRunAndFailures() throws Exception {
+public void testToolWithIncorrectTables() throws Exception {
 validate(true);
-
-// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
+}
 
-// test with input file parameter
+@Test
+public void testToolWithNoIndex() throws Exception {
+if (!upgrade || isNamespaceEnabled) {
+return;
+}
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+int status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
+
+@Test
+public void testToolWithInputFileParameter() throws Exception {
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
+validate(true);
+
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-status = iut.executeTool();
-Assert.assertEquals(0, status);
+iut.executeTool();
 
 validate(true);
-
-// test table without index
-if (upgrade && !isNamespaceEnabled) {
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
 }
 
 @After



[phoenix] branch 4.x-HBase-1.3 updated: Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."

2019-12-21 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 0c5f0d6  Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."
0c5f0d6 is described below

commit 0c5f0d6d308b3c9be9537e9a0915f0ee19f2271c
Author: Lars Hofhansl 
AuthorDate: Sat Dec 21 07:47:03 2019 -0800

Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."

This reverts commit 53c0089a29a33124b0a0be4c5315995ace2c70fd.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 --
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 5a2cef9..16f99e3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,40 +310,44 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testDryRunAndFailures() throws Exception {
+public void testToolWithIncorrectTables() throws Exception {
 validate(true);
-
-// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
+}
 
-// test with input file parameter
+@Test
+public void testToolWithNoIndex() throws Exception {
+if (!upgrade || isNamespaceEnabled) {
+return;
+}
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+int status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
+
+@Test
+public void testToolWithInputFileParameter() throws Exception {
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
+validate(true);
+
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-status = iut.executeTool();
-Assert.assertEquals(0, status);
+iut.executeTool();
 
 validate(true);
-
-// test table without index
-if (upgrade && !isNamespaceEnabled) {
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
 }
 
 @After



[phoenix] branch 4.x-HBase-1.4 updated: Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."

2019-12-21 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 95fd8e0  Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."
95fd8e0 is described below

commit 95fd8e0d1abbb763f59e30d569b9c002f7253ada
Author: Lars Hofhansl 
AuthorDate: Sat Dec 21 07:46:32 2019 -0800

Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."

This reverts commit adc58977c9e1069345c82f838b39083fa3fb6e4a.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 --
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 5a2cef9..16f99e3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,40 +310,44 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testDryRunAndFailures() throws Exception {
+public void testToolWithIncorrectTables() throws Exception {
 validate(true);
-
-// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
+}
 
-// test with input file parameter
+@Test
+public void testToolWithNoIndex() throws Exception {
+if (!upgrade || isNamespaceEnabled) {
+return;
+}
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+int status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
+
+@Test
+public void testToolWithInputFileParameter() throws Exception {
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
+validate(true);
+
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-status = iut.executeTool();
-Assert.assertEquals(0, status);
+iut.executeTool();
 
 validate(true);
-
-// test table without index
-if (upgrade && !isNamespaceEnabled) {
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
 }
 
 @After



[phoenix] branch 4.x-HBase-1.5 updated: Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."

2019-12-21 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 02d5935  Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."
02d5935 is described below

commit 02d5935cbbd75ad2491413042e5010bb76ed57c8
Author: Lars Hofhansl 
AuthorDate: Sat Dec 21 07:45:50 2019 -0800

Revert "PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT."

This reverts commit 3d8b3f042ba9357ef6e1e047156839aa5513f05e.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 --
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 5a2cef9..16f99e3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,40 +310,44 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testDryRunAndFailures() throws Exception {
+public void testToolWithIncorrectTables() throws Exception {
 validate(true);
-
-// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
+}
 
-// test with input file parameter
+@Test
+public void testToolWithNoIndex() throws Exception {
+if (!upgrade || isNamespaceEnabled) {
+return;
+}
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+int status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
+
+@Test
+public void testToolWithInputFileParameter() throws Exception {
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
+validate(true);
+
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-status = iut.executeTool();
-Assert.assertEquals(0, status);
+iut.executeTool();
 
 validate(true);
-
-// test table without index
-if (upgrade && !isNamespaceEnabled) {
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id 
bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address 
varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
 }
 
 @After



[phoenix] branch master updated: PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

2019-12-19 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 9cd8734  PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
9cd8734 is described below

commit 9cd873492d4047d20c09259bedcd6df91348a08a
Author: Lars Hofhansl 
AuthorDate: Thu Dec 19 01:02:05 2019 -0800

PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 ++
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 5f9dc9a..b320446 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,44 +310,40 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testToolWithIncorrectTables() throws Exception {
+public void testDryRunAndFailures() throws Exception {
 validate(true);
+
+// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
-}
 
-@Test
-public void testToolWithNoIndex() throws Exception {
-if (!upgrade || isNamespaceEnabled) {
-return;
-}
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-int status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
-
-@Test
-public void testToolWithInputFileParameter() throws Exception {
+// test with input file parameter
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
-validate(true);
-
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-iut.executeTool();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
 
 validate(true);
+
+// test table without index
+if (upgrade && !isNamespaceEnabled) {
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
 }
 
 @After
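For reference, the speed-up above folds three separate @Test methods into one sequential run of the tool. A minimal stand-alone sketch of that consolidated control flow (the executeTool stub and table names below are illustrative stand-ins, not the real IndexUpgradeTool API or the actual INPUT_LIST contents):

```java
import java.util.Arrays;
import java.util.List;

public class ConsolidatedFlowSketch {
    // Stand-in for IndexUpgradeTool.executeTool(): the real tool returns -1
    // when an input table does not exist and 0 on success.
    static int executeTool(String inputTables) {
        return "TEST3.TABLE_NOT_PRESENT".equals(inputTables) ? -1 : 0;
    }

    public static void main(String[] args) {
        // 1. incorrect table: the tool must fail without changing anything
        int incorrect = executeTool("TEST3.TABLE_NOT_PRESENT");
        // 2. tables supplied via an input file: the tool must succeed
        int fromFile = executeTool("TEST.MOCK1,TEST1.MOCK2");
        // 3. a table without an index (upgrade, non-namespace case only)
        int noIndex = executeTool("TEST.NEW_TABLE");
        List<Integer> statuses = Arrays.asList(incorrect, fromFile, noIndex);
        System.out.println(statuses); // prints [-1, 0, 0]
    }
}
```

Running the three scenarios in one method avoids paying the mini-cluster setup cost once per test, which is where the speed-up comes from.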



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

2019-12-19 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 53c0089  PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
53c0089 is described below

commit 53c0089a29a33124b0a0be4c5315995ace2c70fd
Author: Lars Hofhansl 
AuthorDate: Thu Dec 19 01:02:05 2019 -0800

PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 ++
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 16f99e3..5a2cef9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,44 +310,40 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testToolWithIncorrectTables() throws Exception {
+public void testDryRunAndFailures() throws Exception {
 validate(true);
+
+// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
-}
 
-@Test
-public void testToolWithNoIndex() throws Exception {
-if (!upgrade || isNamespaceEnabled) {
-return;
-}
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-int status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
-
-@Test
-public void testToolWithInputFileParameter() throws Exception {
+// test with input file parameter
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
-validate(true);
-
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-iut.executeTool();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
 
 validate(true);
+
+// test table without index
+if (upgrade && !isNamespaceEnabled) {
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
 }
 
 @After



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

2019-12-19 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 3d8b3f0  PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
3d8b3f0 is described below

commit 3d8b3f042ba9357ef6e1e047156839aa5513f05e
Author: Lars Hofhansl 
AuthorDate: Thu Dec 19 01:02:05 2019 -0800

PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 ++
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 16f99e3..5a2cef9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,44 +310,40 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testToolWithIncorrectTables() throws Exception {
+public void testDryRunAndFailures() throws Exception {
 validate(true);
+
+// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
-}
 
-@Test
-public void testToolWithNoIndex() throws Exception {
-if (!upgrade || isNamespaceEnabled) {
-return;
-}
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-int status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
-
-@Test
-public void testToolWithInputFileParameter() throws Exception {
+// test with input file parameter
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
-validate(true);
-
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-iut.executeTool();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
 
 validate(true);
+
+// test table without index
+if (upgrade && !isNamespaceEnabled) {
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
 }
 
 @After



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.

2019-12-19 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new adc5897  PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
adc5897 is described below

commit adc58977c9e1069345c82f838b39083fa3fb6e4a
Author: Lars Hofhansl 
AuthorDate: Thu Dec 19 01:02:05 2019 -0800

PHOENIX-5616 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 38 ++
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 16f99e3..5a2cef9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -310,44 +310,40 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 }
 
 @Test
-public void testToolWithIncorrectTables() throws Exception {
+public void testDryRunAndFailures() throws Exception {
 validate(true);
+
+// test with incorrect table
 iut.setInputTables("TEST3.TABLE_NOT_PRESENT");
 iut.prepareToolSetup();
 
 int status = iut.executeTool();
 Assert.assertEquals(-1, status);
 validate(true);
-}
 
-@Test
-public void testToolWithNoIndex() throws Exception {
-if (!upgrade || isNamespaceEnabled) {
-return;
-}
-conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
-+ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
-iut.setInputTables("TEST.NEW_TABLE");
-iut.prepareToolSetup();
-int status = iut.executeTool();
-Assert.assertEquals(0, status);
-conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
-}
-
-@Test
-public void testToolWithInputFileParameter() throws Exception {
+// test with input file parameter
 BufferedWriter writer = new BufferedWriter(new FileWriter(new File(INPUT_FILE)));
 writer.write(INPUT_LIST);
 writer.close();
 
-validate(true);
-
 iut.setInputTables(null);
 iut.setInputFile(INPUT_FILE);
 iut.prepareToolSetup();
-iut.executeTool();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
 
 validate(true);
+
+// test table without index
+if (upgrade && !isNamespaceEnabled) {
+conn.createStatement().execute("CREATE TABLE TEST.NEW_TABLE (id bigint NOT NULL "
++ "PRIMARY KEY, a.name varchar, sal bigint, address varchar)" + tableDDLOptions);
+iut.setInputTables("TEST.NEW_TABLE");
+iut.prepareToolSetup();
+status = iut.executeTool();
+Assert.assertEquals(0, status);
+conn.createStatement().execute("DROP TABLE TEST.NEW_TABLE");
+}
 }
 
 @After



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.

2019-12-19 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 26c7680  PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.
26c7680 is described below

commit 26c768063ac5253d083820cdd43c5faa384ea459
Author: Lars Hofhansl 
AuthorDate: Thu Dec 19 00:55:04 2019 -0800

PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.
---
 phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java b/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
index ec4e920..102e97c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
@@ -18,10 +18,12 @@
 package org.apache.phoenix;
 
 import com.google.common.collect.ImmutableMap;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -37,7 +39,9 @@ public class Sandbox {
 public static void main(String[] args) throws Exception {
 System.out.println("Starting Phoenix sandbox");
 Configuration conf = HBaseConfiguration.create();
-BaseTest.setUpConfigForMiniCluster(conf, new ReadOnlyProps(ImmutableMap.of()));
+// unset test=true parameter
+BaseTest.setUpConfigForMiniCluster(conf, new ReadOnlyProps(
+ImmutableMap.<String, String>of(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, "")));
 
 final HBaseTestingUtility testUtil = new HBaseTestingUtility(conf);
 testUtil.startMiniCluster();
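The one-line change above works by overriding the mini-cluster's test-only JDBC arguments with an explicit empty value, so an external JDBC client connecting to the sandbox is not forced into test mode. A self-contained sketch of that override pattern using a plain map (the property key string and `test=true` default below are assumptions; the real constant is `QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB`):

```java
import java.util.HashMap;
import java.util.Map;

public class JdbcArgsOverrideSketch {
    // Stand-in for the mini-cluster defaults, which append test-only
    // arguments to every JDBC URL handed out by the embedded server.
    static Map<String, String> withOverride(String extraJdbcArgs) {
        Map<String, String> conf = new HashMap<>();
        conf.put("phoenix.jdbc.extra.arguments", "test=true"); // assumed key and default
        if (extraJdbcArgs != null) {
            // The patch passes "" so the test-only default is cleared.
            conf.put("phoenix.jdbc.extra.arguments", extraJdbcArgs);
        }
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(withOverride("").get("phoenix.jdbc.extra.arguments").isEmpty()); // prints true
    }
}
```

The design choice is explicit-override-wins: rather than removing the default from the shared test setup, the sandbox supplies an empty value that takes precedence only for its own configuration.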



[phoenix] branch master updated: PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.

2019-12-19 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 834d79c  PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.
834d79c is described below

commit 834d79c0e57665d7e0ae89bc0d5723c0846fbc3b
Author: Lars Hofhansl 
AuthorDate: Thu Dec 19 00:55:04 2019 -0800

PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.
---
 phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java b/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
index ec4e920..102e97c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
@@ -18,10 +18,12 @@
 package org.apache.phoenix;
 
 import com.google.common.collect.ImmutableMap;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -37,7 +39,9 @@ public class Sandbox {
 public static void main(String[] args) throws Exception {
 System.out.println("Starting Phoenix sandbox");
 Configuration conf = HBaseConfiguration.create();
-BaseTest.setUpConfigForMiniCluster(conf, new ReadOnlyProps(ImmutableMap.of()));
+// unset test=true parameter
+BaseTest.setUpConfigForMiniCluster(conf, new ReadOnlyProps(
+ImmutableMap.<String, String>of(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, "")));
 
 final HBaseTestingUtility testUtil = new HBaseTestingUtility(conf);
 testUtil.startMiniCluster();



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.

2019-12-19 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 8f4d9b2  PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.
8f4d9b2 is described below

commit 8f4d9b2d75e68f318f0bca5bec17556e0d9bde77
Author: Lars Hofhansl 
AuthorDate: Thu Dec 19 00:55:04 2019 -0800

PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.
---
 phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java b/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
index ec4e920..102e97c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
@@ -18,10 +18,12 @@
 package org.apache.phoenix;
 
 import com.google.common.collect.ImmutableMap;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -37,7 +39,9 @@ public class Sandbox {
 public static void main(String[] args) throws Exception {
 System.out.println("Starting Phoenix sandbox");
 Configuration conf = HBaseConfiguration.create();
-BaseTest.setUpConfigForMiniCluster(conf, new ReadOnlyProps(ImmutableMap.of()));
+// unset test=true parameter
+BaseTest.setUpConfigForMiniCluster(conf, new ReadOnlyProps(
+ImmutableMap.<String, String>of(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, "")));
 
 final HBaseTestingUtility testUtil = new HBaseTestingUtility(conf);
 testUtil.startMiniCluster();



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.

2019-12-19 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 6e20b4f  PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.
6e20b4f is described below

commit 6e20b4f9a4fa4d68c60cce104ba15d58ec5ebe9d
Author: Lars Hofhansl 
AuthorDate: Thu Dec 19 00:55:04 2019 -0800

PHOENIX-5617 Allow using the server side JDBC client in Phoenix Sandbox.
---
 phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java b/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
index ec4e920..102e97c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/Sandbox.java
@@ -18,10 +18,12 @@
 package org.apache.phoenix;
 
 import com.google.common.collect.ImmutableMap;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -37,7 +39,9 @@ public class Sandbox {
 public static void main(String[] args) throws Exception {
 System.out.println("Starting Phoenix sandbox");
 Configuration conf = HBaseConfiguration.create();
-BaseTest.setUpConfigForMiniCluster(conf, new ReadOnlyProps(ImmutableMap.of()));
+// unset test=true parameter
+BaseTest.setUpConfigForMiniCluster(conf, new ReadOnlyProps(
+ImmutableMap.<String, String>of(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, "")));
 
 final HBaseTestingUtility testUtil = new HBaseTestingUtility(conf);
 testUtil.startMiniCluster();



[phoenix] branch master updated: PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.

2019-12-12 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new e91b614  PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.
e91b614 is described below

commit e91b614d7e6f2867f3ac9930aff66311f779dded
Author: Lars Hofhansl 
AuthorDate: Thu Dec 12 09:10:19 2019 -0800

PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.
---
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java | 16 
 1 file changed, 16 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 86afe8d..fb626c4 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2490,6 +2490,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
 }
 }
 
+if (clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG && tableType == PTableType.VIEW) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+} catch (ClassNotFoundException e) {
+throw new IOException(e);
+}
+}
 return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS,
 EnvironmentEdgeManager.currentTimeMillis(), table, tableNamesToDelete, sharedTablesToDelete);
 }
@@ -2731,6 +2739,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
 return result;
 } else {
 table = buildTable(key, cacheKey, region, HConstants.LATEST_TIMESTAMP, clientVersion);
+if (clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG && type == PTableType.VIEW) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+} catch (ClassNotFoundException e) {
+throw new IOException(e);
+}
+}
 return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS, currentTime, table,
 tableNamesToDelete, sharedTablesToDelete);
 }
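The server-side shim in this patch boils down to: when a pre-splittable-SYSTEM.CATALOG client asks for a view, splice the parent's inherited columns back into the returned PTable before answering, since such clients cannot resolve them on their own. A toy model of that version-gated logic (the version number and column lists are illustrative stand-ins, not the real PTable or ViewUtil API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ViewCompatSketch {
    // Stand-in for MIN_SPLITTABLE_SYSTEM_CATALOG: the first client version
    // that resolves inherited view columns on its own.
    static final int MIN_SPLITTABLE_SYSTEM_CATALOG = 15;

    // Models the effect of ViewUtil.addDerivedColumnsAndIndexesFromParent:
    // parent columns come first, followed by the view's own columns.
    static List<String> resolveViewColumns(int clientVersion,
                                           List<String> viewCols,
                                           List<String> parentCols) {
        if (clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
            return viewCols; // a new client combines columns itself
        }
        List<String> combined = new ArrayList<>(parentCols);
        combined.addAll(viewCols);
        return combined;
    }

    public static void main(String[] args) {
        List<String> view = Arrays.asList("V_COL");
        List<String> parent = Arrays.asList("PK", "A");
        System.out.println(resolveViewColumns(14, view, parent)); // prints [PK, A, V_COL]
        System.out.println(resolveViewColumns(15, view, parent)); // prints [V_COL]
    }
}
```

Keeping the gate on the server side lets one server answer both old and new clients correctly without any client-side change.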



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.

2019-12-12 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new fb35da9  PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.
fb35da9 is described below

commit fb35da9a2ee898122c0814ddfa8de742ead9567b
Author: Lars Hofhansl 
AuthorDate: Thu Dec 12 09:10:19 2019 -0800

PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.
---
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java | 16 
 1 file changed, 16 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index dab77b6..12e2f12 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2473,6 +2473,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 }
 }
 
+if (clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG && tableType == PTableType.VIEW) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+} catch (ClassNotFoundException e) {
+throw new IOException(e);
+}
+}
 return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS,
 EnvironmentEdgeManager.currentTimeMillis(), table, tableNamesToDelete, sharedTablesToDelete);
 }
@@ -2713,6 +2721,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 return result;
 } else {
 table = buildTable(key, cacheKey, region, HConstants.LATEST_TIMESTAMP, clientVersion);
+if (clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG && type == PTableType.VIEW) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+} catch (ClassNotFoundException e) {
+throw new IOException(e);
+}
+}
 return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS, currentTime, table,
 tableNamesToDelete, sharedTablesToDelete);
 }



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.

2019-12-12 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 7cc5f06  PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.
7cc5f06 is described below

commit 7cc5f062df25e819fcba8d30942926be42cc50e2
Author: Lars Hofhansl 
AuthorDate: Thu Dec 12 09:10:19 2019 -0800

PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.
---
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java | 16 
 1 file changed, 16 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index dab77b6..12e2f12 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2473,6 +2473,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 }
 }
 
+if (clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG && tableType == PTableType.VIEW) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+} catch (ClassNotFoundException e) {
+throw new IOException(e);
+}
+}
 return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS,
 EnvironmentEdgeManager.currentTimeMillis(), table, tableNamesToDelete, sharedTablesToDelete);
 }
@@ -2713,6 +2721,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 return result;
 } else {
 table = buildTable(key, cacheKey, region, HConstants.LATEST_TIMESTAMP, clientVersion);
+if (clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG && type == PTableType.VIEW) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+} catch (ClassNotFoundException e) {
+throw new IOException(e);
+}
+}
 return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS, currentTime, table,
 tableNamesToDelete, sharedTablesToDelete);
 }



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.

2019-12-12 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 5e41585  PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.
5e41585 is described below

commit 5e4158536292ea83389efda89b9e2de9eb3a70ea
Author: Lars Hofhansl 
AuthorDate: Thu Dec 12 09:10:19 2019 -0800

PHOENIX-5610 Dropping a view or column with a 4.14 client raises an ArrayIndexOutOfBoundsException on 4.15 server.
---
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java | 16 
 1 file changed, 16 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index dab77b6..12e2f12 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2473,6 +2473,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 }
 }
 
+if (clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG && tableType == PTableType.VIEW) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+} catch (ClassNotFoundException e) {
+throw new IOException(e);
+}
+}
 return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS,
 EnvironmentEdgeManager.currentTimeMillis(), table, tableNamesToDelete, sharedTablesToDelete);
 }
@@ -2713,6 +2721,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 return result;
 } else {
 table = buildTable(key, cacheKey, region, HConstants.LATEST_TIMESTAMP, clientVersion);
+if (clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG && type == PTableType.VIEW) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+} catch (ClassNotFoundException e) {
+throw new IOException(e);
+}
+}
 return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS, currentTime, table,
 tableNamesToDelete, sharedTablesToDelete);
 }



[phoenix] branch master updated: PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.

2019-11-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new cdabf29  PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.
cdabf29 is described below

commit cdabf29aa7440c4a8b8c85b81542121ffcb7baac
Author: Lars Hofhansl 
AuthorDate: Mon Nov 25 16:20:58 2019 -0800

PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.
---
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java| 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 0be2383..c3739d4 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -603,6 +603,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
 getCoprocessorHost().preGetTable(Bytes.toString(tenantId), SchemaUtil.getTableName(schemaName, tableName),
 TableName.valueOf(table.getPhysicalName().getBytes()));
 
+if (request.getClientVersion() < MIN_SPLITTABLE_SYSTEM_CATALOG
+&& table.getType() == PTableType.VIEW
+&& table.getViewType() != ViewType.MAPPED) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+}
+}
 builder.setReturnCode(MetaDataProtos.MutationCode.TABLE_ALREADY_EXISTS);
 builder.setMutationTime(currentTime);
 if (blockWriteRebuildIndex) {
@@ -2823,14 +2831,13 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements RegionCopr
 }
 
 /**
- * Looks up the table locally if its present on this region, or else makes 
an rpc call
- * to look up the region using PhoenixRuntime.getTable
+ * Looks up the table locally if its present on this region.
  */
 private PTable doGetTable(byte[] tenantId, byte[] schemaName, byte[] 
tableName,
   long clientTimeStamp, RowLock rowLock, int 
clientVersion) throws IOException, SQLException {
 Region region = env.getRegion();
 final byte[] key = SchemaUtil.getTableKey(tenantId, schemaName, 
tableName);
-// if this region doesn't contain the metadata rows look up the table 
by using PhoenixRuntime.getTable
+// if this region doesn't contain the metadata rows then fail
 if (!region.getRegionInfo().containsRow(key)) {
 throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.GET_TABLE_ERROR)
 .setSchemaName(Bytes.toString(schemaName))
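The gist of the PHOENIX-5584 change above: when a pre-4.15 client fetches a view, the server must merge the parent's columns and indexes into the view's metadata before returning it, since older clients cannot do that themselves. A minimal sketch of the gate condition follows; the enums and the version constant's numeric value here are stand-ins for Phoenix's `PTableType`, `ViewType`, and `MIN_SPLITTABLE_SYSTEM_CATALOG` (the merge itself is done by `ViewUtil.addDerivedColumnsAndIndexesFromParent` in the real code):

```java
// Illustrative sketch only: mirrors the condition added in MetaDataEndpointImpl.
// The version constant's value is an assumption, not Phoenix's actual constant.
public class ViewMetadataGateSketch {
    // Pre-4.15 clients expect fully materialized view metadata.
    static final int MIN_SPLITTABLE_SYSTEM_CATALOG = 30;

    enum TableType { TABLE, VIEW, INDEX }
    enum ViewType { MAPPED, UPDATABLE, READ_ONLY }

    /**
     * True when the server should merge parent columns/indexes into the view
     * before returning it to the client. MAPPED views have no Phoenix parent
     * to merge from, so they are excluded.
     */
    static boolean mustMergeParent(int clientVersion, TableType type, ViewType viewType) {
        return clientVersion < MIN_SPLITTABLE_SYSTEM_CATALOG
                && type == TableType.VIEW
                && viewType != ViewType.MAPPED;
    }
}
```

Note the merge only happens on the old-client path; 4.15+ clients resolve the parent hierarchy themselves.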



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.

2019-11-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 5f7acf4  PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.
5f7acf4 is described below

commit 5f7acf46bf01e8e1e714c6a7556d794cc9c4c7cf
Author: Lars Hofhansl 
AuthorDate: Mon Nov 25 16:20:58 2019 -0800

PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.
---
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java| 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 9fc6020..21c0823 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -601,6 +601,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 getCoprocessorHost().preGetTable(Bytes.toString(tenantId), SchemaUtil.getTableName(schemaName, tableName),
 TableName.valueOf(table.getPhysicalName().getBytes()));
 
+if (request.getClientVersion() < MIN_SPLITTABLE_SYSTEM_CATALOG
+&& table.getType() == PTableType.VIEW
+&& table.getViewType() != ViewType.MAPPED) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+}
+}
 
 builder.setReturnCode(MetaDataProtos.MutationCode.TABLE_ALREADY_EXISTS);
 builder.setMutationTime(currentTime);
 if (blockWriteRebuildIndex) {
@@ -2806,14 +2814,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 }
 
 /**
- * Looks up the table locally if its present on this region, or else makes an rpc call
- * to look up the region using PhoenixRuntime.getTable
+ * Looks up the table locally if its present on this region.
  */
 private PTable doGetTable(byte[] tenantId, byte[] schemaName, byte[] tableName,
   long clientTimeStamp, RowLock rowLock, int clientVersion) throws IOException, SQLException {
 Region region = env.getRegion();
 final byte[] key = SchemaUtil.getTableKey(tenantId, schemaName, tableName);
-// if this region doesn't contain the metadata rows look up the table by using PhoenixRuntime.getTable
+// if this region doesn't contain the metadata rows then fail
 if (!region.getRegionInfo().containsRow(key)) {
 throw new SQLExceptionInfo.Builder(SQLExceptionCode.GET_TABLE_ERROR)
 .setSchemaName(Bytes.toString(schemaName))



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.

2019-11-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 35e19bd  PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.
35e19bd is described below

commit 35e19bde32613bca31b09c48e8047b979136a951
Author: Lars Hofhansl 
AuthorDate: Mon Nov 25 16:20:58 2019 -0800

PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.
---
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java| 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 9fc6020..21c0823 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -601,6 +601,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 getCoprocessorHost().preGetTable(Bytes.toString(tenantId), SchemaUtil.getTableName(schemaName, tableName),
 TableName.valueOf(table.getPhysicalName().getBytes()));
 
+if (request.getClientVersion() < MIN_SPLITTABLE_SYSTEM_CATALOG
+&& table.getType() == PTableType.VIEW
+&& table.getViewType() != ViewType.MAPPED) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+}
+}
 
 builder.setReturnCode(MetaDataProtos.MutationCode.TABLE_ALREADY_EXISTS);
 builder.setMutationTime(currentTime);
 if (blockWriteRebuildIndex) {
@@ -2806,14 +2814,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 }
 
 /**
- * Looks up the table locally if its present on this region, or else makes an rpc call
- * to look up the region using PhoenixRuntime.getTable
+ * Looks up the table locally if its present on this region.
  */
 private PTable doGetTable(byte[] tenantId, byte[] schemaName, byte[] tableName,
   long clientTimeStamp, RowLock rowLock, int clientVersion) throws IOException, SQLException {
 Region region = env.getRegion();
 final byte[] key = SchemaUtil.getTableKey(tenantId, schemaName, tableName);
-// if this region doesn't contain the metadata rows look up the table by using PhoenixRuntime.getTable
+// if this region doesn't contain the metadata rows then fail
 if (!region.getRegionInfo().containsRow(key)) {
 throw new SQLExceptionInfo.Builder(SQLExceptionCode.GET_TABLE_ERROR)
 .setSchemaName(Bytes.toString(schemaName))



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.

2019-11-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 8828b98  PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.
8828b98 is described below

commit 8828b98c75bb7c6df1fb2d287beb44815b0a9909
Author: Lars Hofhansl 
AuthorDate: Mon Nov 25 16:20:58 2019 -0800

PHOENIX-5584 Older clients don't get correct view metadata when a 4.15 client creates a view.
---
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java| 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 9fc6020..21c0823 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -601,6 +601,14 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 getCoprocessorHost().preGetTable(Bytes.toString(tenantId), SchemaUtil.getTableName(schemaName, tableName),
 TableName.valueOf(table.getPhysicalName().getBytes()));
 
+if (request.getClientVersion() < MIN_SPLITTABLE_SYSTEM_CATALOG
+&& table.getType() == PTableType.VIEW
+&& table.getViewType() != ViewType.MAPPED) {
+try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+PTable pTable = PhoenixRuntime.getTableNoCache(connection, table.getParentName().getString());
+table = ViewUtil.addDerivedColumnsAndIndexesFromParent(connection, table, pTable);
+}
+}
 
 builder.setReturnCode(MetaDataProtos.MutationCode.TABLE_ALREADY_EXISTS);
 builder.setMutationTime(currentTime);
 if (blockWriteRebuildIndex) {
@@ -2806,14 +2814,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 }
 
 /**
- * Looks up the table locally if its present on this region, or else makes an rpc call
- * to look up the region using PhoenixRuntime.getTable
+ * Looks up the table locally if its present on this region.
  */
 private PTable doGetTable(byte[] tenantId, byte[] schemaName, byte[] tableName,
   long clientTimeStamp, RowLock rowLock, int clientVersion) throws IOException, SQLException {
 Region region = env.getRegion();
 final byte[] key = SchemaUtil.getTableKey(tenantId, schemaName, tableName);
-// if this region doesn't contain the metadata rows look up the table by using PhoenixRuntime.getTable
+// if this region doesn't contain the metadata rows then fail
 if (!region.getRegionInfo().containsRow(key)) {
 throw new SQLExceptionInfo.Builder(SQLExceptionCode.GET_TABLE_ERROR)
 .setSchemaName(Bytes.toString(schemaName))
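The second hunk in each of these PHOENIX-5584 commits also tightens `doGetTable`'s contract: instead of falling back to an RPC through `PhoenixRuntime.getTable`, it now fails fast when the hosting region does not contain the requested metadata row. A rough sketch of that fail-fast contract, with the region boundaries and row key simplified to longs (the real method consults `RegionInfo.containsRow` and throws a Phoenix `SQLException` built from `SQLExceptionCode.GET_TABLE_ERROR`):

```java
// Simplified stand-in for the region/row-key containment check; types and the
// exception used here are illustrative, not the HBase/Phoenix API.
public class DoGetTableSketch {
    static String getTableLocally(long regionStartKey, long regionEndKey, long tableKey) {
        // Fail fast instead of issuing an RPC when the key is out of this region's range.
        if (tableKey < regionStartKey || tableKey >= regionEndKey) {
            throw new IllegalStateException("GET_TABLE_ERROR: key not on this region");
        }
        return "table@" + tableKey; // placeholder for the locally built PTable
    }
}
```

The design point: a server-side coprocessor should never issue a blocking cross-region RPC from inside a request handler, so the lookup either succeeds locally or errors out.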



[phoenix] branch master updated: PHOENIX-5559 Fix remaining issues with Long viewIndexIds.

2019-11-13 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 910b72b  PHOENIX-5559 Fix remaining issues with Long viewIndexIds.
910b72b is described below

commit 910b72bf5d3b51a0c30ce43d9b19c0ce089cda62
Author: Lars Hofhansl 
AuthorDate: Wed Nov 13 10:21:03 2019 -0800

PHOENIX-5559 Fix remaining issues with Long viewIndexIds.
---
 .../end2end/BaseTenantSpecificViewIndexIT.java |  10 +-
 .../org/apache/phoenix/end2end/BaseViewIT.java |   8 +-
 .../phoenix/end2end/TenantSpecificViewIndexIT.java |   4 +-
 .../java/org/apache/phoenix/end2end/UpgradeIT.java |   2 +-
 .../it/java/org/apache/phoenix/end2end/ViewIT.java |  10 +-
 .../index/ChildViewsUseParentViewIndexIT.java  |   4 +-
 .../end2end/index/GlobalIndexOptimizationIT.java   |   2 +-
 .../apache/phoenix/end2end/index/IndexUsageIT.java |   4 +-
 .../apache/phoenix/end2end/index/LocalIndexIT.java |   2 +-
 .../end2end/index/MutableIndexFailureIT.java   |   2 +-
 .../phoenix/end2end/index/ShortViewIndexIdIT.java  | 104 +
 .../apache/phoenix/end2end/index/ViewIndexIT.java  |   4 +-
 .../coprocessor/BaseScannerRegionObserver.java |  13 ++-
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |   6 +-
 .../org/apache/phoenix/iterate/ExplainTable.java   |   2 +-
 .../java/org/apache/phoenix/util/MetaDataUtil.java |  18 
 16 files changed, 163 insertions(+), 32 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
index 216e2d3..9860624 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
@@ -140,18 +140,18 @@ public class BaseTenantSpecificViewIndexIT extends SplitSystemCatalogIT {
 ResultSet rs = conn.createStatement().executeQuery("EXPLAIN SELECT k1, k2, v2 FROM " + viewName + " WHERE v2='" + valuePrefix + "v2-1'");
 if(localIndex){
 assertEquals(saltBuckets == null ? 
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + tableName + " [" + Long.toString(1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + tableName + " [" + (1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY\n"
 + "CLIENT MERGE SORT" :
-"CLIENT PARALLEL 3-WAY RANGE SCAN OVER " + tableName + " [" + Long.toString(1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER " + tableName + " [" + (1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY\n"
 + "CLIENT MERGE SORT", QueryUtil.getExplainPlan(rs));
 } else {
 String expected = saltBuckets == null ? 
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER _IDX_" + tableName + " [" + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER _IDX_" + tableName + " [" + (Short.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY" :
-"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_" + tableName + " [0," + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix +
-"v2-1'] - ["+(saltBuckets.intValue()-1)+"," + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_" + tableName + " [0," + (Short.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix +
+"v2-1'] - ["+(saltBuckets.intValue()-1)+"," + (Short.MIN_VALUE + expectedIndexIdOffset) +
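The PHOENIX-5559 test updates above reflect a representational change: the view index id that prefixes these EXPLAIN plan keys is now rendered from a `Short` base (`Short.MIN_VALUE`, i.e. -32768) instead of `Long.MIN_VALUE`. A small sketch of why the expected strings changed (purely illustrative arithmetic, not Phoenix code):

```java
// Illustrative only: contrasts the old Long-based and new Short-based
// rendering of the first view index id (offset 0) plus an offset.
public class ViewIndexIdSketch {
    static String renderLongBased(int offset) {
        return Long.toString(Long.MIN_VALUE + offset);     // old expected plan prefix
    }
    static String renderShortBased(int offset) {
        return Integer.toString(Short.MIN_VALUE + offset); // new expected plan prefix
    }
}
```

The shorter encoding matters for compatibility: indexes created by older clients carry 2-byte (SMALLINT) view index ids, so plans and row keys must agree on that width.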

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5559 Fix remaining issues with Long viewIndexIds.

2019-11-13 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 0c5ae7b  PHOENIX-5559 Fix remaining issues with Long viewIndexIds.
0c5ae7b is described below

commit 0c5ae7b3a64f2bec6b9bc08cb8fbe204eae9528b
Author: Lars Hofhansl 
AuthorDate: Wed Nov 13 10:21:03 2019 -0800

PHOENIX-5559 Fix remaining issues with Long viewIndexIds.
---
 .../end2end/BaseTenantSpecificViewIndexIT.java |  10 +-
 .../org/apache/phoenix/end2end/BaseViewIT.java |   8 +-
 .../phoenix/end2end/TenantSpecificViewIndexIT.java |   4 +-
 .../java/org/apache/phoenix/end2end/UpgradeIT.java |   2 +-
 .../it/java/org/apache/phoenix/end2end/ViewIT.java |  10 +-
 .../index/ChildViewsUseParentViewIndexIT.java  |   4 +-
 .../end2end/index/GlobalIndexOptimizationIT.java   |   2 +-
 .../apache/phoenix/end2end/index/IndexUsageIT.java |   4 +-
 .../apache/phoenix/end2end/index/LocalIndexIT.java |   2 +-
 .../end2end/index/MutableIndexFailureIT.java   |   2 +-
 .../phoenix/end2end/index/ShortViewIndexIdIT.java  | 104 +
 .../apache/phoenix/end2end/index/ViewIndexIT.java  |   4 +-
 .../coprocessor/BaseScannerRegionObserver.java |  13 ++-
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |   6 +-
 .../org/apache/phoenix/iterate/ExplainTable.java   |   2 +-
 .../java/org/apache/phoenix/util/MetaDataUtil.java |  18 
 16 files changed, 163 insertions(+), 32 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
index 216e2d3..9860624 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
@@ -140,18 +140,18 @@ public class BaseTenantSpecificViewIndexIT extends SplitSystemCatalogIT {
 ResultSet rs = conn.createStatement().executeQuery("EXPLAIN SELECT k1, k2, v2 FROM " + viewName + " WHERE v2='" + valuePrefix + "v2-1'");
 if(localIndex){
 assertEquals(saltBuckets == null ? 
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + tableName + " [" + Long.toString(1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + tableName + " [" + (1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY\n"
 + "CLIENT MERGE SORT" :
-"CLIENT PARALLEL 3-WAY RANGE SCAN OVER " + tableName + " [" + Long.toString(1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER " + tableName + " [" + (1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY\n"
 + "CLIENT MERGE SORT", QueryUtil.getExplainPlan(rs));
 } else {
 String expected = saltBuckets == null ? 
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER _IDX_" + tableName + " [" + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER _IDX_" + tableName + " [" + (Short.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY" :
-"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_" + tableName + " [0," + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix +
-"v2-1'] - ["+(saltBuckets.intValue()-1)+"," + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_" + tableName + " [0," + (Short.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix +
+"v2-1'] - ["+(saltBuckets.intValue()-1)+"," + (Short.MIN_VALUE + expectedI

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5559 Fix remaining issues with Long viewIndexIds.

2019-11-13 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 2c6986e  PHOENIX-5559 Fix remaining issues with Long viewIndexIds.
2c6986e is described below

commit 2c6986efccfa7d9bb323831334816ac6b9cb7d6d
Author: Lars Hofhansl 
AuthorDate: Wed Nov 13 10:21:03 2019 -0800

PHOENIX-5559 Fix remaining issues with Long viewIndexIds.
---
 .../end2end/BaseTenantSpecificViewIndexIT.java |  10 +-
 .../org/apache/phoenix/end2end/BaseViewIT.java |   8 +-
 .../phoenix/end2end/TenantSpecificViewIndexIT.java |   4 +-
 .../java/org/apache/phoenix/end2end/UpgradeIT.java |   2 +-
 .../it/java/org/apache/phoenix/end2end/ViewIT.java |  10 +-
 .../index/ChildViewsUseParentViewIndexIT.java  |   4 +-
 .../end2end/index/GlobalIndexOptimizationIT.java   |   2 +-
 .../apache/phoenix/end2end/index/IndexUsageIT.java |   4 +-
 .../apache/phoenix/end2end/index/LocalIndexIT.java |   2 +-
 .../end2end/index/MutableIndexFailureIT.java   |   2 +-
 .../phoenix/end2end/index/ShortViewIndexIdIT.java  | 104 +
 .../apache/phoenix/end2end/index/ViewIndexIT.java  |   4 +-
 .../coprocessor/BaseScannerRegionObserver.java |  13 ++-
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |   6 +-
 .../org/apache/phoenix/iterate/ExplainTable.java   |   2 +-
 .../java/org/apache/phoenix/util/MetaDataUtil.java |  18 
 16 files changed, 163 insertions(+), 32 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
index 216e2d3..9860624 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
@@ -140,18 +140,18 @@ public class BaseTenantSpecificViewIndexIT extends SplitSystemCatalogIT {
 ResultSet rs = conn.createStatement().executeQuery("EXPLAIN SELECT k1, k2, v2 FROM " + viewName + " WHERE v2='" + valuePrefix + "v2-1'");
 if(localIndex){
 assertEquals(saltBuckets == null ? 
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + tableName + " [" + Long.toString(1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + tableName + " [" + (1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY\n"
 + "CLIENT MERGE SORT" :
-"CLIENT PARALLEL 3-WAY RANGE SCAN OVER " + tableName + " [" + Long.toString(1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER " + tableName + " [" + (1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY\n"
 + "CLIENT MERGE SORT", QueryUtil.getExplainPlan(rs));
 } else {
 String expected = saltBuckets == null ? 
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER _IDX_" + tableName + " [" + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER _IDX_" + tableName + " [" + (Short.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY" :
-"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_" + tableName + " [0," + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix +
-"v2-1'] - ["+(saltBuckets.intValue()-1)+"," + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_" + tableName + " [0," + (Short.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix +
+"v2-1'] - ["+(saltBuckets.intValue()-1)+"," + (Short.MIN_VALUE + expectedI

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5559 Fix remaining issues with Long viewIndexIds.

2019-11-13 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new a160314  PHOENIX-5559 Fix remaining issues with Long viewIndexIds.
a160314 is described below

commit a16031441823bacc872ec8f832594d2280c6e83b
Author: Lars Hofhansl 
AuthorDate: Wed Nov 13 10:21:03 2019 -0800

PHOENIX-5559 Fix remaining issues with Long viewIndexIds.
---
 .../end2end/BaseTenantSpecificViewIndexIT.java |  10 +-
 .../org/apache/phoenix/end2end/BaseViewIT.java |   8 +-
 .../phoenix/end2end/TenantSpecificViewIndexIT.java |   4 +-
 .../java/org/apache/phoenix/end2end/UpgradeIT.java |   2 +-
 .../it/java/org/apache/phoenix/end2end/ViewIT.java |  10 +-
 .../index/ChildViewsUseParentViewIndexIT.java  |   4 +-
 .../end2end/index/GlobalIndexOptimizationIT.java   |   2 +-
 .../apache/phoenix/end2end/index/IndexUsageIT.java |   4 +-
 .../apache/phoenix/end2end/index/LocalIndexIT.java |   2 +-
 .../end2end/index/MutableIndexFailureIT.java   |   2 +-
 .../phoenix/end2end/index/ShortViewIndexIdIT.java  | 104 +
 .../apache/phoenix/end2end/index/ViewIndexIT.java  |   4 +-
 .../coprocessor/BaseScannerRegionObserver.java |  13 ++-
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |   6 +-
 .../org/apache/phoenix/iterate/ExplainTable.java   |   2 +-
 .../java/org/apache/phoenix/util/MetaDataUtil.java |  18 
 16 files changed, 163 insertions(+), 32 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
index 216e2d3..9860624 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
@@ -140,18 +140,18 @@ public class BaseTenantSpecificViewIndexIT extends SplitSystemCatalogIT {
 ResultSet rs = conn.createStatement().executeQuery("EXPLAIN SELECT k1, k2, v2 FROM " + viewName + " WHERE v2='" + valuePrefix + "v2-1'");
 if(localIndex){
 assertEquals(saltBuckets == null ? 
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + tableName + " [" + Long.toString(1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + tableName + " [" + (1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY\n"
 + "CLIENT MERGE SORT" :
-"CLIENT PARALLEL 3-WAY RANGE SCAN OVER " + tableName + " [" + Long.toString(1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER " + tableName + " [" + (1L + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY\n"
 + "CLIENT MERGE SORT", QueryUtil.getExplainPlan(rs));
 } else {
 String expected = saltBuckets == null ? 
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER _IDX_" + tableName + " [" + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER _IDX_" + tableName + " [" + (Short.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
 + "SERVER FILTER BY FIRST KEY ONLY" :
-"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_" + tableName + " [0," + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix +
-"v2-1'] - ["+(saltBuckets.intValue()-1)+"," + Long.toString(Long.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix + "v2-1']\n"
+"CLIENT PARALLEL 3-WAY RANGE SCAN OVER _IDX_" + tableName + " [0," + (Short.MIN_VALUE + expectedIndexIdOffset) + ",'" + tenantId + "','" + valuePrefix +
+"v2-1'] - ["+(saltBuckets.intValue()-1)+"," + (Short.MIN_VALUE + expectedI

[phoenix] branch master updated: PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.

2019-10-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new f3f722e  PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
f3f722e is described below

commit f3f722e4f29293885f1854cca9dd4cd37e6ff085
Author: Lars Hofhansl 
AuthorDate: Thu Oct 24 08:47:44 2019 -0700

PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 46 ++
 1 file changed, 46 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 8c80cd3..312602b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1735,6 +1735,45 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
 byte[][] parentPhysicalSchemaTableNames = new byte[3][];
 getParentAndPhysicalNames(tableMetadata, parentSchemaTableNames, parentPhysicalSchemaTableNames);
 if (parentPhysicalSchemaTableNames[2] != null) {
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to
+// a 4.15.0+ server.
+// In that case we need to resolve the parent table on
+// the server.
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentPhysicalSchemaTableNames[1],
+parentPhysicalSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+builder.setReturnCode(
+MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+if (parentSchemaTableNames[2] != null
+&& Bytes.compareTo(parentSchemaTableNames[2],
+parentPhysicalSchemaTableNames[2]) != 0) {
+// if view is created on view
+byte[] tenantId = parentSchemaTableNames[0] == null
+? ByteUtil.EMPTY_BYTE_ARRAY
+: parentSchemaTableNames[0];
+parentTable = doGetTable(tenantId, parentSchemaTableNames[1],
+parentSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+// it could be a global view
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentSchemaTableNames[1], parentSchemaTableNames[2],
+clientTimeStamp, clientVersion);
+}
+}
+if (parentTable == null) {
+builder.setReturnCode(
+MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+}
 parentTableKey = SchemaUtil.getTableKey(ByteUtil.EMPTY_BYTE_ARRAY,
 parentPhysicalSchemaTableNames[1], parentPhysicalSchemaTableNames[2]);
 cParentPhysicalName = parentTable.getPhysicalName().getBytes();
@@ -1757,6 +1796,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
  */
 parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
 parentTableKey = SchemaUtil.getTableKey(tenantIdBytes, parentSchemaName, parentTableName);
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to a 4.15.0+ server.
+// In that case we need to resolve the parent table on the server.
+parentTable =
+doGetTable(tenantIdBytes, parentSchemaName, parentTableName, clientTimeSta
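The control flow PHOENIX-5533 adds can be summarized as a fallback chain: when a 4.14 client sends no parent table, the server first resolves the parent by its physical name with an empty tenant id; if the view sits on another view, it re-resolves by the tenant-scoped parent name and then by the global name; if every lookup misses, it reports `PARENT_TABLE_NOT_FOUND`. A minimal sketch of that chain, with a `Map` standing in for the `doGetTable` lookups (the method and catalog here are illustrative, not Phoenix API):

```java
import java.util.Map;

// Illustrative fallback chain; Map lookups stand in for doGetTable calls.
public class ParentResolveSketch {
    /** Returns the resolved parent name, or null for PARENT_TABLE_NOT_FOUND. */
    static String resolveParent(Map<String, String> catalog, String physicalName,
                                String tenantScopedName, String globalName) {
        String parent = catalog.get(physicalName);            // 1) physical table
        if (parent == null) return null;
        if (tenantScopedName != null && !tenantScopedName.equals(physicalName)) {
            // View created on another view: prefer the tenant-scoped parent,
            // then fall back to a global view of the same name.
            parent = catalog.get(tenantScopedName);           // 2) tenant-scoped parent
            if (parent == null) {
                parent = catalog.get(globalName);             // 3) global view
            }
        }
        return parent;
    }
}
```

Resolving on the server keeps the old client's request shape unchanged, which is what avoids the NullPointerException the bug describes.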

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.

2019-10-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 2ed532f  PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
2ed532f is described below

commit 2ed532f7d1e6574af246abe62ff92d0ff7e4f8b1
Author: Lars Hofhansl 
AuthorDate: Thu Oct 24 08:47:44 2019 -0700

PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 46 ++
 1 file changed, 46 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 6df5bf8..7558b8d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1730,6 +1730,45 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
             byte[][] parentPhysicalSchemaTableNames = new byte[3][];
             getParentAndPhysicalNames(tableMetadata, parentSchemaTableNames, parentPhysicalSchemaTableNames);
             if (parentPhysicalSchemaTableNames[2] != null) {
+                if (parentTable == null) {
+                    // This is needed when we connect with a 4.14 client to
+                    // a 4.15.0+ server.
+                    // In that case we need to resolve the parent table on
+                    // the server.
+                    parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+                            parentPhysicalSchemaTableNames[1],
+                            parentPhysicalSchemaTableNames[2], clientTimeStamp, clientVersion);
+                    if (parentTable == null) {
+                        builder.setReturnCode(
+                                MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+                        done.run(builder.build());
+                        return;
+                    }
+                    if (parentSchemaTableNames[2] != null
+                            && Bytes.compareTo(parentSchemaTableNames[2],
+                                    parentPhysicalSchemaTableNames[2]) != 0) {
+                        // if view is created on view
+                        byte[] tenantId = parentSchemaTableNames[0] == null
+                                ? ByteUtil.EMPTY_BYTE_ARRAY
+                                : parentSchemaTableNames[0];
+                        parentTable = doGetTable(tenantId, parentSchemaTableNames[1],
+                                parentSchemaTableNames[2], clientTimeStamp, clientVersion);
+                        if (parentTable == null) {
+                            // it could be a global view
+                            parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+                                    parentSchemaTableNames[1], parentSchemaTableNames[2],
+                                    clientTimeStamp, clientVersion);
+                        }
+                    }
+                    if (parentTable == null) {
+                        builder.setReturnCode(
+                                MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+                        done.run(builder.build());
+                        return;
+                    }
+                }
                 parentTableKey = SchemaUtil.getTableKey(ByteUtil.EMPTY_BYTE_ARRAY,
                         parentPhysicalSchemaTableNames[1], parentPhysicalSchemaTableNames[2]);
                 cParentPhysicalName = parentTable.getPhysicalName().getBytes();
@@ -1752,6 +1791,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
              */
             parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
             parentTableKey = SchemaUtil.getTableKey(tenantIdBytes, parentSchemaName, parentTableName);
+            if (parentTable == null) {
+                // This is needed when we connect with a 4.14 client to a 4.15.0+ server.
+                // In that case we need to resolve the parent table on the server.
+                parentTable =
+                        doGetTable(tenantIdBytes, parentSchemaName, parentT
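The resolution order in the patch above (physical parent first, then the tenant-scoped logical parent for a view on a view, then a global fallback, with PARENT_TABLE_NOT_FOUND when everything misses) can be sketched in isolation. Everything below is a hypothetical stand-in: the string-keyed `catalog` map and the `tenant:name` keys are illustrative substitutes for `doGetTable()` and SYSTEM.CATALOG, not Phoenix code.

```java
import java.util.HashMap;
import java.util.Map;

public class ParentLookupSketch {
    // Stand-in for ByteUtil.EMPTY_BYTE_ARRAY: the "no tenant" scope.
    static final String GLOBAL_TENANT = "";

    static String resolveParent(Map<String, String> catalog, String tenantId,
                                String logicalName, String physicalName) {
        // Step 1: resolve the physical parent globally (the patch's first doGetTable call).
        String parent = catalog.get(GLOBAL_TENANT + ":" + physicalName);
        if (parent == null) {
            return null; // the caller would report PARENT_TABLE_NOT_FOUND
        }
        // Step 2: a view on a view has a logical parent distinct from the
        // physical table, so prefer a tenant-scoped entry for it.
        if (!logicalName.equals(physicalName)) {
            String scoped = catalog.get(tenantId + ":" + logicalName);
            if (scoped != null) {
                return scoped;
            }
            // Step 3: "it could be a global view" -- retry without the tenant.
            return catalog.get(GLOBAL_TENANT + ":" + logicalName);
        }
        return parent;
    }

    public static void main(String[] args) {
        Map<String, String> catalog = new HashMap<>();
        catalog.put(":T", "table T");
        catalog.put(":V1", "global view V1");
        System.out.println(resolveParent(catalog, "tenant1", "V1", "T")); // global view V1
        System.out.println(resolveParent(catalog, "tenant1", "T", "T"));  // table T
    }
}
```

The early-return structure mirrors the diff: each failed lookup either falls through to the next scope or surfaces as a not-found response to the old client.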

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.

2019-10-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new dd662b1  PHOENIX-5533 Creating a view or index with a 4.14 client and 
4.15.0 server fails with a NullPointerException.
dd662b1 is described below

commit dd662b1b92971ed3a377f49736759f375164e445
Author: Lars Hofhansl 
AuthorDate: Thu Oct 24 08:47:44 2019 -0700

PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server 
fails with a NullPointerException.
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 46 ++
 1 file changed, 46 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 6df5bf8..7558b8d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1730,6 +1730,45 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
             byte[][] parentPhysicalSchemaTableNames = new byte[3][];
             getParentAndPhysicalNames(tableMetadata, parentSchemaTableNames, parentPhysicalSchemaTableNames);
             if (parentPhysicalSchemaTableNames[2] != null) {
+                if (parentTable == null) {
+                    // This is needed when we connect with a 4.14 client to
+                    // a 4.15.0+ server.
+                    // In that case we need to resolve the parent table on
+                    // the server.
+                    parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+                            parentPhysicalSchemaTableNames[1],
+                            parentPhysicalSchemaTableNames[2], clientTimeStamp, clientVersion);
+                    if (parentTable == null) {
+                        builder.setReturnCode(
+                                MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+                        done.run(builder.build());
+                        return;
+                    }
+                    if (parentSchemaTableNames[2] != null
+                            && Bytes.compareTo(parentSchemaTableNames[2],
+                                    parentPhysicalSchemaTableNames[2]) != 0) {
+                        // if view is created on view
+                        byte[] tenantId = parentSchemaTableNames[0] == null
+                                ? ByteUtil.EMPTY_BYTE_ARRAY
+                                : parentSchemaTableNames[0];
+                        parentTable = doGetTable(tenantId, parentSchemaTableNames[1],
+                                parentSchemaTableNames[2], clientTimeStamp, clientVersion);
+                        if (parentTable == null) {
+                            // it could be a global view
+                            parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+                                    parentSchemaTableNames[1], parentSchemaTableNames[2],
+                                    clientTimeStamp, clientVersion);
+                        }
+                    }
+                    if (parentTable == null) {
+                        builder.setReturnCode(
+                                MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+                        done.run(builder.build());
+                        return;
+                    }
+                }
                 parentTableKey = SchemaUtil.getTableKey(ByteUtil.EMPTY_BYTE_ARRAY,
                         parentPhysicalSchemaTableNames[1], parentPhysicalSchemaTableNames[2]);
                 cParentPhysicalName = parentTable.getPhysicalName().getBytes();
@@ -1752,6 +1791,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
              */
             parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
             parentTableKey = SchemaUtil.getTableKey(tenantIdBytes, parentSchemaName, parentTableName);
+            if (parentTable == null) {
+                // This is needed when we connect with a 4.14 client to a 4.15.0+ server.
+                // In that case we need to resolve the parent table on the server.
+                parentTable =
+                        doGetTable(tenantIdBytes, parentSchemaName, parentT

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.

2019-10-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 0b9a039  PHOENIX-5533 Creating a view or index with a 4.14 client and 
4.15.0 server fails with a NullPointerException.
0b9a039 is described below

commit 0b9a0395554dcf72ece54c131fb628e7c3329902
Author: Lars Hofhansl 
AuthorDate: Thu Oct 24 08:47:44 2019 -0700

PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server 
fails with a NullPointerException.
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 46 ++
 1 file changed, 46 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 6df5bf8..7558b8d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1730,6 +1730,45 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
             byte[][] parentPhysicalSchemaTableNames = new byte[3][];
             getParentAndPhysicalNames(tableMetadata, parentSchemaTableNames, parentPhysicalSchemaTableNames);
             if (parentPhysicalSchemaTableNames[2] != null) {
+                if (parentTable == null) {
+                    // This is needed when we connect with a 4.14 client to
+                    // a 4.15.0+ server.
+                    // In that case we need to resolve the parent table on
+                    // the server.
+                    parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+                            parentPhysicalSchemaTableNames[1],
+                            parentPhysicalSchemaTableNames[2], clientTimeStamp, clientVersion);
+                    if (parentTable == null) {
+                        builder.setReturnCode(
+                                MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+                        done.run(builder.build());
+                        return;
+                    }
+                    if (parentSchemaTableNames[2] != null
+                            && Bytes.compareTo(parentSchemaTableNames[2],
+                                    parentPhysicalSchemaTableNames[2]) != 0) {
+                        // if view is created on view
+                        byte[] tenantId = parentSchemaTableNames[0] == null
+                                ? ByteUtil.EMPTY_BYTE_ARRAY
+                                : parentSchemaTableNames[0];
+                        parentTable = doGetTable(tenantId, parentSchemaTableNames[1],
+                                parentSchemaTableNames[2], clientTimeStamp, clientVersion);
+                        if (parentTable == null) {
+                            // it could be a global view
+                            parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+                                    parentSchemaTableNames[1], parentSchemaTableNames[2],
+                                    clientTimeStamp, clientVersion);
+                        }
+                    }
+                    if (parentTable == null) {
+                        builder.setReturnCode(
+                                MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+                        done.run(builder.build());
+                        return;
+                    }
+                }
                 parentTableKey = SchemaUtil.getTableKey(ByteUtil.EMPTY_BYTE_ARRAY,
                         parentPhysicalSchemaTableNames[1], parentPhysicalSchemaTableNames[2]);
                 cParentPhysicalName = parentTable.getPhysicalName().getBytes();
@@ -1752,6 +1791,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
              */
             parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
             parentTableKey = SchemaUtil.getTableKey(tenantIdBytes, parentSchemaName, parentTableName);
+            if (parentTable == null) {
+                // This is needed when we connect with a 4.14 client to a 4.15.0+ server.
+                // In that case we need to resolve the parent table on the server.
+                parentTable =
+                        doGetTable(tenantIdBytes, parentSchemaName, parentT

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5523 Prepare for newly released HBase 1.5.0.

2019-10-14 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 7fb4528  PHOENIX-5523 Prepare for newly released HBase 1.5.0.
7fb4528 is described below

commit 7fb4528ea47bcc8ae0f71268d0690e2cd5ac0c9c
Author: Lars Hofhansl 
AuthorDate: Mon Oct 14 21:41:05 2019 -0700

PHOENIX-5523 Prepare for newly released HBase 1.5.0.
---
 .../java/org/apache/phoenix/coprocessor/DelegateRegionObserver.java  | 5 +
 pom.xml  | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionObserver.java
index 9724126..6855da9 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionObserver.java
@@ -680,6 +680,11 @@ public class DelegateRegionObserver implements RegionObserver {
     }
 
     @Override
+    public void preWALAppend(ObserverContext ctx, WALKey key,
+            WALEdit edit) throws IOException {
+    }
+
+    @Override
     public InternalScanner preFlushScannerOpen(ObserverContext c,
             Store store, KeyValueScanner memstoreScanner, InternalScanner s, long readPoint)
             throws IOException {
diff --git a/pom.xml b/pom.xml
index ff14eaf..ad8e9c4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -79,7 +79,7 @@
 ${project.basedir}
 
 
-1.5.0-SNAPSHOT
+1.5.0
 2.7.5
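For context on the DelegateRegionObserver change above: once HBase 1.5.0 added `preWALAppend` to the `RegionObserver` interface, a concrete pass-through delegate no longer compiled without a body for it, and a no-op is the behavior-preserving choice. The sketch below shows the pattern with a hypothetical `Observer` interface, not HBase's real types.

```java
// Hypothetical minimal observer API; the second method plays the role of the
// hook newly added in the 1.5.0 interface.
interface Observer {
    void preFlush();
    void preWALAppend(String walKey, String walEdit);
}

// A pass-through delegate: existing hooks forward, the new hook is a no-op so
// the class compiles against the widened interface without changing behavior.
class DelegateObserver implements Observer {
    private final Observer delegate;

    DelegateObserver(Observer delegate) {
        this.delegate = delegate;
    }

    @Override
    public void preFlush() {
        delegate.preFlush(); // forwarded to the wrapped observer
    }

    @Override
    public void preWALAppend(String walKey, String walEdit) {
        // Intentionally empty, mirroring the patch.
    }
}

public class PreWalAppendSketch {
    public static void main(String[] args) {
        final boolean[] flushed = {false};
        Observer inner = new Observer() {
            public void preFlush() { flushed[0] = true; }
            public void preWALAppend(String k, String e) { throw new IllegalStateException(); }
        };
        DelegateObserver d = new DelegateObserver(inner);
        d.preFlush();             // forwarded: flips the flag
        d.preWALAppend("k", "e"); // swallowed by the no-op; inner never throws
        System.out.println(flushed[0]); // true
    }
}
```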
 
 



[phoenix] branch master updated: PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client should remove parent->child links from SYSTEM.CATALOG.

2019-10-05 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 74f8464  PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client 
should remove parent->child links from SYSTEM.CATALOG.
74f8464 is described below

commit 74f8464108a98b476c1c39b12150ae37861b7452
Author: Lars Hofhansl 
AuthorDate: Sat Oct 5 13:39:14 2019 -0700

PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client should remove 
parent->child links from SYSTEM.CATALOG.
---
 .../src/main/java/org/apache/phoenix/execute/MutationState.java | 6 +-
 .../src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java| 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
index 44760a8..d887468 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
@@ -161,7 +161,11 @@ public class MutationState implements SQLCloseable {
     }
 
     public MutationState(MutationState mutationState) {
-        this(mutationState.maxSize, mutationState.maxSizeBytes, mutationState.connection, true, mutationState
+        this(mutationState, mutationState.connection);
+    }
+
+    public MutationState(MutationState mutationState, PhoenixConnection connection) {
+        this(mutationState.maxSize, mutationState.maxSizeBytes, connection, true, mutationState
                 .getPhoenixTransactionContext());
     }
 
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
index d668758..988a7c6 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
@@ -372,7 +372,7 @@ public class PhoenixConnection implements Connection, MetaDataMutated, SQLClosea
         this.isRequestLevelMetricsEnabled = JDBCUtil.isCollectingRequestLevelMetricsEnabled(url, info,
                 this.services.getProps());
         this.mutationState = mutationState == null ? newMutationState(maxSize,
-                maxSizeBytes) : new MutationState(mutationState);
+                maxSizeBytes) : new MutationState(mutationState, this);
         this.metaData = metaData;
         this.metaData.pruneTables(pruner);
         this.metaData.pruneFunctions(pruner);
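The shape of the fix can be shown in miniature: the old single-argument copy constructor pinned the copy to the source object's connection, while the new two-argument overload lets the wrapping connection pass `this`. `Conn` and `State` below are illustrative stand-ins, not Phoenix's actual classes.

```java
// Hypothetical owner object, standing in for PhoenixConnection.
class Conn {
    final String url;
    Conn(String url) { this.url = url; }
}

// Hypothetical copyable state, standing in for MutationState.
class State {
    final long maxSize;
    final Conn connection;

    State(long maxSize, Conn connection) {
        this.maxSize = maxSize;
        this.connection = connection;
    }

    // Old behavior: the copy silently keeps the *source's* connection.
    State(State other) {
        this(other, other.connection);
    }

    // New overload (mirroring the patch): the caller that wraps the copied
    // state can bind it to itself instead.
    State(State other, Conn connection) {
        this(other.maxSize, connection);
    }
}

public class CopyStateSketch {
    public static void main(String[] args) {
        Conn oldConn = new Conn("jdbc:old");
        Conn newConn = new Conn("jdbc:new");
        State original = new State(100, oldConn);
        // As in `new MutationState(mutationState, this)` in the diff above:
        State rebound = new State(original, newConn);
        System.out.println(rebound.connection == newConn); // true
    }
}
```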



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client should remove parent->child links from SYSTEM.CATALOG.

2019-10-05 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 86af6ea  PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client 
should remove parent->child links from SYSTEM.CATALOG.
86af6ea is described below

commit 86af6ea2ccd24cfcba4fe8901c422afb55bf9751
Author: Lars Hofhansl 
AuthorDate: Sat Oct 5 13:39:14 2019 -0700

PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client should remove 
parent->child links from SYSTEM.CATALOG.
---
 .../src/main/java/org/apache/phoenix/execute/MutationState.java | 6 +-
 .../src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java| 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
index 856c6bc55..434d1f7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
@@ -161,7 +161,11 @@ public class MutationState implements SQLCloseable {
     }
 
     public MutationState(MutationState mutationState) {
-        this(mutationState.maxSize, mutationState.maxSizeBytes, mutationState.connection, true, mutationState
+        this(mutationState, mutationState.connection);
+    }
+
+    public MutationState(MutationState mutationState, PhoenixConnection connection) {
+        this(mutationState.maxSize, mutationState.maxSizeBytes, connection, true, mutationState
                 .getPhoenixTransactionContext());
     }
 
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
index d668758..988a7c6 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
@@ -372,7 +372,7 @@ public class PhoenixConnection implements Connection, MetaDataMutated, SQLClosea
         this.isRequestLevelMetricsEnabled = JDBCUtil.isCollectingRequestLevelMetricsEnabled(url, info,
                 this.services.getProps());
         this.mutationState = mutationState == null ? newMutationState(maxSize,
-                maxSizeBytes) : new MutationState(mutationState);
+                maxSizeBytes) : new MutationState(mutationState, this);
         this.metaData = metaData;
         this.metaData.pruneTables(pruner);
         this.metaData.pruneFunctions(pruner);



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client should remove parent->child links from SYSTEM.CATALOG.

2019-10-05 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new d99b57c  PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client 
should remove parent->child links from SYSTEM.CATALOG.
d99b57c is described below

commit d99b57cef03ff31d5b0f4a9de775aa8bf3f5b850
Author: Lars Hofhansl 
AuthorDate: Sat Oct 5 13:39:14 2019 -0700

PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client should remove 
parent->child links from SYSTEM.CATALOG.
---
 .../src/main/java/org/apache/phoenix/execute/MutationState.java | 6 +-
 .../src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java| 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
index 856c6bc55..434d1f7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
@@ -161,7 +161,11 @@ public class MutationState implements SQLCloseable {
     }
 
     public MutationState(MutationState mutationState) {
-        this(mutationState.maxSize, mutationState.maxSizeBytes, mutationState.connection, true, mutationState
+        this(mutationState, mutationState.connection);
+    }
+
+    public MutationState(MutationState mutationState, PhoenixConnection connection) {
+        this(mutationState.maxSize, mutationState.maxSizeBytes, connection, true, mutationState
                 .getPhoenixTransactionContext());
     }
 
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
index d668758..988a7c6 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
@@ -372,7 +372,7 @@ public class PhoenixConnection implements Connection, MetaDataMutated, SQLClosea
         this.isRequestLevelMetricsEnabled = JDBCUtil.isCollectingRequestLevelMetricsEnabled(url, info,
                 this.services.getProps());
         this.mutationState = mutationState == null ? newMutationState(maxSize,
-                maxSizeBytes) : new MutationState(mutationState);
+                maxSizeBytes) : new MutationState(mutationState, this);
         this.metaData = metaData;
         this.metaData.pruneTables(pruner);
         this.metaData.pruneFunctions(pruner);



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client should remove parent->child links from SYSTEM.CATALOG.

2019-10-05 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new a9dd41c  PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client 
should remove parent->child links from SYSTEM.CATALOG.
a9dd41c is described below

commit a9dd41c3744f549f9288aa5b9c5b9ffda20b0102
Author: Lars Hofhansl 
AuthorDate: Sat Oct 5 13:39:14 2019 -0700

PHOENIX-5499 Upgrading from 4.14.3 client to 4.15.0 client should remove 
parent->child links from SYSTEM.CATALOG.
---
 .../src/main/java/org/apache/phoenix/execute/MutationState.java | 6 +-
 .../src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java| 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
index 856c6bc55..434d1f7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
@@ -161,7 +161,11 @@ public class MutationState implements SQLCloseable {
     }
 
     public MutationState(MutationState mutationState) {
-        this(mutationState.maxSize, mutationState.maxSizeBytes, mutationState.connection, true, mutationState
+        this(mutationState, mutationState.connection);
+    }
+
+    public MutationState(MutationState mutationState, PhoenixConnection connection) {
+        this(mutationState.maxSize, mutationState.maxSizeBytes, connection, true, mutationState
                 .getPhoenixTransactionContext());
     }
 
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
index d668758..988a7c6 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
@@ -372,7 +372,7 @@ public class PhoenixConnection implements Connection, MetaDataMutated, SQLClosea
         this.isRequestLevelMetricsEnabled = JDBCUtil.isCollectingRequestLevelMetricsEnabled(url, info,
                 this.services.getProps());
         this.mutationState = mutationState == null ? newMutationState(maxSize,
-                maxSizeBytes) : new MutationState(mutationState);
+                maxSizeBytes) : new MutationState(mutationState, this);
         this.metaData = metaData;
         this.metaData.pruneTables(pruner);
         this.metaData.pruneFunctions(pruner);



[phoenix] branch master updated: Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header."

2019-09-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 3bea413  Revert "PHOENIX-5463 Remove AndExpressionTest and 
OrExpressionTest since the author did not add a license header."
3bea413 is described below

commit 3bea4131945943189edfe09fd6019c04552f8a19
Author: Lars Hofhansl 
AuthorDate: Fri Sep 27 11:49:06 2019 -0700

Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since 
the author did not add a license header."

This reverts commit d3e16ae7ab3e4a328d523dd7aa4b0b740109ae7f.
---
 .../phoenix/expression/AndExpressionTest.java  | 314 +
 .../phoenix/expression/OrExpressionTest.java   | 310 
 2 files changed, 624 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
new file mode 100644
index 000..a223a19
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
@@ -0,0 +1,314 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PBaseColumn;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.SortOrder;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
+import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PDataType;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.Collections;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class AndExpressionTest {
+
+private AndExpression createAnd(Expression lhs, Expression rhs) {
+return new AndExpression(Arrays.asList(lhs, rhs));
+}
+
+private AndExpression createAnd(Boolean x, Boolean y) {
+return createAnd(LiteralExpression.newConstant(x), LiteralExpression.newConstant(y));
+}
+
+private void testImmediateSingle(Boolean expected, Boolean lhs, Boolean rhs) {
+AndExpression and = createAnd(lhs, rhs);
+ImmutableBytesWritable out = new ImmutableBytesWritable();
+MultiKeyValueTuple tuple = new MultiKeyValueTuple();
+boolean success = and.evaluate(tuple, out);
+assertTrue(success);
+assertEquals(expected, PBoolean.INSTANCE.toObject(out));
+}
+
+// Evaluating AND when values of both sides are known should immediately succeed
+// and return the same result regardless of order.
+private void testImmediate(Boolean expected, Boolean a, Boolean b) {
+testImmediateSingle(expected, a, b);
+testImmediateSingle(expected, b, a);
+}
+
+private PColumn pcolumn(final String name) {
+return new PBaseColumn() {
+@Override public PName getName() {
+return PNameFactory.newName(name);
+}
+
+@Override public PDataType getDataType() {
+return PBoolean.INSTANCE;
+}
+
+@Override public PName getFamilyName() {
+return PNameFactory.newName(QueryConstants.DEFAULT_COLUMN_FAMILY);
+}
+
+@Override public int getPosition() {
+return 0;
+}
+
+@Override public Integer getArraySize() {
+return null;
+}
+
+@Override public byte[] getViewConstant() {
+return new byte[0];
+}
+
+@Override public boolean isViewReferenced() {
+return false;
+ 

[phoenix] branch 4.x-HBase-1.3 updated: Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header."

2019-09-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new c554d7d  Revert "PHOENIX-5463 Remove AndExpressionTest and 
OrExpressionTest since the author did not add a license header."
c554d7d is described below

commit c554d7da306115ac7a40cd55763fb62fcb7c0166
Author: Lars Hofhansl 
AuthorDate: Fri Sep 27 11:48:00 2019 -0700

Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since 
the author did not add a license header."

This reverts commit 9694bbb241117edb8b3711cfaee5e22fa57ca14c.
---
 .../phoenix/expression/AndExpressionTest.java  | 314 +
 .../phoenix/expression/OrExpressionTest.java   | 310 
 2 files changed, 624 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
new file mode 100644
index 000..a223a19
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
@@ -0,0 +1,314 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PBaseColumn;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.SortOrder;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
+import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PDataType;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.Collections;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class AndExpressionTest {
+
+private AndExpression createAnd(Expression lhs, Expression rhs) {
+return new AndExpression(Arrays.asList(lhs, rhs));
+}
+
+private AndExpression createAnd(Boolean x, Boolean y) {
+return createAnd(LiteralExpression.newConstant(x), LiteralExpression.newConstant(y));
+}
+
+private void testImmediateSingle(Boolean expected, Boolean lhs, Boolean rhs) {
+AndExpression and = createAnd(lhs, rhs);
+ImmutableBytesWritable out = new ImmutableBytesWritable();
+MultiKeyValueTuple tuple = new MultiKeyValueTuple();
+boolean success = and.evaluate(tuple, out);
+assertTrue(success);
+assertEquals(expected, PBoolean.INSTANCE.toObject(out));
+}
+
+// Evaluating AND when values of both sides are known should immediately succeed
+// and return the same result regardless of order.
+private void testImmediate(Boolean expected, Boolean a, Boolean b) {
+testImmediateSingle(expected, a, b);
+testImmediateSingle(expected, b, a);
+}
+
+private PColumn pcolumn(final String name) {
+return new PBaseColumn() {
+@Override public PName getName() {
+return PNameFactory.newName(name);
+}
+
+@Override public PDataType getDataType() {
+return PBoolean.INSTANCE;
+}
+
+@Override public PName getFamilyName() {
+return PNameFactory.newName(QueryConstants.DEFAULT_COLUMN_FAMILY);
+}
+
+@Override public int getPosition() {
+return 0;
+}
+
+@Override public Integer getArraySize() {
+return null;
+}
+
+@Override public byte[] getViewConstant() {
+return new byte[0];
+}
+
+@Override public boolean isViewRefe
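The restored AndExpressionTest above checks that AND over two known operands evaluates immediately and yields the same result in either operand order. As a self-contained illustration of the SQL three-valued logic those tests encode (this sketch does not use the Phoenix Expression classes; the class and method names here are invented for the example, with a null Boolean modeling SQL NULL):

```java
import java.util.Objects;

// Minimal sketch of SQL three-valued AND, mirroring the property that
// AndExpressionTest verifies: the result is symmetric in its operands,
// FALSE dominates, and NULL (unknown) propagates otherwise.
public class ThreeValuedAnd {
    // Returns FALSE if either side is FALSE, null (unknown) if either side
    // is null, and TRUE only when both sides are TRUE.
    static Boolean and(Boolean a, Boolean b) {
        if (Boolean.FALSE.equals(a) || Boolean.FALSE.equals(b)) {
            return Boolean.FALSE;
        }
        if (a == null || b == null) {
            return null;
        }
        return Boolean.TRUE;
    }

    public static void main(String[] args) {
        Boolean[] vals = { Boolean.TRUE, Boolean.FALSE, null };
        // Commutativity over the full truth table, as testImmediate does by
        // evaluating both operand orders.
        for (Boolean a : vals) {
            for (Boolean b : vals) {
                if (!Objects.equals(and(a, b), and(b, a))) {
                    throw new AssertionError("AND not symmetric for " + a + ", " + b);
                }
            }
        }
        System.out.println(and(true, null));  // prints "null"
        System.out.println(and(false, null)); // prints "false"
    }
}
```

The FALSE-dominates rule is what lets AND short-circuit: once one side is known FALSE, the other side's value (even unknown) cannot change the result.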

[phoenix] branch 4.x-HBase-1.4 updated: Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header."

2019-09-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 334b320  Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header."
334b320 is described below

commit 334b320c1d4b132ad08cf8af58b56a54f16f293f
Author: Lars Hofhansl 
AuthorDate: Fri Sep 27 11:48:31 2019 -0700

Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header."

This reverts commit a4159c1ba013100a6891303183b4ca0e7d577e6a.
---
 .../phoenix/expression/AndExpressionTest.java  | 314 +
 .../phoenix/expression/OrExpressionTest.java   | 310 
 2 files changed, 624 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
new file mode 100644
index 000..a223a19
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
@@ -0,0 +1,314 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PBaseColumn;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.SortOrder;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
+import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PDataType;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.Collections;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class AndExpressionTest {
+
+private AndExpression createAnd(Expression lhs, Expression rhs) {
+return new AndExpression(Arrays.asList(lhs, rhs));
+}
+
+private AndExpression createAnd(Boolean x, Boolean y) {
+return createAnd(LiteralExpression.newConstant(x), LiteralExpression.newConstant(y));
+}
+
+private void testImmediateSingle(Boolean expected, Boolean lhs, Boolean rhs) {
+AndExpression and = createAnd(lhs, rhs);
+ImmutableBytesWritable out = new ImmutableBytesWritable();
+MultiKeyValueTuple tuple = new MultiKeyValueTuple();
+boolean success = and.evaluate(tuple, out);
+assertTrue(success);
+assertEquals(expected, PBoolean.INSTANCE.toObject(out));
+}
+
+// Evaluating AND when values of both sides are known should immediately succeed
+// and return the same result regardless of order.
+private void testImmediate(Boolean expected, Boolean a, Boolean b) {
+testImmediateSingle(expected, a, b);
+testImmediateSingle(expected, b, a);
+}
+
+private PColumn pcolumn(final String name) {
+return new PBaseColumn() {
+@Override public PName getName() {
+return PNameFactory.newName(name);
+}
+
+@Override public PDataType getDataType() {
+return PBoolean.INSTANCE;
+}
+
+@Override public PName getFamilyName() {
+return PNameFactory.newName(QueryConstants.DEFAULT_COLUMN_FAMILY);
+}
+
+@Override public int getPosition() {
+return 0;
+}
+
+@Override public Integer getArraySize() {
+return null;
+}
+
+@Override public byte[] getViewConstant() {
+return new byte[0];
+}
+
+@Override public boolean isViewRefe

[phoenix] branch 4.x-HBase-1.5 updated: Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header."

2019-09-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 7de8f5f  Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header."
7de8f5f is described below

commit 7de8f5f0c95c48172beff6920134087003a4a419
Author: Lars Hofhansl 
AuthorDate: Fri Sep 27 11:47:04 2019 -0700

Revert "PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header."

This reverts commit 82ee8d5917da6a56ea124f281dbbecf9fed4571d.
---
 .../phoenix/expression/AndExpressionTest.java  | 314 +
 .../phoenix/expression/OrExpressionTest.java   | 310 
 2 files changed, 624 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
new file mode 100644
index 000..a223a19
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
@@ -0,0 +1,314 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PBaseColumn;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.SortOrder;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
+import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PDataType;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.Collections;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class AndExpressionTest {
+
+private AndExpression createAnd(Expression lhs, Expression rhs) {
+return new AndExpression(Arrays.asList(lhs, rhs));
+}
+
+private AndExpression createAnd(Boolean x, Boolean y) {
+return createAnd(LiteralExpression.newConstant(x), LiteralExpression.newConstant(y));
+}
+
+private void testImmediateSingle(Boolean expected, Boolean lhs, Boolean rhs) {
+AndExpression and = createAnd(lhs, rhs);
+ImmutableBytesWritable out = new ImmutableBytesWritable();
+MultiKeyValueTuple tuple = new MultiKeyValueTuple();
+boolean success = and.evaluate(tuple, out);
+assertTrue(success);
+assertEquals(expected, PBoolean.INSTANCE.toObject(out));
+}
+
+// Evaluating AND when values of both sides are known should immediately succeed
+// and return the same result regardless of order.
+private void testImmediate(Boolean expected, Boolean a, Boolean b) {
+testImmediateSingle(expected, a, b);
+testImmediateSingle(expected, b, a);
+}
+
+private PColumn pcolumn(final String name) {
+return new PBaseColumn() {
+@Override public PName getName() {
+return PNameFactory.newName(name);
+}
+
+@Override public PDataType getDataType() {
+return PBoolean.INSTANCE;
+}
+
+@Override public PName getFamilyName() {
+return PNameFactory.newName(QueryConstants.DEFAULT_COLUMN_FAMILY);
+}
+
+@Override public int getPosition() {
+return 0;
+}
+
+@Override public Integer getArraySize() {
+return null;
+}
+
+@Override public byte[] getViewConstant() {
+return new byte[0];
+}
+
+@Override public boolean isViewRefe

[phoenix] branch master updated: PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.

2019-09-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new d3e16ae  PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.
d3e16ae is described below

commit d3e16ae7ab3e4a328d523dd7aa4b0b740109ae7f
Author: Lars Hofhansl 
AuthorDate: Wed Sep 25 10:32:29 2019 -0700

PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.
---
 .../phoenix/expression/AndExpressionTest.java  | 314 -
 .../phoenix/expression/OrExpressionTest.java   | 310 
 2 files changed, 624 deletions(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
deleted file mode 100644
index a223a19..000
--- a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
+++ /dev/null
@@ -1,314 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.phoenix.expression;
-
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.query.QueryConstants;
-import org.apache.phoenix.schema.PBaseColumn;
-import org.apache.phoenix.schema.PColumn;
-import org.apache.phoenix.schema.PName;
-import org.apache.phoenix.schema.PNameFactory;
-import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
-import org.apache.phoenix.schema.types.PBoolean;
-import org.apache.phoenix.schema.types.PDataType;
-import org.junit.Test;
-
-import java.util.Arrays;
-import java.util.Collections;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-public class AndExpressionTest {
-
-private AndExpression createAnd(Expression lhs, Expression rhs) {
-return new AndExpression(Arrays.asList(lhs, rhs));
-}
-
-private AndExpression createAnd(Boolean x, Boolean y) {
-return createAnd(LiteralExpression.newConstant(x), LiteralExpression.newConstant(y));
-}
-
-private void testImmediateSingle(Boolean expected, Boolean lhs, Boolean rhs) {
-AndExpression and = createAnd(lhs, rhs);
-ImmutableBytesWritable out = new ImmutableBytesWritable();
-MultiKeyValueTuple tuple = new MultiKeyValueTuple();
-boolean success = and.evaluate(tuple, out);
-assertTrue(success);
-assertEquals(expected, PBoolean.INSTANCE.toObject(out));
-}
-
-// Evaluating AND when values of both sides are known should immediately succeed
-// and return the same result regardless of order.
-private void testImmediate(Boolean expected, Boolean a, Boolean b) {
-testImmediateSingle(expected, a, b);
-testImmediateSingle(expected, b, a);
-}
-
-private PColumn pcolumn(final String name) {
-return new PBaseColumn() {
-@Override public PName getName() {
-return PNameFactory.newName(name);
-}
-
-@Override public PDataType getDataType() {
-return PBoolean.INSTANCE;
-}
-
-@Override public PName getFamilyName() {
-return PNameFactory.newName(QueryConstants.DEFAULT_COLUMN_FAMILY);
-}
-
-@Override public int getPosition() {
-return 0;
-}
-
-@Override public Integer getArraySize() {
-return null;
-}
-
-@Override public byte[] getViewConstant() {
-return new byte[0];
-}
-
-@Override public boolean isViewReferenced() {
-return false;
-}
-
-@Override public String getExpressionStr() {
-   

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.

2019-09-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new a4159c1  PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.
a4159c1 is described below

commit a4159c1ba013100a6891303183b4ca0e7d577e6a
Author: Lars Hofhansl 
AuthorDate: Wed Sep 25 10:32:29 2019 -0700

PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.
---
 .../phoenix/expression/AndExpressionTest.java  | 314 -
 .../phoenix/expression/OrExpressionTest.java   | 310 
 2 files changed, 624 deletions(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
deleted file mode 100644
index a223a19..000
--- a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
+++ /dev/null
@@ -1,314 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.phoenix.expression;
-
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.query.QueryConstants;
-import org.apache.phoenix.schema.PBaseColumn;
-import org.apache.phoenix.schema.PColumn;
-import org.apache.phoenix.schema.PName;
-import org.apache.phoenix.schema.PNameFactory;
-import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
-import org.apache.phoenix.schema.types.PBoolean;
-import org.apache.phoenix.schema.types.PDataType;
-import org.junit.Test;
-
-import java.util.Arrays;
-import java.util.Collections;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-public class AndExpressionTest {
-
-private AndExpression createAnd(Expression lhs, Expression rhs) {
-return new AndExpression(Arrays.asList(lhs, rhs));
-}
-
-private AndExpression createAnd(Boolean x, Boolean y) {
-return createAnd(LiteralExpression.newConstant(x), LiteralExpression.newConstant(y));
-}
-
-private void testImmediateSingle(Boolean expected, Boolean lhs, Boolean rhs) {
-AndExpression and = createAnd(lhs, rhs);
-ImmutableBytesWritable out = new ImmutableBytesWritable();
-MultiKeyValueTuple tuple = new MultiKeyValueTuple();
-boolean success = and.evaluate(tuple, out);
-assertTrue(success);
-assertEquals(expected, PBoolean.INSTANCE.toObject(out));
-}
-
-// Evaluating AND when values of both sides are known should immediately succeed
-// and return the same result regardless of order.
-private void testImmediate(Boolean expected, Boolean a, Boolean b) {
-testImmediateSingle(expected, a, b);
-testImmediateSingle(expected, b, a);
-}
-
-private PColumn pcolumn(final String name) {
-return new PBaseColumn() {
-@Override public PName getName() {
-return PNameFactory.newName(name);
-}
-
-@Override public PDataType getDataType() {
-return PBoolean.INSTANCE;
-}
-
-@Override public PName getFamilyName() {
-return PNameFactory.newName(QueryConstants.DEFAULT_COLUMN_FAMILY);
-}
-
-@Override public int getPosition() {
-return 0;
-}
-
-@Override public Integer getArraySize() {
-return null;
-}
-
-@Override public byte[] getViewConstant() {
-return new byte[0];
-}
-
-@Override public boolean isViewReferenced() {
-return false;
-}
-
-@Override public String getExpressionStr() {
-   

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.

2019-09-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 9694bbb  PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.
9694bbb is described below

commit 9694bbb241117edb8b3711cfaee5e22fa57ca14c
Author: Lars Hofhansl 
AuthorDate: Wed Sep 25 10:32:29 2019 -0700

PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.
---
 .../phoenix/expression/AndExpressionTest.java  | 314 -
 .../phoenix/expression/OrExpressionTest.java   | 310 
 2 files changed, 624 deletions(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
deleted file mode 100644
index a223a19..000
--- a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
+++ /dev/null
@@ -1,314 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.phoenix.expression;
-
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.query.QueryConstants;
-import org.apache.phoenix.schema.PBaseColumn;
-import org.apache.phoenix.schema.PColumn;
-import org.apache.phoenix.schema.PName;
-import org.apache.phoenix.schema.PNameFactory;
-import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
-import org.apache.phoenix.schema.types.PBoolean;
-import org.apache.phoenix.schema.types.PDataType;
-import org.junit.Test;
-
-import java.util.Arrays;
-import java.util.Collections;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-public class AndExpressionTest {
-
-private AndExpression createAnd(Expression lhs, Expression rhs) {
-return new AndExpression(Arrays.asList(lhs, rhs));
-}
-
-private AndExpression createAnd(Boolean x, Boolean y) {
-return createAnd(LiteralExpression.newConstant(x), LiteralExpression.newConstant(y));
-}
-
-private void testImmediateSingle(Boolean expected, Boolean lhs, Boolean rhs) {
-AndExpression and = createAnd(lhs, rhs);
-ImmutableBytesWritable out = new ImmutableBytesWritable();
-MultiKeyValueTuple tuple = new MultiKeyValueTuple();
-boolean success = and.evaluate(tuple, out);
-assertTrue(success);
-assertEquals(expected, PBoolean.INSTANCE.toObject(out));
-}
-
-// Evaluating AND when values of both sides are known should immediately succeed
-// and return the same result regardless of order.
-private void testImmediate(Boolean expected, Boolean a, Boolean b) {
-testImmediateSingle(expected, a, b);
-testImmediateSingle(expected, b, a);
-}
-
-private PColumn pcolumn(final String name) {
-return new PBaseColumn() {
-@Override public PName getName() {
-return PNameFactory.newName(name);
-}
-
-@Override public PDataType getDataType() {
-return PBoolean.INSTANCE;
-}
-
-@Override public PName getFamilyName() {
-return PNameFactory.newName(QueryConstants.DEFAULT_COLUMN_FAMILY);
-}
-
-@Override public int getPosition() {
-return 0;
-}
-
-@Override public Integer getArraySize() {
-return null;
-}
-
-@Override public byte[] getViewConstant() {
-return new byte[0];
-}
-
-@Override public boolean isViewReferenced() {
-return false;
-}
-
-@Override public String getExpressionStr() {
-   

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.

2019-09-25 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 82ee8d5  PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.
82ee8d5 is described below

commit 82ee8d5917da6a56ea124f281dbbecf9fed4571d
Author: Lars Hofhansl 
AuthorDate: Wed Sep 25 10:32:29 2019 -0700

PHOENIX-5463 Remove AndExpressionTest and OrExpressionTest since the author did not add a license header.
---
 .../phoenix/expression/AndExpressionTest.java  | 314 -
 .../phoenix/expression/OrExpressionTest.java   | 310 
 2 files changed, 624 deletions(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
deleted file mode 100644
index a223a19..000
--- a/phoenix-core/src/test/java/org/apache/phoenix/expression/AndExpressionTest.java
+++ /dev/null
@@ -1,314 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.phoenix.expression;
-
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.query.QueryConstants;
-import org.apache.phoenix.schema.PBaseColumn;
-import org.apache.phoenix.schema.PColumn;
-import org.apache.phoenix.schema.PName;
-import org.apache.phoenix.schema.PNameFactory;
-import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
-import org.apache.phoenix.schema.types.PBoolean;
-import org.apache.phoenix.schema.types.PDataType;
-import org.junit.Test;
-
-import java.util.Arrays;
-import java.util.Collections;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-public class AndExpressionTest {
-
-private AndExpression createAnd(Expression lhs, Expression rhs) {
-return new AndExpression(Arrays.asList(lhs, rhs));
-}
-
-private AndExpression createAnd(Boolean x, Boolean y) {
-return createAnd(LiteralExpression.newConstant(x), LiteralExpression.newConstant(y));
-}
-
-private void testImmediateSingle(Boolean expected, Boolean lhs, Boolean rhs) {
-AndExpression and = createAnd(lhs, rhs);
-ImmutableBytesWritable out = new ImmutableBytesWritable();
-MultiKeyValueTuple tuple = new MultiKeyValueTuple();
-boolean success = and.evaluate(tuple, out);
-assertTrue(success);
-assertEquals(expected, PBoolean.INSTANCE.toObject(out));
-}
-
-// Evaluating AND when values of both sides are known should immediately succeed
-// and return the same result regardless of order.
-private void testImmediate(Boolean expected, Boolean a, Boolean b) {
-testImmediateSingle(expected, a, b);
-testImmediateSingle(expected, b, a);
-}
-
-private PColumn pcolumn(final String name) {
-return new PBaseColumn() {
-@Override public PName getName() {
-return PNameFactory.newName(name);
-}
-
-@Override public PDataType getDataType() {
-return PBoolean.INSTANCE;
-}
-
-@Override public PName getFamilyName() {
-return PNameFactory.newName(QueryConstants.DEFAULT_COLUMN_FAMILY);
-}
-
-@Override public int getPosition() {
-return 0;
-}
-
-@Override public Integer getArraySize() {
-return null;
-}
-
-@Override public byte[] getViewConstant() {
-return new byte[0];
-}
-
-@Override public boolean isViewReferenced() {
-return false;
-}
-
-@Override public String getExpressionStr() {
-   

[phoenix] branch master updated: PHOENIX-5486 Projections from local indexes return garbage.

2019-09-20 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 4e4113e  PHOENIX-5486 Projections from local indexes return garbage.
4e4113e is described below

commit 4e4113e2be4f6a57600678173ea649e97a342a79
Author: Lars Hofhansl 
AuthorDate: Fri Sep 20 12:54:52 2019 -0700

PHOENIX-5486 Projections from local indexes return garbage.
---
 .../src/main/java/org/apache/phoenix/iterate/ExplainTable.java| 2 +-
 .../src/main/java/org/apache/phoenix/schema/MetaDataClient.java   | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
index 2671044..e53b084 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
@@ -215,7 +215,7 @@ public abstract class ExplainTable {
 private Long getViewIndexValue(PDataType type, byte[] range) {
 boolean useLongViewIndex = MetaDataUtil.getViewIndexIdDataType().equals(type);
 Object s = type.toObject(range);
-return (useLongViewIndex ? (Long) s : (Short) s) - (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE);
+return (useLongViewIndex ? (Long) s : (Short) s) + (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE) + 2;
 }
 
 private static class RowKeyValueIterator implements Iterator {
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 8a794ac..b28d404 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -1649,7 +1649,7 @@ public class MetaDataClient {
 PrimaryKeyConstraint pk = FACTORY.primaryKey(null, allPkColumns);
 tableProps.put(MetaDataUtil.DATA_TABLE_NAME_PROP_NAME, dataTable.getName().getString());
 CreateTableStatement tableStatement = FACTORY.createTable(indexTableName, statement.getProps(), columnDefs, pk, statement.getSplitNodes(), PTableType.INDEX, statement.ifNotExists(), null, null, statement.getBindCount(), null);
-table = createTableInternal(tableStatement, splits, dataTable, null, null, MetaDataUtil.getViewIndexIdDataType(),null, null, allocateIndexId, statement.getIndexType(), asyncCreatedDate, tableProps, commonFamilyProps);
+table = createTableInternal(tableStatement, splits, dataTable, null, null, getViewIndexDataType() ,null, null, allocateIndexId, statement.getIndexType(), asyncCreatedDate, tableProps, commonFamilyProps);
 }
 finally {
 deleteMutexCells(physicalSchemaName, physicalTableName, acquiredColumnMutexSet);
@@ -2825,7 +2825,7 @@ public class MetaDataClient {
 } else {
 tableUpsert.setBoolean(28, useStatsForParallelizationProp);
 }
-tableUpsert.setInt(29, Types.BIGINT);
+tableUpsert.setInt(29, viewIndexIdType.getSqlType());
 tableUpsert.execute();
 
 if (asyncCreatedDate != null) {



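The ExplainTable hunk above flips the bias used to turn a stored view-index id back into a readable value. Assuming ids are allocated upward from the signed type's MIN_VALUE and shown 1-based (an inference from the patch, not stated in the email), the corrected `stored + MAX_VALUE + 2` maps the first allocated short id to 1, where the replaced `stored - MAX_VALUE` produced a large negative number. A small sketch of the arithmetic (the class and method names here are illustrative, not the Phoenix API):

```java
// Illustrates the bias arithmetic behind the PHOENIX-5486 ExplainTable fix.
// Assumption (not stated in the email): view-index ids are allocated upward
// from the signed type's MIN_VALUE, and EXPLAIN should display them 1-based.
public class ViewIndexBias {
    // Corrected mapping, mirroring the patched line: stored + MAX_VALUE + 2.
    static long display(long stored, boolean useLongViewIndex) {
        long max = useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE;
        return stored + max + 2;
    }

    public static void main(String[] args) {
        // First allocated short id, Short.MIN_VALUE (-32768), maps to 1.
        System.out.println(display(Short.MIN_VALUE, false));     // prints "1"
        // The next id maps to 2, and so on.
        System.out.println(display(Short.MIN_VALUE + 1, false)); // prints "2"
        // The replaced formula (stored - MAX_VALUE) gave garbage instead:
        System.out.println((long) Short.MIN_VALUE - Short.MAX_VALUE); // prints "-65535"
    }
}
```

For the long-typed case the same identity holds without overflow trouble: `Long.MIN_VALUE + Long.MAX_VALUE` is exactly -1, so adding 2 again yields 1 for the first id.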
[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5486 Projections from local indexes return garbage.

2019-09-20 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 8404666  PHOENIX-5486 Projections from local indexes return garbage.
8404666 is described below

commit 84046661905723107bd27b91d55317249cf0e703
Author: Lars Hofhansl 
AuthorDate: Fri Sep 20 12:54:52 2019 -0700

PHOENIX-5486 Projections from local indexes return garbage.
---
 .../src/main/java/org/apache/phoenix/iterate/ExplainTable.java| 2 +-
 .../src/main/java/org/apache/phoenix/schema/MetaDataClient.java   | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
index 2671044..e53b084 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
@@ -215,7 +215,7 @@ public abstract class ExplainTable {
 private Long getViewIndexValue(PDataType type, byte[] range) {
 boolean useLongViewIndex = MetaDataUtil.getViewIndexIdDataType().equals(type);
 Object s = type.toObject(range);
-return (useLongViewIndex ? (Long) s : (Short) s) - (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE);
+return (useLongViewIndex ? (Long) s : (Short) s) + (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE) + 2;
 }
 
 private static class RowKeyValueIterator implements Iterator {
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 3bc7d53..5ae53bb 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -1648,7 +1648,7 @@ public class MetaDataClient {
 PrimaryKeyConstraint pk = FACTORY.primaryKey(null, allPkColumns);
 tableProps.put(MetaDataUtil.DATA_TABLE_NAME_PROP_NAME, dataTable.getName().getString());
 CreateTableStatement tableStatement = FACTORY.createTable(indexTableName, statement.getProps(), columnDefs, pk, statement.getSplitNodes(), PTableType.INDEX, statement.ifNotExists(), null, null, statement.getBindCount(), null);
-table = createTableInternal(tableStatement, splits, dataTable, null, null, MetaDataUtil.getViewIndexIdDataType(),null, null, allocateIndexId, statement.getIndexType(), asyncCreatedDate, tableProps, commonFamilyProps);
+table = createTableInternal(tableStatement, splits, dataTable, null, null, getViewIndexDataType() ,null, null, allocateIndexId, statement.getIndexType(), asyncCreatedDate, tableProps, commonFamilyProps);
 }
 finally {
 deleteMutexCells(physicalSchemaName, physicalTableName, acquiredColumnMutexSet);
@@ -2824,7 +2824,7 @@ public class MetaDataClient {
 } else {
 tableUpsert.setBoolean(28, useStatsForParallelizationProp);
 }
-tableUpsert.setInt(29, Types.BIGINT);
+tableUpsert.setInt(29, viewIndexIdType.getSqlType());
 tableUpsert.execute();
 
 if (asyncCreatedDate != null) {


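The one-character sign fix in `getViewIndexValue` above can be sanity-checked with a small standalone sketch. The method name `displayedViewIndexValue` is mine; the arithmetic mirrors the corrected `+` line, assuming ids are allocated upward from the id type's MIN_VALUE, so that `id + MAX_VALUE + 2` equals `id - MIN_VALUE + 1`:

```java
public class ViewIndexValueDemo {
    // Mirrors the corrected line in ExplainTable#getViewIndexValue for the
    // SMALLINT case: the displayed value is id + MAX_VALUE + 2.
    static long displayedViewIndexValue(boolean useLongViewIndex, long id) {
        return id + (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE) + 2;
    }

    public static void main(String[] args) {
        // The first SMALLINT id (Short.MIN_VALUE) displays as 1, the next as 2.
        System.out.println(displayedViewIndexValue(false, Short.MIN_VALUE));     // 1
        System.out.println(displayedViewIndexValue(false, Short.MIN_VALUE + 1)); // 2
    }
}
```

With the old `-` the first allocated id would have been shifted by roughly twice MAX_VALUE, which matches the "return garbage" symptom in the subject line.
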

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5486 Projections from local indexes return garbage.

2019-09-20 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 4b9fee2  PHOENIX-5486 Projections from local indexes return garbage.
4b9fee2 is described below

commit 4b9fee2dcf3d99e056c272d3590117fcb9cbb3c4
Author: Lars Hofhansl 
AuthorDate: Fri Sep 20 12:54:52 2019 -0700

PHOENIX-5486 Projections from local indexes return garbage.
---
 .../src/main/java/org/apache/phoenix/iterate/ExplainTable.java| 2 +-
 .../src/main/java/org/apache/phoenix/schema/MetaDataClient.java   | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
index 2671044..e53b084 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
@@ -215,7 +215,7 @@ public abstract class ExplainTable {
 private Long getViewIndexValue(PDataType type, byte[] range) {
 boolean useLongViewIndex = MetaDataUtil.getViewIndexIdDataType().equals(type);
 Object s = type.toObject(range);
-return (useLongViewIndex ? (Long) s : (Short) s) - (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE);
+return (useLongViewIndex ? (Long) s : (Short) s) + (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE) + 2;
 }
 
 private static class RowKeyValueIterator implements Iterator {
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 70e68e3..04fdfec 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -1662,7 +1662,7 @@ public class MetaDataClient {
 PrimaryKeyConstraint pk = FACTORY.primaryKey(null, allPkColumns);
 tableProps.put(MetaDataUtil.DATA_TABLE_NAME_PROP_NAME, dataTable.getName().getString());
 CreateTableStatement tableStatement = FACTORY.createTable(indexTableName, statement.getProps(), columnDefs, pk, statement.getSplitNodes(), PTableType.INDEX, statement.ifNotExists(), null, null, statement.getBindCount(), null);
-table = createTableInternal(tableStatement, splits, dataTable, null, null, MetaDataUtil.getViewIndexIdDataType(),null, null, allocateIndexId, statement.getIndexType(), asyncCreatedDate, tableProps, commonFamilyProps);
+table = createTableInternal(tableStatement, splits, dataTable, null, null, getViewIndexDataType() ,null, null, allocateIndexId, statement.getIndexType(), asyncCreatedDate, tableProps, commonFamilyProps);
 }
 finally {
 deleteMutexCells(physicalSchemaName, physicalTableName, acquiredColumnMutexSet);
@@ -2838,7 +2838,7 @@ public class MetaDataClient {
 } else {
 tableUpsert.setBoolean(28, useStatsForParallelizationProp);
 }
-tableUpsert.setInt(29, Types.BIGINT);
+tableUpsert.setInt(29, viewIndexIdType.getSqlType());
 tableUpsert.execute();
 
 if (asyncCreatedDate != null) {



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5486 Projections from local indexes return garbage.

2019-09-20 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 9fa0ee1  PHOENIX-5486 Projections from local indexes return garbage.
9fa0ee1 is described below

commit 9fa0ee1c56175641d37a886fa2b309bd7d4b0d80
Author: Lars Hofhansl 
AuthorDate: Fri Sep 20 12:54:52 2019 -0700

PHOENIX-5486 Projections from local indexes return garbage.
---
 .../src/main/java/org/apache/phoenix/iterate/ExplainTable.java| 2 +-
 .../src/main/java/org/apache/phoenix/schema/MetaDataClient.java   | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
index 2671044..e53b084 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
@@ -215,7 +215,7 @@ public abstract class ExplainTable {
 private Long getViewIndexValue(PDataType type, byte[] range) {
 boolean useLongViewIndex = MetaDataUtil.getViewIndexIdDataType().equals(type);
 Object s = type.toObject(range);
-return (useLongViewIndex ? (Long) s : (Short) s) - (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE);
+return (useLongViewIndex ? (Long) s : (Short) s) + (useLongViewIndex ? Long.MAX_VALUE : Short.MAX_VALUE) + 2;
 }
 
 private static class RowKeyValueIterator implements Iterator {
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 3bc7d53..5ae53bb 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -1648,7 +1648,7 @@ public class MetaDataClient {
 PrimaryKeyConstraint pk = FACTORY.primaryKey(null, allPkColumns);
 tableProps.put(MetaDataUtil.DATA_TABLE_NAME_PROP_NAME, dataTable.getName().getString());
 CreateTableStatement tableStatement = FACTORY.createTable(indexTableName, statement.getProps(), columnDefs, pk, statement.getSplitNodes(), PTableType.INDEX, statement.ifNotExists(), null, null, statement.getBindCount(), null);
-table = createTableInternal(tableStatement, splits, dataTable, null, null, MetaDataUtil.getViewIndexIdDataType(),null, null, allocateIndexId, statement.getIndexType(), asyncCreatedDate, tableProps, commonFamilyProps);
+table = createTableInternal(tableStatement, splits, dataTable, null, null, getViewIndexDataType() ,null, null, allocateIndexId, statement.getIndexType(), asyncCreatedDate, tableProps, commonFamilyProps);
 }
 finally {
 deleteMutexCells(physicalSchemaName, physicalTableName, acquiredColumnMutexSet);
@@ -2824,7 +2824,7 @@ public class MetaDataClient {
 } else {
 tableUpsert.setBoolean(28, useStatsForParallelizationProp);
 }
-tableUpsert.setInt(29, Types.BIGINT);
+tableUpsert.setInt(29, viewIndexIdType.getSqlType());
 tableUpsert.execute();
 
 if (asyncCreatedDate != null) {



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5406 addendum: Add missing license header.

2019-09-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new fda8583  PHOENIX-5406 addendum: Add missing license header.
fda8583 is described below

commit fda858375e88a21474f0c2cd9bb8692e5b448f47
Author: Lars Hofhansl 
AuthorDate: Wed Sep 18 09:51:15 2019 -0700

PHOENIX-5406 addendum: Add missing license header.
---
 .../org/apache/phoenix/index/IndexUpgradeToolTest.java  | 17 +
 1 file changed, 17 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
index e985479..d158b0d 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.phoenix.index;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;



[phoenix] branch master updated: PHOENIX-5406 addendum: Add missing license header.

2019-09-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new c43a3e3  PHOENIX-5406 addendum: Add missing license header.
c43a3e3 is described below

commit c43a3e30246127bfd5139b9e556906d6be7a8c0f
Author: Lars Hofhansl 
AuthorDate: Wed Sep 18 09:51:15 2019 -0700

PHOENIX-5406 addendum: Add missing license header.
---
 .../org/apache/phoenix/index/IndexUpgradeToolTest.java  | 17 +
 1 file changed, 17 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
index e985479..d158b0d 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.phoenix.index;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5406 addendum: Add missing license header.

2019-09-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new eb9b936  PHOENIX-5406 addendum: Add missing license header.
eb9b936 is described below

commit eb9b93603cb072af0cb09adb68371d2e9f03d359
Author: Lars Hofhansl 
AuthorDate: Wed Sep 18 09:51:15 2019 -0700

PHOENIX-5406 addendum: Add missing license header.
---
 .../org/apache/phoenix/index/IndexUpgradeToolTest.java  | 17 +
 1 file changed, 17 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
index e985479..d158b0d 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.phoenix.index;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5406 addendum: Add missing license header.

2019-09-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new b30f080  PHOENIX-5406 addendum: Add missing license header.
b30f080 is described below

commit b30f080a5a901fb0ed50194ec0a77f8e3c239a37
Author: Lars Hofhansl 
AuthorDate: Wed Sep 18 09:51:15 2019 -0700

PHOENIX-5406 addendum: Add missing license header.
---
 .../org/apache/phoenix/index/IndexUpgradeToolTest.java  | 17 +
 1 file changed, 17 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
index e985479..d158b0d 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.phoenix.index;
 
 import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;



[phoenix] branch 4.x-HBase-1.5 updated: Revert "PHOENIX-5274: ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated should use HBase APIs that do not require ADMIN permissions for e

2019-09-16 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new bbcc24c  Revert "PHOENIX-5274: ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated should use HBase APIs that do not require ADMIN permissions for existence checks (Use hbaseAdmin listNamespaces API rather than getNamespaceDescriptor)"
bbcc24c is described below

commit bbcc24c5a62700b64409a1d5ea65b9b8fd97d5dc
Author: Lars Hofhansl 
AuthorDate: Mon Sep 16 13:54:15 2019 -0700

Revert "PHOENIX-5274: ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated should use HBase APIs that do not require ADMIN permissions for existence checks (Use hbaseAdmin listNamespaces API rather than getNamespaceDescriptor)"

This reverts commit e859f091aed1a3125a9fae338f28b0566e92844f, due to failing tests.
---
 .../org/apache/phoenix/end2end/CreateSchemaIT.java | 11 +++
 .../org/apache/phoenix/end2end/DropSchemaIT.java   | 14 ++---
 .../SystemCatalogCreationOnConnectionIT.java   |  9 --
 .../phoenix/query/ConnectionQueryServicesImpl.java | 23 ++
 .../java/org/apache/phoenix/util/ServerUtil.java   | 24 ---
 .../org/apache/phoenix/util/ServerUtilTest.java| 36 --
 6 files changed, 38 insertions(+), 79 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
index 6e61c57..8002dc1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
@@ -18,7 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.fail;
 
 import java.sql.Connection;
@@ -33,7 +33,6 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.SchemaAlreadyExistsException;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
-import org.apache.phoenix.util.ServerUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
@@ -55,9 +54,9 @@ public class CreateSchemaIT extends ParallelStatsDisabledIT {
 try (Connection conn = DriverManager.getConnection(getUrl(), props);
 HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
 conn.createStatement().execute(ddl1);
-assertTrue(ServerUtil.isHbaseNamespaceAvailable(admin, schemaName1));
+assertNotNull(admin.getNamespaceDescriptor(schemaName1));
 conn.createStatement().execute(ddl2);
-assertTrue(ServerUtil.isHbaseNamespaceAvailable(admin, schemaName2.toUpperCase()));
+assertNotNull(admin.getNamespaceDescriptor(schemaName2.toUpperCase()));
 }
 // Try creating it again and verify that it throws SchemaAlreadyExistsException
 try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
@@ -96,8 +95,8 @@ public class CreateSchemaIT extends ParallelStatsDisabledIT {
 conn.createStatement().execute("CREATE SCHEMA \"" + SchemaUtil.HBASE_NAMESPACE.toUpperCase() + "\"");
-assertTrue(ServerUtil.isHbaseNamespaceAvailable(admin, SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE.toUpperCase()));
-assertTrue(ServerUtil.isHbaseNamespaceAvailable(admin, SchemaUtil.HBASE_NAMESPACE.toUpperCase()));
+assertNotNull(admin.getNamespaceDescriptor(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE.toUpperCase()));
+assertNotNull(admin.getNamespaceDescriptor(SchemaUtil.HBASE_NAMESPACE.toUpperCase()));
 
 }
 }
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropSchemaIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropSchemaIT.java
index 9dfc829..5c5420c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropSchemaIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropSchemaIT.java
@@ -18,8 +18,8 @@
 package org.apache.phoenix.end2end;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.fail;
-import static org.junit.Assert.assertTrue;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
@@ -30,6 +30,7 @@ import java.util.Map;
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;

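The revert above swaps namespace-existence checks back from `ServerUtil.isHbaseNamespaceAvailable(admin, name)` to `admin.getNamespaceDescriptor(name)`, which requires ADMIN permission and signals absence with `NamespaceNotFoundException`. The reverted approach was, in spirit, a membership test over the listed namespaces. A hedged, dependency-free sketch of that idea (the `List<String>` input stands in for the names returned by an HBase admin's namespace listing; it is not the Phoenix implementation):

```java
import java.util.Arrays;
import java.util.List;

public class NamespaceCheckDemo {
    // Sketch of the list-then-check pattern: no per-namespace descriptor
    // lookup, no exception used for control flow, just membership.
    static boolean isNamespaceAvailable(List<String> namespaceNames, String name) {
        return namespaceNames.contains(name);
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("default", "hbase", "MY_SCHEMA");
        System.out.println(isNamespaceAvailable(names, "MY_SCHEMA")); // true
        System.out.println(isNamespaceAvailable(names, "MISSING"));   // false
    }
}
```

The trade-off the revert undoes: the list-based check avoids needing ADMIN rights for a mere existence test, but the commit message notes it was backed out because of failing tests, not because the idea was wrong.
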
[phoenix] branch master updated: PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.

2019-07-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 4315fd2  PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
4315fd2 is described below

commit 4315fd2a72474630742a7a88cff75ece9e3a591c
Author: Lars Hofhansl 
AuthorDate: Wed Jul 24 18:42:47 2019 -0700

PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 17 
 .../apache/phoenix/index/IndexUpgradeToolTest.java | 48 ++
 2 files changed, 48 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 2cde910..0f71733 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
@@ -281,22 +280,6 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 validate(true);
 }
 
-@Test
-public void testCommandLineParsing() {
-
-String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
-String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
-INPUT_LIST, "-lf", outputFile, "-d"};
-IndexUpgradeTool iut = new IndexUpgradeTool();
-
-CommandLine cmd = iut.parseOptions(args);
-iut.initializeTool(cmd);
-Assert.assertEquals(iut.getDryRun(),true);
-Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
-Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
-Assert.assertEquals(iut.getLogFile(), outputFile);
-}
-
 @After
 public void cleanup() throws SQLException {
 //TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
new file mode 100644
index 000..e985479
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -0,0 +1,48 @@
+package org.apache.phoenix.index;
+
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.phoenix.mapreduce.index.IndexUpgradeTool;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+@RunWith(Parameterized.class)
+public class IndexUpgradeToolTest {
+private static final String INPUT_LIST = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
+private final boolean upgrade;
+
+public IndexUpgradeToolTest(boolean upgrade) {
+this.upgrade = upgrade;
+}
+
+@Test
+public void testCommandLineParsing() {
+
+String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
+String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
+INPUT_LIST, "-lf", outputFile, "-d"};
+IndexUpgradeTool iut = new IndexUpgradeTool();
+
+CommandLine cmd = iut.parseOptions(args);
+iut.initializeTool(cmd);
+Assert.assertEquals(iut.getDryRun(),true);
+Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
+Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
+Assert.assertEquals(iut.getLogFile(), outputFile);
+}
+
+@Parameters(name ="IndexUpgradeToolTest_mutable={1}")
+public static Collection data() {
+return Arrays.asList( false, true);
+}
+
+}


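The commit above moves the command-line-parsing test out of the slow integration test into a plain unit test exercising "-o" (operation), "-tb" (input tables), "-lf" (log file), and "-d" (dry run). A minimal, dependency-free sketch of that option shape (IndexUpgradeTool itself uses Apache Commons CLI; this stand-in parser and its `parse` method are mine, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

public class ArgsParseDemo {
    // "-d" is a boolean flag; the other options each consume one value.
    static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            if (args[i].equals("-d")) {
                opts.put("d", "true");               // flag without a value
            } else if (i + 1 < args.length) {
                opts.put(args[i].substring(1), args[++i]); // option with a value
            }
        }
        return opts;
    }

    public static void main(String[] args) {
        String[] cli = {"-o", "UPGRADE", "-tb", "TEST.MOCK1,TEST1.MOCK2", "-lf", "/tmp/log", "-d"};
        Map<String, String> opts = parse(cli);
        System.out.println(opts.get("o")); // UPGRADE
        System.out.println(opts.get("d")); // true
    }
}
```

Because option handling like this is pure string manipulation, the new IndexUpgradeToolTest can verify it in milliseconds without spinning up the HBase minicluster the IT required.
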

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.

2019-07-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 04e71d9  PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
04e71d9 is described below

commit 04e71d913d8abc4d42f18c57d631308e88fced74
Author: Lars Hofhansl 
AuthorDate: Wed Jul 24 18:42:47 2019 -0700

PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 17 
 .../apache/phoenix/index/IndexUpgradeToolTest.java | 48 ++
 2 files changed, 48 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 24c0f39..ceea647 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
@@ -280,22 +279,6 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 validate(true);
 }
 
-@Test
-public void testCommandLineParsing() {
-
-String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
-String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
-INPUT_LIST, "-lf", outputFile, "-d"};
-IndexUpgradeTool iut = new IndexUpgradeTool();
-
-CommandLine cmd = iut.parseOptions(args);
-iut.initializeTool(cmd);
-Assert.assertEquals(iut.getDryRun(),true);
-Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
-Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
-Assert.assertEquals(iut.getLogFile(), outputFile);
-}
-
 @After
 public void cleanup() throws SQLException {
 //TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
new file mode 100644
index 000..e985479
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -0,0 +1,48 @@
+package org.apache.phoenix.index;
+
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.phoenix.mapreduce.index.IndexUpgradeTool;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+@RunWith(Parameterized.class)
+public class IndexUpgradeToolTest {
+private static final String INPUT_LIST = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
+private final boolean upgrade;
+
+public IndexUpgradeToolTest(boolean upgrade) {
+this.upgrade = upgrade;
+}
+
+@Test
+public void testCommandLineParsing() {
+
+String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
+String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
+INPUT_LIST, "-lf", outputFile, "-d"};
+IndexUpgradeTool iut = new IndexUpgradeTool();
+
+CommandLine cmd = iut.parseOptions(args);
+iut.initializeTool(cmd);
+Assert.assertEquals(iut.getDryRun(),true);
+Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
+Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
+Assert.assertEquals(iut.getLogFile(), outputFile);
+}
+
+@Parameters(name ="IndexUpgradeToolTest_mutable={1}")
+public static Collection data() {
+return Arrays.asList( false, true);
+}
+
+}



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.

2019-07-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new fca930c  PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
fca930c is described below

commit fca930cc05150a25353af45ad372ea2252b3df90
Author: Lars Hofhansl 
AuthorDate: Wed Jul 24 18:42:47 2019 -0700

PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 17 
 .../apache/phoenix/index/IndexUpgradeToolTest.java | 48 ++
 2 files changed, 48 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 24c0f39..ceea647 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
@@ -280,22 +279,6 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 validate(true);
 }
 
-@Test
-public void testCommandLineParsing() {
-
-String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
-String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
-INPUT_LIST, "-lf", outputFile, "-d"};
-IndexUpgradeTool iut = new IndexUpgradeTool();
-
-CommandLine cmd = iut.parseOptions(args);
-iut.initializeTool(cmd);
-Assert.assertEquals(iut.getDryRun(),true);
-Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
-Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
-Assert.assertEquals(iut.getLogFile(), outputFile);
-}
-
 @After
 public void cleanup() throws SQLException {
 //TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
new file mode 100644
index 0000000..e985479
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -0,0 +1,48 @@
+package org.apache.phoenix.index;
+
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.phoenix.mapreduce.index.IndexUpgradeTool;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+@RunWith(Parameterized.class)
+public class IndexUpgradeToolTest {
+private static final String INPUT_LIST = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
+private final boolean upgrade;
+
+public IndexUpgradeToolTest(boolean upgrade) {
+this.upgrade = upgrade;
+}
+
+@Test
+public void testCommandLineParsing() {
+
+String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
+String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
+INPUT_LIST, "-lf", outputFile, "-d"};
+IndexUpgradeTool iut = new IndexUpgradeTool();
+
+CommandLine cmd = iut.parseOptions(args);
+iut.initializeTool(cmd);
+Assert.assertEquals(iut.getDryRun(),true);
+Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
+Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
+Assert.assertEquals(iut.getLogFile(), outputFile);
+}
+
+@Parameters(name ="IndexUpgradeToolTest_mutable={1}")
+public static Collection data() {
+return Arrays.asList( false, true);
+}
+
+}
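For readers unfamiliar with the pattern, the new IndexUpgradeToolTest relies on JUnit's Parameterized runner: the @Parameters factory supplies one constructor argument per run, and every @Test method then executes once per argument. A minimal, dependency-free sketch of that flow (class and method names here are hypothetical stand-ins, not Phoenix code):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the JUnit Parameterized pattern used by IndexUpgradeToolTest,
// without JUnit or Phoenix on the classpath: the runner constructs the test
// class once per element of data(), so each test runs for every parameter.
public class ParamSketch {
    private final boolean upgrade;

    public ParamSketch(boolean upgrade) {
        this.upgrade = upgrade;
    }

    // Stands in for iut.getOperation() in the real test.
    public String operation() {
        return upgrade ? "UPGRADE" : "ROLLBACK";
    }

    // Stands in for the @Parameters factory method.
    public static List<Boolean> data() {
        return Arrays.asList(false, true);
    }

    public static void main(String[] args) {
        // Mimics what the Parameterized runner does with data().
        for (boolean upgrade : data()) {
            System.out.println(new ParamSketch(upgrade).operation());
        }
    }
}
```

Moving the test out of the parameterized IT into this small unit test is what makes the speed-up: the command-line parsing check no longer pays the minicluster setup cost for every parameter combination.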



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.

2019-07-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new bbf3033  PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
bbf3033 is described below

commit bbf3033c45e20df68e9627c30e508615cf954baa
Author: Lars Hofhansl 
AuthorDate: Wed Jul 24 18:42:47 2019 -0700

PHOENIX-5406 Speed up ParameterizedIndexUpgradeToolIT.
---
 .../end2end/ParameterizedIndexUpgradeToolIT.java   | 17 
 .../apache/phoenix/index/IndexUpgradeToolTest.java | 48 ++
 2 files changed, 48 insertions(+), 17 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
index 24c0f39..ceea647 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParameterizedIndexUpgradeToolIT.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.phoenix.hbase.index.IndexRegionObserver;
@@ -280,22 +279,6 @@ public class ParameterizedIndexUpgradeToolIT extends BaseTest {
 validate(true);
 }
 
-@Test
-public void testCommandLineParsing() {
-
-String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
-String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
-INPUT_LIST, "-lf", outputFile, "-d"};
-IndexUpgradeTool iut = new IndexUpgradeTool();
-
-CommandLine cmd = iut.parseOptions(args);
-iut.initializeTool(cmd);
-Assert.assertEquals(iut.getDryRun(),true);
-Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
-Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
-Assert.assertEquals(iut.getLogFile(), outputFile);
-}
-
 @After
 public void cleanup() throws SQLException {
 //TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
new file mode 100644
index 0000000..e985479
--- /dev/null
+++ b/phoenix-core/src/test/java/org/apache/phoenix/index/IndexUpgradeToolTest.java
@@ -0,0 +1,48 @@
+package org.apache.phoenix.index;
+
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.ROLLBACK_OP;
+import static org.apache.phoenix.mapreduce.index.IndexUpgradeTool.UPGRADE_OP;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.UUID;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.phoenix.mapreduce.index.IndexUpgradeTool;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+@RunWith(Parameterized.class)
+public class IndexUpgradeToolTest {
+private static final String INPUT_LIST = "TEST.MOCK1,TEST1.MOCK2,TEST.MOCK3";
+private final boolean upgrade;
+
+public IndexUpgradeToolTest(boolean upgrade) {
+this.upgrade = upgrade;
+}
+
+@Test
+public void testCommandLineParsing() {
+
+String outputFile = "/tmp/index_upgrade_" + UUID.randomUUID().toString();
+String [] args = {"-o", upgrade ? UPGRADE_OP : ROLLBACK_OP, "-tb",
+INPUT_LIST, "-lf", outputFile, "-d"};
+IndexUpgradeTool iut = new IndexUpgradeTool();
+
+CommandLine cmd = iut.parseOptions(args);
+iut.initializeTool(cmd);
+Assert.assertEquals(iut.getDryRun(),true);
+Assert.assertEquals(iut.getInputTables(), INPUT_LIST);
+Assert.assertEquals(iut.getOperation(), upgrade ? UPGRADE_OP : ROLLBACK_OP);
+Assert.assertEquals(iut.getLogFile(), outputFile);
+}
+
+@Parameters(name ="IndexUpgradeToolTest_mutable={1}")
+public static Collection data() {
+return Arrays.asList( false, true);
+}
+
+}



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-4893 Addendum; add missing import.

2019-07-20 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 1ba412c  PHOENIX-4893 Addendum; add missing import.
1ba412c is described below

commit 1ba412c3e6a477f93ac8c9bf43b63be35faee0da
Author: Lars Hofhansl 
AuthorDate: Sat Jul 20 17:19:05 2019 -0700

PHOENIX-4893 Addendum; add missing import.
---
 phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java | 1 +
 1 file changed, 1 insertion(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 9084251..676ddcc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -218,6 +218,7 @@ import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTable.QualifierEncodingScheme.QualifierOutOfRangeException;
 import org.apache.phoenix.schema.PTable.ViewType;
 import org.apache.phoenix.schema.stats.GuidePostsKey;
+import org.apache.phoenix.schema.stats.StatisticsUtil;
 import org.apache.phoenix.schema.task.Task;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDate;



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-4893 Addendum; add missing import.

2019-07-20 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 989f0ec  PHOENIX-4893 Addendum; add missing import.
989f0ec is described below

commit 989f0ec31b93020f3b4ca15437a97063541fda4c
Author: Lars Hofhansl 
AuthorDate: Sat Jul 20 17:19:05 2019 -0700

PHOENIX-4893 Addendum; add missing import.
---
 phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java | 1 +
 1 file changed, 1 insertion(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 9084251..676ddcc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -218,6 +218,7 @@ import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTable.QualifierEncodingScheme.QualifierOutOfRangeException;
 import org.apache.phoenix.schema.PTable.ViewType;
 import org.apache.phoenix.schema.stats.GuidePostsKey;
+import org.apache.phoenix.schema.stats.StatisticsUtil;
 import org.apache.phoenix.schema.task.Task;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDate;



[phoenix] branch master updated: PHOENIX-5290 HashJoinMoreIT is flapping.

2019-07-16 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new b5242ff  PHOENIX-5290 HashJoinMoreIT is flapping.
b5242ff is described below

commit b5242ff75f18696ea29fe4c95fde27a5d557966d
Author: Lars Hofhansl 
AuthorDate: Tue Jul 16 15:45:48 2019 -0700

PHOENIX-5290 HashJoinMoreIT is flapping.
---
 .../phoenix/end2end/RowValueConstructorIT.java | 32 ++
 .../org/apache/phoenix/compile/WhereOptimizer.java |  8 --
 .../org/apache/phoenix/schema/types/PVarchar.java  |  2 +-
 3 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
index fb04261..390d831 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
@@ -51,6 +51,7 @@ import java.sql.Timestamp;
 import java.util.List;
 import java.util.Properties;
 
+import org.apache.phoenix.jdbc.PhoenixPreparedStatement;
 import org.apache.phoenix.util.DateUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -1691,6 +1692,37 @@ public class RowValueConstructorIT extends ParallelStatsDisabledIT {
 }
 }
 
+@Test
+public void testTrailingSeparator() throws Exception {
+Connection conn = null;
+try {
+conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE test2961 (\n"
++ "ACCOUNT_ID VARCHAR NOT NULL,\n" + "BUCKET_ID VARCHAR NOT NULL,\n"
++ "OBJECT_ID VARCHAR NOT NULL,\n" + "OBJECT_VERSION VARCHAR NOT NULL,\n"
++ "LOC VARCHAR,\n"
++ "CONSTRAINT PK PRIMARY KEY (ACCOUNT_ID, BUCKET_ID, OBJECT_ID, OBJECT_VERSION DESC))");
+
+String sql = "SELECT  OBJ.ACCOUNT_ID from  test2961 as OBJ where "
++ "(OBJ.ACCOUNT_ID, OBJ.BUCKET_ID, OBJ.OBJECT_ID, OBJ.OBJECT_VERSION) IN "
++ "((?,?,?,?),(?,?,?,?))";
+
+PhoenixPreparedStatement statement = conn.prepareStatement(sql)
+.unwrap(PhoenixPreparedStatement.class);
+statement.setString(1, new String(new char[] { (char) 3 }));
+statement.setString(2, new String(new char[] { (char) 55 }));
+statement.setString(3, new String(new char[] { (char) 39 }));
+statement.setString(4, new String(new char[] { (char) 0 }));
+statement.setString(5, new String(new char[] { (char) 83 }));
+statement.setString(6, new String(new char[] { (char) 15 }));
+statement.setString(7, new String(new char[] { (char) 55 }));
+statement.setString(8, new String(new char[] { (char) 147 }));
+statement.optimizeQuery(sql);
+} finally {
+conn.close();
+}
+}
+
 private StringBuilder generateQueryToTest(int numItemsInClause, String fullViewName) {
 StringBuilder querySb =
 new StringBuilder("SELECT OBJECT_ID,OBJECT_DATA2,OBJECT_DATA FROM " + fullViewName);
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index b845a09..0964d9d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -347,7 +347,9 @@ public class WhereOptimizer {
 byte[] lowerRange = KeyRange.UNBOUND;
 boolean lowerInclusive = false;
 // Lower range of trailing part of RVC must be true, so we can form a new range to intersect going forward
-if (!range.lowerUnbound() && Bytes.startsWith(range.getLowerRange(), clippedResult.getLowerRange())) {
+if (!range.lowerUnbound()
++&& range.getLowerRange().length > clippedResult.getLowerRange().length
++&& Bytes.startsWith(range.getLowerRange(), clippedResult.getLowerRange())) {
 lowerRange = range.getLowerRange();
 int offset = clippedResult.getLowerRange().length + separatorLength;
 ptr.set(lowerRange, offset, lowerRange.length - offset);
@@ -356,7 +358,9 @@ public class WhereOptimizer {
 }
 byte[] upperRange = KeyRange.UNBOUND;
 boolean upperInclusive = false;
-if (!range.upperUnbound() && Bytes.startsWith(range.getUpperRange(), clippedResult.getUpperRange())) {
+if (!r
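The WhereOptimizer hunk above is the heart of the fix: Bytes.startsWith() alone also matches when the range is byte-for-byte equal to the clipped prefix, in which case the computed offset points past the usable bytes and a bogus trailing slice is formed. A hedged, dependency-free sketch of the added strict-length guard (the helper names here are hypothetical, not Phoenix APIs):

```java
import java.util.Arrays;

// Sketch of the guard added in PHOENIX-5290: only treat `range` as having a
// trailing remainder when it is strictly LONGER than the clipped prefix.
// Without that check, an equal-length range passes the prefix test and the
// offset (prefix length + separator) lands at or beyond the array's end.
public class TrailingSeparatorSketch {
    // Plain-Java stand-in for HBase's Bytes.startsWith().
    static boolean startsWith(byte[] bytes, byte[] prefix) {
        if (bytes.length < prefix.length) {
            return false;
        }
        return Arrays.equals(Arrays.copyOf(bytes, prefix.length), prefix);
    }

    // Returns the part of `range` after the prefix and its separator byte(s),
    // or null when there is no strictly longer trailing part to intersect.
    static byte[] trailingPart(byte[] range, byte[] clipped, int separatorLength) {
        if (range.length > clipped.length && startsWith(range, clipped)) {
            int offset = clipped.length + separatorLength;
            return Arrays.copyOfRange(range, offset, range.length);
        }
        return null;
    }

    public static void main(String[] args) {
        byte[] clipped = {1, 2};
        // Range with a real trailing part after a 1-byte separator: {9}.
        System.out.println(Arrays.toString(trailingPart(new byte[]{1, 2, 0, 9}, clipped, 1)));
        // Equal-length range: the case the length guard now rejects.
        System.out.println(trailingPart(new byte[]{1, 2}, clipped, 1));
    }
}
```

The testTrailingSeparator IT above exercises exactly this edge: its RVC IN values include strings ending in (char) 0, the VARCHAR separator byte, so the clipped key and the full key can collide in length.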

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5290 HashJoinMoreIT is flapping.

2019-07-16 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 9364a94  PHOENIX-5290 HashJoinMoreIT is flapping.
9364a94 is described below

commit 9364a9431d604d072ca78932c39edbc85d5aaf3d
Author: Lars Hofhansl 
AuthorDate: Tue Jul 16 15:42:58 2019 -0700

PHOENIX-5290 HashJoinMoreIT is flapping.
---
 .../phoenix/end2end/RowValueConstructorIT.java | 32 ++
 .../org/apache/phoenix/compile/WhereOptimizer.java |  8 --
 .../org/apache/phoenix/schema/types/PVarchar.java  |  2 +-
 3 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
index fb04261..390d831 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
@@ -51,6 +51,7 @@ import java.sql.Timestamp;
 import java.util.List;
 import java.util.Properties;
 
+import org.apache.phoenix.jdbc.PhoenixPreparedStatement;
 import org.apache.phoenix.util.DateUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -1691,6 +1692,37 @@ public class RowValueConstructorIT extends ParallelStatsDisabledIT {
 }
 }
 
+@Test
+public void testTrailingSeparator() throws Exception {
+Connection conn = null;
+try {
+conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE test2961 (\n"
++ "ACCOUNT_ID VARCHAR NOT NULL,\n" + "BUCKET_ID VARCHAR NOT NULL,\n"
++ "OBJECT_ID VARCHAR NOT NULL,\n" + "OBJECT_VERSION VARCHAR NOT NULL,\n"
++ "LOC VARCHAR,\n"
++ "CONSTRAINT PK PRIMARY KEY (ACCOUNT_ID, BUCKET_ID, OBJECT_ID, OBJECT_VERSION DESC))");
+
+String sql = "SELECT  OBJ.ACCOUNT_ID from  test2961 as OBJ where "
++ "(OBJ.ACCOUNT_ID, OBJ.BUCKET_ID, OBJ.OBJECT_ID, OBJ.OBJECT_VERSION) IN "
++ "((?,?,?,?),(?,?,?,?))";
+
+PhoenixPreparedStatement statement = conn.prepareStatement(sql)
+.unwrap(PhoenixPreparedStatement.class);
+statement.setString(1, new String(new char[] { (char) 3 }));
+statement.setString(2, new String(new char[] { (char) 55 }));
+statement.setString(3, new String(new char[] { (char) 39 }));
+statement.setString(4, new String(new char[] { (char) 0 }));
+statement.setString(5, new String(new char[] { (char) 83 }));
+statement.setString(6, new String(new char[] { (char) 15 }));
+statement.setString(7, new String(new char[] { (char) 55 }));
+statement.setString(8, new String(new char[] { (char) 147 }));
+statement.optimizeQuery(sql);
+} finally {
+conn.close();
+}
+}
+
 private StringBuilder generateQueryToTest(int numItemsInClause, String fullViewName) {
 StringBuilder querySb =
 new StringBuilder("SELECT OBJECT_ID,OBJECT_DATA2,OBJECT_DATA FROM " + fullViewName);
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index b845a09..0964d9d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -347,7 +347,9 @@ public class WhereOptimizer {
 byte[] lowerRange = KeyRange.UNBOUND;
 boolean lowerInclusive = false;
 // Lower range of trailing part of RVC must be true, so we can form a new range to intersect going forward
-if (!range.lowerUnbound() && Bytes.startsWith(range.getLowerRange(), clippedResult.getLowerRange())) {
+if (!range.lowerUnbound()
++&& range.getLowerRange().length > clippedResult.getLowerRange().length
++&& Bytes.startsWith(range.getLowerRange(), clippedResult.getLowerRange())) {
 lowerRange = range.getLowerRange();
 int offset = clippedResult.getLowerRange().length + separatorLength;
 ptr.set(lowerRange, offset, lowerRange.length - offset);
@@ -356,7 +358,9 @@ public class WhereOptimizer {
 }
 byte[] upperRange = KeyRange.UNBOUND;
 boolean upperInclusive = false;
-if (!range.upperUnbound() && Bytes.startsWith(range.getUpperRange(), clippedResult.getUpperRange())) {

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5290 HashJoinMoreIT is flapping.

2019-07-16 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 8599a4d  PHOENIX-5290 HashJoinMoreIT is flapping.
8599a4d is described below

commit 8599a4dc67f9f13a867e23e7c5c5a2cd54a89154
Author: Lars Hofhansl 
AuthorDate: Tue Jul 16 15:42:33 2019 -0700

PHOENIX-5290 HashJoinMoreIT is flapping.
---
 .../phoenix/end2end/RowValueConstructorIT.java | 32 ++
 .../org/apache/phoenix/compile/WhereOptimizer.java |  8 --
 .../org/apache/phoenix/schema/types/PVarchar.java  |  2 +-
 3 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
index fb04261..390d831 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
@@ -51,6 +51,7 @@ import java.sql.Timestamp;
 import java.util.List;
 import java.util.Properties;
 
+import org.apache.phoenix.jdbc.PhoenixPreparedStatement;
 import org.apache.phoenix.util.DateUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -1691,6 +1692,37 @@ public class RowValueConstructorIT extends ParallelStatsDisabledIT {
 }
 }
 
+@Test
+public void testTrailingSeparator() throws Exception {
+Connection conn = null;
+try {
+conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE test2961 (\n"
++ "ACCOUNT_ID VARCHAR NOT NULL,\n" + "BUCKET_ID VARCHAR NOT NULL,\n"
++ "OBJECT_ID VARCHAR NOT NULL,\n" + "OBJECT_VERSION VARCHAR NOT NULL,\n"
++ "LOC VARCHAR,\n"
++ "CONSTRAINT PK PRIMARY KEY (ACCOUNT_ID, BUCKET_ID, OBJECT_ID, OBJECT_VERSION DESC))");
+
+String sql = "SELECT  OBJ.ACCOUNT_ID from  test2961 as OBJ where "
++ "(OBJ.ACCOUNT_ID, OBJ.BUCKET_ID, OBJ.OBJECT_ID, OBJ.OBJECT_VERSION) IN "
++ "((?,?,?,?),(?,?,?,?))";
+
+PhoenixPreparedStatement statement = conn.prepareStatement(sql)
+.unwrap(PhoenixPreparedStatement.class);
+statement.setString(1, new String(new char[] { (char) 3 }));
+statement.setString(2, new String(new char[] { (char) 55 }));
+statement.setString(3, new String(new char[] { (char) 39 }));
+statement.setString(4, new String(new char[] { (char) 0 }));
+statement.setString(5, new String(new char[] { (char) 83 }));
+statement.setString(6, new String(new char[] { (char) 15 }));
+statement.setString(7, new String(new char[] { (char) 55 }));
+statement.setString(8, new String(new char[] { (char) 147 }));
+statement.optimizeQuery(sql);
+} finally {
+conn.close();
+}
+}
+
 private StringBuilder generateQueryToTest(int numItemsInClause, String fullViewName) {
 StringBuilder querySb =
 new StringBuilder("SELECT OBJECT_ID,OBJECT_DATA2,OBJECT_DATA FROM " + fullViewName);
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index b845a09..0964d9d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -347,7 +347,9 @@ public class WhereOptimizer {
 byte[] lowerRange = KeyRange.UNBOUND;
 boolean lowerInclusive = false;
 // Lower range of trailing part of RVC must be true, so we can form a new range to intersect going forward
-if (!range.lowerUnbound() && Bytes.startsWith(range.getLowerRange(), clippedResult.getLowerRange())) {
+if (!range.lowerUnbound()
++&& range.getLowerRange().length > clippedResult.getLowerRange().length
++&& Bytes.startsWith(range.getLowerRange(), clippedResult.getLowerRange())) {
 lowerRange = range.getLowerRange();
 int offset = clippedResult.getLowerRange().length + separatorLength;
 ptr.set(lowerRange, offset, lowerRange.length - offset);
@@ -356,7 +358,9 @@ public class WhereOptimizer {
 }
 byte[] upperRange = KeyRange.UNBOUND;
 boolean upperInclusive = false;
-if (!range.upperUnbound() && Bytes.startsWith(range.getUpperRange(), clippedResult.getUpperRange())) {

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5290 HashJoinMoreIT is flapping.

2019-07-16 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new f3318a9  PHOENIX-5290 HashJoinMoreIT is flapping.
f3318a9 is described below

commit f3318a97af8c71fc50b0332f46bd297b465983eb
Author: Lars Hofhansl 
AuthorDate: Tue Jul 16 15:39:57 2019 -0700

PHOENIX-5290 HashJoinMoreIT is flapping.
---
 .../phoenix/end2end/RowValueConstructorIT.java | 32 ++
 .../org/apache/phoenix/compile/WhereOptimizer.java |  8 --
 .../org/apache/phoenix/schema/types/PVarchar.java  |  2 +-
 3 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
index fb04261..390d831 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/RowValueConstructorIT.java
@@ -51,6 +51,7 @@ import java.sql.Timestamp;
 import java.util.List;
 import java.util.Properties;
 
+import org.apache.phoenix.jdbc.PhoenixPreparedStatement;
 import org.apache.phoenix.util.DateUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -1691,6 +1692,37 @@ public class RowValueConstructorIT extends ParallelStatsDisabledIT {
 }
 }
 
+@Test
+public void testTrailingSeparator() throws Exception {
+Connection conn = null;
+try {
+conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE test2961 (\n"
++ "ACCOUNT_ID VARCHAR NOT NULL,\n" + "BUCKET_ID VARCHAR NOT NULL,\n"
++ "OBJECT_ID VARCHAR NOT NULL,\n" + "OBJECT_VERSION VARCHAR NOT NULL,\n"
++ "LOC VARCHAR,\n"
++ "CONSTRAINT PK PRIMARY KEY (ACCOUNT_ID, BUCKET_ID, OBJECT_ID, OBJECT_VERSION DESC))");
+
+String sql = "SELECT  OBJ.ACCOUNT_ID from  test2961 as OBJ where "
++ "(OBJ.ACCOUNT_ID, OBJ.BUCKET_ID, OBJ.OBJECT_ID, OBJ.OBJECT_VERSION) IN "
++ "((?,?,?,?),(?,?,?,?))";
+
+PhoenixPreparedStatement statement = conn.prepareStatement(sql)
+.unwrap(PhoenixPreparedStatement.class);
+statement.setString(1, new String(new char[] { (char) 3 }));
+statement.setString(2, new String(new char[] { (char) 55 }));
+statement.setString(3, new String(new char[] { (char) 39 }));
+statement.setString(4, new String(new char[] { (char) 0 }));
+statement.setString(5, new String(new char[] { (char) 83 }));
+statement.setString(6, new String(new char[] { (char) 15 }));
+statement.setString(7, new String(new char[] { (char) 55 }));
+statement.setString(8, new String(new char[] { (char) 147 }));
+statement.optimizeQuery(sql);
+} finally {
+conn.close();
+}
+}
+
 private StringBuilder generateQueryToTest(int numItemsInClause, String fullViewName) {
 StringBuilder querySb =
 new StringBuilder("SELECT OBJECT_ID,OBJECT_DATA2,OBJECT_DATA FROM " + fullViewName);
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index b845a09..0964d9d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -347,7 +347,9 @@ public class WhereOptimizer {
 byte[] lowerRange = KeyRange.UNBOUND;
 boolean lowerInclusive = false;
 // Lower range of trailing part of RVC must be true, so we can form a new range to intersect going forward
-if (!range.lowerUnbound() && Bytes.startsWith(range.getLowerRange(), clippedResult.getLowerRange())) {
+if (!range.lowerUnbound()
++&& range.getLowerRange().length > clippedResult.getLowerRange().length
++&& Bytes.startsWith(range.getLowerRange(), clippedResult.getLowerRange())) {
 lowerRange = range.getLowerRange();
 int offset = clippedResult.getLowerRange().length + separatorLength;
 ptr.set(lowerRange, offset, lowerRange.length - offset);
@@ -356,7 +358,9 @@ public class WhereOptimizer {
 }
 byte[] upperRange = KeyRange.UNBOUND;
 boolean upperInclusive = false;
-if (!range.upperUnbound() && Bytes.startsWith(range.getUpperRange(), clippedResult.getUpperRange())) {

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.

2019-06-29 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 35ec58f  PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
35ec58f is described below

commit 35ec58faeece377b9eb85d074ec3dca3ce388612
Author: Lars Hofhansl 
AuthorDate: Sat Jun 29 22:38:46 2019 -0700

PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
---
 phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 799419f..431cc9b 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -623,7 +623,7 @@ public abstract class BaseTest {
 conf.setInt("hbase.assignment.zkevent.workers", 5);
 conf.setInt("hbase.assignment.threads.max", 5);
 conf.setInt("hbase.catalogjanitor.interval", 5000);
-conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 1000);
+conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
 conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
 conf.setInt(NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY, 1);
 conf.setInt(GLOBAL_INDEX_ROW_REPAIR_COUNT_ATTRIB, 5);
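The one-line change above follows a common harness pattern: raise a background-task interval high enough (10s here, per the commit message) that the task thread never fires in the middle of a short-lived test. A toy stand-in for the Hadoop Configuration calls used in BaseTest (the map-backed class below is hypothetical; the key string is the one named in the subject line, normally referenced via QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB):

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for org.apache.hadoop.conf.Configuration's setInt/getInt,
// illustrating the override BaseTest applies before the minicluster starts.
public class ConfSketch {
    private final Map<String, String> values = new HashMap<>();

    public void setInt(String key, int value) {
        values.put(key, Integer.toString(value));
    }

    public int getInt(String key, int defaultValue) {
        String v = values.get(key);
        return v == null ? defaultValue : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        ConfSketch conf = new ConfSketch();
        // Bump the task-handling interval to 10s so the background task
        // handler stays quiet while individual tests run.
        conf.setInt("phoenix.task.handling.interval.ms", 10000);
        System.out.println(conf.getInt("phoenix.task.handling.interval.ms", 1000));
    }
}
```

In the real harness this must happen before the shared minicluster is brought up, since the interval is read once when the task handler thread is scheduled.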



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.

2019-06-29 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new c3996f1  PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
c3996f1 is described below

commit c3996f1eed7570f38475eb3969dba499faabbe9c
Author: Lars Hofhansl 
AuthorDate: Sat Jun 29 22:39:14 2019 -0700

PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
---
 phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index dd89f07..e852c16 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -623,7 +623,7 @@ public abstract class BaseTest {
 conf.setInt("hbase.assignment.zkevent.workers", 5);
 conf.setInt("hbase.assignment.threads.max", 5);
 conf.setInt("hbase.catalogjanitor.interval", 5000);
-conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 1000);
+conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
 conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
 conf.setInt(NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY, 1);
 conf.setInt(GLOBAL_INDEX_ROW_REPAIR_COUNT_ATTRIB, 5);



[phoenix] branch master updated: PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.

2019-06-29 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 31daa09  PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
31daa09 is described below

commit 31daa0917c534cbcfb160ec212d5fa839f7df584
Author: Lars Hofhansl 
AuthorDate: Sat Jun 29 22:37:55 2019 -0700

PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
---
 phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 160660e..383ca05 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -622,7 +622,7 @@ public abstract class BaseTest {
 conf.setInt("hbase.assignment.zkevent.workers", 5);
 conf.setInt("hbase.assignment.threads.max", 5);
 conf.setInt("hbase.catalogjanitor.interval", 5000);
-conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 1000);
+conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
 conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
 conf.setInt(NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY, 1);
 conf.setInt(GLOBAL_INDEX_ROW_REPAIR_COUNT_ATTRIB, 5);



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.

2019-06-29 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 3a86c2d  PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
3a86c2d is described below

commit 3a86c2d57d4a47d5158b8cf437574b6cadf64670
Author: Lars Hofhansl 
AuthorDate: Sat Jun 29 22:37:05 2019 -0700

PHOENIX-5381 Increase phoenix.task.handling.interval.ms to 10s for tests.
---
 phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index dd89f07..e852c16 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -623,7 +623,7 @@ public abstract class BaseTest {
 conf.setInt("hbase.assignment.zkevent.workers", 5);
 conf.setInt("hbase.assignment.threads.max", 5);
 conf.setInt("hbase.catalogjanitor.interval", 5000);
-conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 1000);
+conf.setInt(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, 10000);
 conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
 conf.setInt(NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY, 1);
 conf.setInt(GLOBAL_INDEX_ROW_REPAIR_COUNT_ATTRIB, 5);



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.

2019-06-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new c7ad4d8  PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.
c7ad4d8 is described below

commit c7ad4d822d14a49d6910a8f70fb56555a8e2ed7f
Author: Lars Hofhansl 
AuthorDate: Fri Jun 28 17:01:47 2019 -0700

PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.
---
 .../SystemCatalogCreationOnConnectionIT.java   | 95 --
 1 file changed, 32 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 7a5f80c..9f12d39 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -188,23 +188,6 @@ public class SystemCatalogCreationOnConnectionIT {
 /* Testing SYSTEM.CATALOG/SYSTEM:CATALOG creation/upgrade behavior for subsequent connections */
 
 
-// Conditions: server-side namespace mapping is enabled, the first connection to the server will create unmapped
-// SYSTEM tables i.e. SYSTEM\..*, the second connection has client-side namespace mapping enabled and
-// system table to system namespace mapping enabled
-// Expected: We will migrate all SYSTEM\..* tables to the SYSTEM namespace
-@Test
-public void testMigrateToSystemNamespace() throws Exception {
-SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
-  firstConnectionNSMappingServerEnabledClientEnabledMappingDisabled();
-driver.resetCQS();
-// Setting this to true to effect migration of SYSTEM tables to the SYSTEM namespace
-Properties clientProps = getClientProperties(true, true);
-driver.getConnectionQueryServices(getJdbcUrl(), clientProps);
-hbaseTables = getHBaseTables();
-assertEquals(PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES, hbaseTables);
-assertEquals(1, countUpgradeAttempts);
-}
-
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create all namespace
 // mapped SYSTEM tables i.e. SYSTEM:.*, the SYSTEM:CATALOG timestamp at creation is purposefully set to be <
 // MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP. The subsequent connection has client-side namespace mapping enabled
@@ -260,22 +243,6 @@ public class SystemCatalogCreationOnConnectionIT {
 }
 }
 
-// Conditions: server-side namespace mapping is enabled, the first connection to the server will create only SYSTEM.CATALOG,
-// the second connection has client-side namespace mapping enabled
-// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create all other SYSTEM:.* tables
-@Test
-public void testMigrateSysCatCreateOthers() throws Exception {
-SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
-  firstConnectionNSMappingServerEnabledClientDisabled();
-driver.resetCQS();
-Properties clientProps = getClientProperties(true, true);
-driver.getConnectionQueryServices(getJdbcUrl(), clientProps);
-hbaseTables = getHBaseTables();
-assertEquals(PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES, hbaseTables);
-// SYSTEM.CATALOG migration to the SYSTEM namespace is counted as an upgrade
-assertEquals(1, countUpgradeAttempts);
-}
-
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create unmapped SYSTEM
 // tables SYSTEM\..* whose timestamp at creation is purposefully set to be < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP.
 // The second connection has client-side namespace mapping enabled and system table to system namespace mapping enabled
@@ -318,8 +285,11 @@ public class SystemCatalogCreationOnConnectionIT {
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create only SYSTEM.CATALOG,
 // the second connection has client-side namespace mapping disabled
 // Expected: Throw Inconsistent namespace mapping exception when you check client-server compatibility
+//
+// A third connection has client-side namespace mapping enabled
+// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create all other SYSTEM:.* tables
 @Test
-public void testUnmappedSysCatExistsInconsistentNSMappingFails() throws Exception {
+public void testUnmappedSysCat() throws Exception {
 SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver 

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.

2019-06-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new f945f42  PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.
f945f42 is described below

commit f945f4263b0100eab30f0b2e8c719597579ba507
Author: Lars Hofhansl 
AuthorDate: Fri Jun 28 17:01:04 2019 -0700

PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.
---
 .../SystemCatalogCreationOnConnectionIT.java   | 95 --
 1 file changed, 32 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 7a5f80c..9f12d39 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -188,23 +188,6 @@ public class SystemCatalogCreationOnConnectionIT {
 /* Testing SYSTEM.CATALOG/SYSTEM:CATALOG creation/upgrade behavior for subsequent connections */
 
 
-// Conditions: server-side namespace mapping is enabled, the first connection to the server will create unmapped
-// SYSTEM tables i.e. SYSTEM\..*, the second connection has client-side namespace mapping enabled and
-// system table to system namespace mapping enabled
-// Expected: We will migrate all SYSTEM\..* tables to the SYSTEM namespace
-@Test
-public void testMigrateToSystemNamespace() throws Exception {
-SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
-  firstConnectionNSMappingServerEnabledClientEnabledMappingDisabled();
-driver.resetCQS();
-// Setting this to true to effect migration of SYSTEM tables to the SYSTEM namespace
-Properties clientProps = getClientProperties(true, true);
-driver.getConnectionQueryServices(getJdbcUrl(), clientProps);
-hbaseTables = getHBaseTables();
-assertEquals(PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES, hbaseTables);
-assertEquals(1, countUpgradeAttempts);
-}
-
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create all namespace
 // mapped SYSTEM tables i.e. SYSTEM:.*, the SYSTEM:CATALOG timestamp at creation is purposefully set to be <
 // MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP. The subsequent connection has client-side namespace mapping enabled
@@ -260,22 +243,6 @@ public class SystemCatalogCreationOnConnectionIT {
 }
 }
 
-// Conditions: server-side namespace mapping is enabled, the first connection to the server will create only SYSTEM.CATALOG,
-// the second connection has client-side namespace mapping enabled
-// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create all other SYSTEM:.* tables
-@Test
-public void testMigrateSysCatCreateOthers() throws Exception {
-SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
-  firstConnectionNSMappingServerEnabledClientDisabled();
-driver.resetCQS();
-Properties clientProps = getClientProperties(true, true);
-driver.getConnectionQueryServices(getJdbcUrl(), clientProps);
-hbaseTables = getHBaseTables();
-assertEquals(PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES, hbaseTables);
-// SYSTEM.CATALOG migration to the SYSTEM namespace is counted as an upgrade
-assertEquals(1, countUpgradeAttempts);
-}
-
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create unmapped SYSTEM
 // tables SYSTEM\..* whose timestamp at creation is purposefully set to be < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP.
 // The second connection has client-side namespace mapping enabled and system table to system namespace mapping enabled
@@ -318,8 +285,11 @@ public class SystemCatalogCreationOnConnectionIT {
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create only SYSTEM.CATALOG,
 // the second connection has client-side namespace mapping disabled
 // Expected: Throw Inconsistent namespace mapping exception when you check client-server compatibility
+//
+// A third connection has client-side namespace mapping enabled
+// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create all other SYSTEM:.* tables
 @Test
-public void testUnmappedSysCatExistsInconsistentNSMappingFails() throws Exception {
+public void testUnmappedSysCat() throws Exception {
 SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver 

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.

2019-06-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 3429860  PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.
3429860 is described below

commit 342986026186e29ca0b67ce526a8feaea3a59f01
Author: Lars Hofhansl 
AuthorDate: Fri Jun 28 17:01:26 2019 -0700

PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.
---
 .../SystemCatalogCreationOnConnectionIT.java   | 95 --
 1 file changed, 32 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 7a5f80c..9f12d39 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -188,23 +188,6 @@ public class SystemCatalogCreationOnConnectionIT {
 /* Testing SYSTEM.CATALOG/SYSTEM:CATALOG creation/upgrade behavior for subsequent connections */
 
 
-// Conditions: server-side namespace mapping is enabled, the first connection to the server will create unmapped
-// SYSTEM tables i.e. SYSTEM\..*, the second connection has client-side namespace mapping enabled and
-// system table to system namespace mapping enabled
-// Expected: We will migrate all SYSTEM\..* tables to the SYSTEM namespace
-@Test
-public void testMigrateToSystemNamespace() throws Exception {
-SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
-  firstConnectionNSMappingServerEnabledClientEnabledMappingDisabled();
-driver.resetCQS();
-// Setting this to true to effect migration of SYSTEM tables to the SYSTEM namespace
-Properties clientProps = getClientProperties(true, true);
-driver.getConnectionQueryServices(getJdbcUrl(), clientProps);
-hbaseTables = getHBaseTables();
-assertEquals(PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES, hbaseTables);
-assertEquals(1, countUpgradeAttempts);
-}
-
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create all namespace
 // mapped SYSTEM tables i.e. SYSTEM:.*, the SYSTEM:CATALOG timestamp at creation is purposefully set to be <
 // MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP. The subsequent connection has client-side namespace mapping enabled
@@ -260,22 +243,6 @@ public class SystemCatalogCreationOnConnectionIT {
 }
 }
 
-// Conditions: server-side namespace mapping is enabled, the first connection to the server will create only SYSTEM.CATALOG,
-// the second connection has client-side namespace mapping enabled
-// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create all other SYSTEM:.* tables
-@Test
-public void testMigrateSysCatCreateOthers() throws Exception {
-SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
-  firstConnectionNSMappingServerEnabledClientDisabled();
-driver.resetCQS();
-Properties clientProps = getClientProperties(true, true);
-driver.getConnectionQueryServices(getJdbcUrl(), clientProps);
-hbaseTables = getHBaseTables();
-assertEquals(PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES, hbaseTables);
-// SYSTEM.CATALOG migration to the SYSTEM namespace is counted as an upgrade
-assertEquals(1, countUpgradeAttempts);
-}
-
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create unmapped SYSTEM
 // tables SYSTEM\..* whose timestamp at creation is purposefully set to be < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP.
 // The second connection has client-side namespace mapping enabled and system table to system namespace mapping enabled
@@ -318,8 +285,11 @@ public class SystemCatalogCreationOnConnectionIT {
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create only SYSTEM.CATALOG,
 // the second connection has client-side namespace mapping disabled
 // Expected: Throw Inconsistent namespace mapping exception when you check client-server compatibility
+//
+// A third connection has client-side namespace mapping enabled
+// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create all other SYSTEM:.* tables
 @Test
-public void testUnmappedSysCatExistsInconsistentNSMappingFails() throws Exception {
+public void testUnmappedSysCat() throws Exception {
 SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver 

[phoenix] branch master updated: PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.

2019-06-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 84014f0  PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.
84014f0 is described below

commit 84014f01205aecf6e7dd25b1d1b62a128b308b46
Author: Lars Hofhansl 
AuthorDate: Fri Jun 28 17:00:30 2019 -0700

PHOENIX-5371 SystemCatalogCreationOnConnectionIT is slow.
---
 .../SystemCatalogCreationOnConnectionIT.java   | 95 --
 1 file changed, 32 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index eadd391..9ffd2d2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -188,23 +188,6 @@ public class SystemCatalogCreationOnConnectionIT {
 /* Testing SYSTEM.CATALOG/SYSTEM:CATALOG creation/upgrade behavior for subsequent connections */
 
 
-// Conditions: server-side namespace mapping is enabled, the first connection to the server will create unmapped
-// SYSTEM tables i.e. SYSTEM\..*, the second connection has client-side namespace mapping enabled and
-// system table to system namespace mapping enabled
-// Expected: We will migrate all SYSTEM\..* tables to the SYSTEM namespace
-@Test
-public void testMigrateToSystemNamespace() throws Exception {
-SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
-  firstConnectionNSMappingServerEnabledClientEnabledMappingDisabled();
-driver.resetCQS();
-// Setting this to true to effect migration of SYSTEM tables to the SYSTEM namespace
-Properties clientProps = getClientProperties(true, true);
-driver.getConnectionQueryServices(getJdbcUrl(), clientProps);
-hbaseTables = getHBaseTables();
-assertEquals(PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES, hbaseTables);
-assertEquals(1, countUpgradeAttempts);
-}
-
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create all namespace
 // mapped SYSTEM tables i.e. SYSTEM:.*, the SYSTEM:CATALOG timestamp at creation is purposefully set to be <
 // MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP. The subsequent connection has client-side namespace mapping enabled
@@ -260,22 +243,6 @@ public class SystemCatalogCreationOnConnectionIT {
 }
 }
 
-// Conditions: server-side namespace mapping is enabled, the first connection to the server will create only SYSTEM.CATALOG,
-// the second connection has client-side namespace mapping enabled
-// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create all other SYSTEM:.* tables
-@Test
-public void testMigrateSysCatCreateOthers() throws Exception {
-SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
-  firstConnectionNSMappingServerEnabledClientDisabled();
-driver.resetCQS();
-Properties clientProps = getClientProperties(true, true);
-driver.getConnectionQueryServices(getJdbcUrl(), clientProps);
-hbaseTables = getHBaseTables();
-assertEquals(PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES, hbaseTables);
-// SYSTEM.CATALOG migration to the SYSTEM namespace is counted as an upgrade
-assertEquals(1, countUpgradeAttempts);
-}
-
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create unmapped SYSTEM
 // tables SYSTEM\..* whose timestamp at creation is purposefully set to be < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP.
 // The second connection has client-side namespace mapping enabled and system table to system namespace mapping enabled
@@ -318,8 +285,11 @@ public class SystemCatalogCreationOnConnectionIT {
 // Conditions: server-side namespace mapping is enabled, the first connection to the server will create only SYSTEM.CATALOG,
 // the second connection has client-side namespace mapping disabled
 // Expected: Throw Inconsistent namespace mapping exception when you check client-server compatibility
+//
+// A third connection has client-side namespace mapping enabled
+// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create all other SYSTEM:.* tables
 @Test
-public void testUnmappedSysCatExistsInconsistentNSMappingFails() throws Exception {
+public void testUnmappedSysCat() throws Exception {
 SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver 

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5345 PartialCommitIT fails in 4.x-HBase-1.3.

2019-06-28 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 160d56b  PHOENIX-5345 PartialCommitIT fails in 4.x-HBase-1.3.
160d56b is described below

commit 160d56b14d435ba6afbbfe179eab4951b868dc29
Author: Lars Hofhansl 
AuthorDate: Fri Jun 28 13:38:52 2019 -0700

PHOENIX-5345 PartialCommitIT fails in 4.x-HBase-1.3.
---
 .../main/java/org/apache/phoenix/iterate/ScanningResultIterator.java| 2 ++
 1 file changed, 2 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
index 1a3d073..8f98f56 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
@@ -98,6 +98,8 @@ public class ScanningResultIterator implements ResultIterator {
 
 if (scanMetricsEnabled && !scanMetricsUpdated) {
 ScanMetrics scanMetrics = scan.getScanMetrics();
+if (scanMetrics == null)
+return; // See PHOENIX-5345
 Map<String, Long> scanMetricsMap = scanMetrics.getMetricsMap();
 scanMetricsHolder.setScanMetricMap(scanMetricsMap);
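The two-line guard added above can be sketched in isolation. `ScanMetricsGuardSketch` below is a hypothetical helper (not Phoenix or HBase code) showing the same bail-out-on-null pattern, assuming the metrics come back as a `Map<String, Long>` as they do from HBase's ScanMetrics:

```java
import java.util.Collections;
import java.util.Map;

// Hypothetical stand-in for the pattern in PHOENIX-5345: on HBase 1.3 the
// scan may report no metrics at all, so treat a null map as "nothing to
// record" instead of dereferencing it and throwing an NPE.
final class ScanMetricsGuardSketch {
    static Map<String, Long> safeScanMetricsMap(Map<String, Long> raw) {
        if (raw == null) {
            return Collections.emptyMap(); // mirrors the early return in the diff
        }
        return raw;
    }

    public static void main(String[] args) {
        System.out.println(safeScanMetricsMap(null).isEmpty());
        System.out.println(safeScanMetricsMap(Map.of("ROWS_SCANNED", 42L)).size());
    }
}
```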
 



[phoenix] branch master updated: PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-06-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new ae945a8  PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
ae945a8 is described below

commit ae945a84dd21da8d3547419436eaba67f137dd91
Author: Bin Shi <39923490+binshi-secularb...@users.noreply.github.com>
AuthorDate: Mon Apr 8 16:40:30 2019 -0700

PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
---
 .../java/org/apache/phoenix/query/KeyRange.java|   4 +-
 .../org/apache/phoenix/query/KeyRangeMoreTest.java | 136 -
 2 files changed, 77 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
index 2b66061..4229dfa 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
@@ -548,7 +548,7 @@ public class KeyRange implements Writable {
 return Lists.transform(keys, POINT);
 }
 
-private static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
+public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
 int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
 if (result != 0) {
 return result;
@@ -557,7 +557,7 @@ public class KeyRange implements Writable {
 if (result != 0) {
 return result;
 }
-return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
+return Boolean.compare(rowKeyRange1.isUpperInclusive(), rowKeyRange2.isUpperInclusive());
 }
 
 public static List<KeyRange> intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> rowKeyRanges2) {
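The one-line argument-order fix above can be demonstrated outside Phoenix. `UpperBoundOrderSketch` below is a hypothetical class (not the Phoenix API) that replays both orders of `Boolean.compare` for two ranges with the same bounded upper key; the intended order is that an exclusive upper bound, which covers strictly fewer rows, sorts before an inclusive one:

```java
// Minimal sketch of the PHOENIX-5176 fix: with equal, bounded upper keys,
// only the inclusive flags decide the order. The corrected call
// Boolean.compare(r1Inclusive, r2Inclusive) makes an exclusive bound
// (false) compare as smaller than an inclusive one (true).
final class UpperBoundOrderSketch {
    static int fixedCompare(boolean r1Inclusive, boolean r2Inclusive) {
        return Boolean.compare(r1Inclusive, r2Inclusive);   // argument order after the patch
    }

    static int buggyCompare(boolean r1Inclusive, boolean r2Inclusive) {
        return Boolean.compare(r2Inclusive, r1Inclusive);   // reversed order before the patch
    }

    public static void main(String[] args) {
        // range1 exclusive, range2 inclusive: range1 should compare as smaller
        System.out.println(fixedCompare(false, true));   // -1
        System.out.println(buggyCompare(false, true));   // 1, the inverted result the bug produced
        System.out.println(fixedCompare(true, true));    // 0
    }
}
```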
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
index 9710bf5..6f0c4c7 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
@@ -23,6 +23,7 @@ import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 
+import com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.schema.types.PInteger;
 import org.junit.Test;
@@ -126,8 +127,7 @@ public class KeyRangeMoreTest extends TestCase {
 assertResult(result, maxStart,minEnd);
 Collections.shuffle(rowKeyRanges1);
 Collections.shuffle(rowKeyRanges2);
-
-};
+}
 }
 
 private void assertResult(List<KeyRange> result, int start, int end) {
@@ -192,72 +192,86 @@ public class KeyRangeMoreTest extends TestCase {
 
 
listIntersectAndAssert(Arrays.asList(KeyRange.EMPTY_RANGE),Arrays.asList(KeyRange.EVERYTHING_RANGE),Arrays.asList(KeyRange.EMPTY_RANGE));
 
-rowKeyRanges1=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-true,
-PInteger.INSTANCE.toBytes(5),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(8),
-true,
-KeyRange.UNBOUND,
-false));
-rowKeyRanges2=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-KeyRange.UNBOUND,
-false,
-PInteger.INSTANCE.toBytes(4),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(7),
-true,
-PInteger.INSTANCE.toBytes(10),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(13),
-true,
-PInteger.INSTANCE.toBytes(14),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(19),
-true,
-KeyRange.UNBOUND,
-false)
-);
-expected=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-true,
-PInteger.INSTANCE.toB

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-06-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new e157568  PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
e157568 is described below

commit e1575682199781235132aedc02824affd936c902
Author: Bin Shi <39923490+binshi-secularb...@users.noreply.github.com>
AuthorDate: Mon Apr 8 16:40:30 2019 -0700

PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
---
 .../java/org/apache/phoenix/query/KeyRange.java|   4 +-
 .../org/apache/phoenix/query/KeyRangeMoreTest.java | 136 -
 2 files changed, 77 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
index 2b66061..4229dfa 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
@@ -548,7 +548,7 @@ public class KeyRange implements Writable {
 return Lists.transform(keys, POINT);
 }
 
-private static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
+public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
 int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
 if (result != 0) {
 return result;
@@ -557,7 +557,7 @@ public class KeyRange implements Writable {
 if (result != 0) {
 return result;
 }
-return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
+return Boolean.compare(rowKeyRange1.isUpperInclusive(), rowKeyRange2.isUpperInclusive());
 }
 
 public static List<KeyRange> intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> rowKeyRanges2) {
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
index 9710bf5..6f0c4c7 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
@@ -23,6 +23,7 @@ import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 
+import com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.schema.types.PInteger;
 import org.junit.Test;
@@ -126,8 +127,7 @@ public class KeyRangeMoreTest extends TestCase {
 assertResult(result, maxStart,minEnd);
 Collections.shuffle(rowKeyRanges1);
 Collections.shuffle(rowKeyRanges2);
-
-};
+}
 }
 
 private void assertResult(List<KeyRange> result, int start, int end) {
@@ -192,72 +192,86 @@ public class KeyRangeMoreTest extends TestCase {
 
 
listIntersectAndAssert(Arrays.asList(KeyRange.EMPTY_RANGE),Arrays.asList(KeyRange.EVERYTHING_RANGE),Arrays.asList(KeyRange.EMPTY_RANGE));
 
-rowKeyRanges1=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-true,
-PInteger.INSTANCE.toBytes(5),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(8),
-true,
-KeyRange.UNBOUND,
-false));
-rowKeyRanges2=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-KeyRange.UNBOUND,
-false,
-PInteger.INSTANCE.toBytes(4),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(7),
-true,
-PInteger.INSTANCE.toBytes(10),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(13),
-true,
-PInteger.INSTANCE.toBytes(14),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(19),
-true,
-KeyRange.UNBOUND,
-false)
-);
-expected=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-  

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-06-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 52f66aa  PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
52f66aa is described below

commit 52f66aa79578a958e912f3f4e6d8f64971653849
Author: Bin Shi <39923490+binshi-secularb...@users.noreply.github.com>
AuthorDate: Mon Apr 8 16:40:30 2019 -0700

PHOENIX-5176 KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
---
 .../java/org/apache/phoenix/query/KeyRange.java|   4 +-
 .../org/apache/phoenix/query/KeyRangeMoreTest.java | 136 -
 2 files changed, 77 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
index 2b66061..4229dfa 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
@@ -548,7 +548,7 @@ public class KeyRange implements Writable {
 return Lists.transform(keys, POINT);
 }
 
-    private static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
+    public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
         int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
         if (result != 0) {
             return result;
@@ -557,7 +557,7 @@ public class KeyRange implements Writable {
         if (result != 0) {
             return result;
         }
-        return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
+        return Boolean.compare(rowKeyRange1.isUpperInclusive(), rowKeyRange2.isUpperInclusive());
     }
 
     public static List<KeyRange> intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> rowKeyRanges2) {
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
index 9710bf5..6f0c4c7 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
@@ -23,6 +23,7 @@ import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 
+import com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.schema.types.PInteger;
 import org.junit.Test;
@@ -126,8 +127,7 @@ public class KeyRangeMoreTest extends TestCase {
 assertResult(result, maxStart,minEnd);
 Collections.shuffle(rowKeyRanges1);
 Collections.shuffle(rowKeyRanges2);
-
-};
+}
 }
 
 private void assertResult(List<KeyRange> result, int start, int end) {
@@ -192,72 +192,86 @@ public class KeyRangeMoreTest extends TestCase {
 
 
listIntersectAndAssert(Arrays.asList(KeyRange.EMPTY_RANGE),Arrays.asList(KeyRange.EVERYTHING_RANGE),Arrays.asList(KeyRange.EMPTY_RANGE));
 
-rowKeyRanges1=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-true,
-PInteger.INSTANCE.toBytes(5),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(8),
-true,
-KeyRange.UNBOUND,
-false));
-rowKeyRanges2=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-KeyRange.UNBOUND,
-false,
-PInteger.INSTANCE.toBytes(4),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(7),
-true,
-PInteger.INSTANCE.toBytes(10),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(13),
-true,
-PInteger.INSTANCE.toBytes(14),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(19),
-true,
-KeyRange.UNBOUND,
-false)
-);
-expected=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-  
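For readers skimming the diff: the one-line fix swaps the argument order of `Boolean.compare` so that, when two ranges share the same upper bound value, the inclusive range compares as the larger one (it covers more keys). A minimal standalone sketch of just that comparison — hypothetical class name, not the Phoenix source, where the real logic lives in `KeyRange#compareUpperRange`:

```java
// Sketch of the corrected comparison from PHOENIX-5176 (hypothetical
// standalone class). At the same upper bound value, an inclusive bound
// covers more keys than an exclusive one, so the inclusive range must
// compare greater.
public class UpperBoundCompareSketch {
    public static int compareSameUpperBound(boolean r1Inclusive, boolean r2Inclusive) {
        // Fixed order: range1's flag first, range2's second.
        // Before the fix the arguments were reversed, inverting the result.
        return Boolean.compare(r1Inclusive, r2Inclusive);
    }

    public static void main(String[] args) {
        // e.g. [2, 5] vs [2, 5): the inclusive range is "larger".
        System.out.println(compareSameUpperBound(true, false));  // 1
        System.out.println(compareSameUpperBound(false, true));  // -1
        System.out.println(compareSameUpperBound(true, true));   // 0
    }
}
```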

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5176 KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-06-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new fd58288  PHOENIX-5176 KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
fd58288 is described below

commit fd58288984203d9abbb82e20d15d9c78b42ba78f
Author: Bin Shi <39923490+binshi-secularb...@users.noreply.github.com>
AuthorDate: Mon Apr 8 16:40:30 2019 -0700

PHOENIX-5176 KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
---
 .../java/org/apache/phoenix/query/KeyRange.java|   4 +-
 .../org/apache/phoenix/query/KeyRangeMoreTest.java | 136 -
 2 files changed, 77 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
index 2b66061..4229dfa 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
@@ -548,7 +548,7 @@ public class KeyRange implements Writable {
 return Lists.transform(keys, POINT);
 }
 
-    private static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
+    public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
         int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
         if (result != 0) {
             return result;
@@ -557,7 +557,7 @@ public class KeyRange implements Writable {
         if (result != 0) {
             return result;
         }
-        return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
+        return Boolean.compare(rowKeyRange1.isUpperInclusive(), rowKeyRange2.isUpperInclusive());
     }
 
     public static List<KeyRange> intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> rowKeyRanges2) {
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
index 9710bf5..6f0c4c7 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
@@ -23,6 +23,7 @@ import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 
+import com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.schema.types.PInteger;
 import org.junit.Test;
@@ -126,8 +127,7 @@ public class KeyRangeMoreTest extends TestCase {
 assertResult(result, maxStart,minEnd);
 Collections.shuffle(rowKeyRanges1);
 Collections.shuffle(rowKeyRanges2);
-
-};
+}
 }
 
 private void assertResult(List<KeyRange> result, int start, int end) {
@@ -192,72 +192,86 @@ public class KeyRangeMoreTest extends TestCase {
 
 
listIntersectAndAssert(Arrays.asList(KeyRange.EMPTY_RANGE),Arrays.asList(KeyRange.EVERYTHING_RANGE),Arrays.asList(KeyRange.EMPTY_RANGE));
 
-rowKeyRanges1=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-true,
-PInteger.INSTANCE.toBytes(5),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(8),
-true,
-KeyRange.UNBOUND,
-false));
-rowKeyRanges2=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-KeyRange.UNBOUND,
-false,
-PInteger.INSTANCE.toBytes(4),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(7),
-true,
-PInteger.INSTANCE.toBytes(10),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(13),
-true,
-PInteger.INSTANCE.toBytes(14),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(19),
-true,
-KeyRange.UNBOUND,
-false)
-);
-expected=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-  
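The KeyRangeMoreTest data visible in the diff above — `[2,5]` and `[8,∞)` intersected with `(-∞,4]`, `[7,10]`, `[13,14]`, `[19,∞)` — exercises the sorted-list intersection that the patch makes public. A simplified two-pointer version over closed `int` intervals, with a large finite endpoint standing in for `UNBOUND` (hypothetical helper class, not the Phoenix implementation, which works on `byte[]` bounds with inclusivity flags):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class IntervalIntersectSketch {
    // Two-pointer intersection of two sorted lists of closed int intervals.
    // Each int[] is {lo, hi}, inclusive on both ends.
    public static List<int[]> intersect(List<int[]> a, List<int[]> b) {
        List<int[]> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            int lo = Math.max(a.get(i)[0], b.get(j)[0]);
            int hi = Math.min(a.get(i)[1], b.get(j)[1]);
            if (lo <= hi) {
                out.add(new int[]{lo, hi}); // overlapping slice, if any
            }
            // Advance the interval that ends first; it cannot overlap
            // anything further in the other list.
            if (a.get(i)[1] < b.get(j)[1]) { i++; } else { j++; }
        }
        return out;
    }

    public static void main(String[] args) {
        // Mirrors the test data in the diff, with 100 standing in for UNBOUND.
        List<int[]> r1 = Arrays.asList(new int[]{2, 5}, new int[]{8, 100});
        List<int[]> r2 = Arrays.asList(new int[]{0, 4}, new int[]{7, 10},
                new int[]{13, 14}, new int[]{19, 100});
        for (int[] r : intersect(r1, r2)) {
            System.out.println(r[0] + ".." + r[1]); // 2..4, 8..10, 13..14, 19..100
        }
    }
}
```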

[phoenix] branch master updated: PHOENIX-4513 Fix the recursive call in ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)

2019-06-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new af69f43  PHOENIX-4513 Fix the recursive call in 
ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)
af69f43 is described below

commit af69f430fdf9e2a5299246d5c217a20446a72f5e
Author: Lars Hofhansl 
AuthorDate: Thu Jun 27 09:37:36 2019 -0700

PHOENIX-4513 Fix the recursive call in 
ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)
---
 .../src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java  | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
index 95ae1e3..5580672 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
@@ -739,10 +739,11 @@ public class PhoenixStatement implements Statement, SQLCloseable {
 return true;
 }
 
-   @Override
-   public Operation getOperation() {
-   return this.getOperation();
-   }
+@Override
+public Operation getOperation() {
+return ExecutableExplainStatement.this.getOperation();
+}
+
 @Override
 public boolean useRoundRobinIterator() throws SQLException {
 return false;
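The bug fixed above is the classic unqualified-`this` trap: inside an anonymous inner class, `this.getOperation()` resolves to the inner override itself, so the call recurses until `StackOverflowError`; qualifying with `Outer.this` delegates to the enclosing instance. A minimal sketch with hypothetical names (the real fix is in `PhoenixStatement`'s `ExecutableExplainStatement`):

```java
// Demonstrates why `return this.getOperation();` inside an anonymous class
// recurses forever, and how the qualified form fixes it. Names here are
// hypothetical; the real code is in PhoenixStatement.
public class QualifiedThisSketch {
    interface Plan {
        String getOperation();
    }

    static class Outer {
        String getOperation() {
            return "QUERY";
        }

        Plan plan() {
            return new Plan() {
                @Override
                public String getOperation() {
                    // Buggy version: `this` is the anonymous Plan, so
                    // `this.getOperation()` calls itself -> StackOverflowError.
                    // Fixed: name the enclosing instance explicitly.
                    return Outer.this.getOperation();
                }
            };
        }
    }

    public static void main(String[] args) {
        System.out.println(new Outer().plan().getOperation()); // QUERY
    }
}
```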



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-4513 Fix the recursive call in ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)

2019-06-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 095b40d  PHOENIX-4513 Fix the recursive call in 
ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)
095b40d is described below

commit 095b40dfdf1c7ec7b39e075dbf44b6b604e58480
Author: Lars Hofhansl 
AuthorDate: Thu Jun 27 09:37:00 2019 -0700

PHOENIX-4513 Fix the recursive call in 
ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)
---
 .../src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java  | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
index f699ea0..a3c1b67 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
@@ -740,10 +740,11 @@ public class PhoenixStatement implements Statement, SQLCloseable {
 return true;
 }
 
-   @Override
-   public Operation getOperation() {
-   return this.getOperation();
-   }
+@Override
+public Operation getOperation() {
+return ExecutableExplainStatement.this.getOperation();
+}
+
 @Override
 public boolean useRoundRobinIterator() throws SQLException {
 return false;



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-4513 Fix the recursive call in ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)

2019-06-27 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 09dd935  PHOENIX-4513 Fix the recursive call in 
ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)
09dd935 is described below

commit 09dd935029cef8be68d4294cce70c5b2b887c616
Author: Lars Hofhansl 
AuthorDate: Thu Jun 27 09:36:23 2019 -0700

PHOENIX-4513 Fix the recursive call in 
ExecutableExplainStatement#getOperation. (Chia-Ping Tsai)
---
 .../src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java  | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
index 7f5897b..7359fc2 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
@@ -739,10 +739,11 @@ public class PhoenixStatement implements Statement, SQLCloseable {
 return true;
 }
 
-   @Override
-   public Operation getOperation() {
-   return this.getOperation();
-   }
+@Override
+public Operation getOperation() {
+return ExecutableExplainStatement.this.getOperation();
+}
+
 @Override
 public boolean useRoundRobinIterator() throws SQLException {
 return false;


