Jenkins build is back to normal : Phoenix-4.x-HBase-1.3 #575

2019-10-24 Thread Apache Jenkins Server




Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-10-24 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[larsh] PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0



Build times for last couple of runs. Latest build time is the rightmost. | Legend: blue = normal, red = test failure, gray = timeout


Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-master/2546/

2019-10-24 Thread Apache Jenkins Server
[...truncated 27 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-master/2546/


Affected test class(es):
Set(['as SYSTEM'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

Apache-Phoenix | Master | Build Successful

2019-10-24 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[larsh] PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0



Build times for last couple of runs. Latest build time is the rightmost. | Legend: blue = normal, red = test failure, gray = timeout


Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-10-24 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[larsh] PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0



Build times for last couple of runs. Latest build time is the rightmost. | Legend: blue = normal, red = test failure, gray = timeout


[phoenix] branch master updated: PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.

2019-10-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new f3f722e  PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
f3f722e is described below

commit f3f722e4f29293885f1854cca9dd4cd37e6ff085
Author: Lars Hofhansl 
AuthorDate: Thu Oct 24 08:47:44 2019 -0700

PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 46 ++
 1 file changed, 46 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 8c80cd3..312602b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1735,6 +1735,45 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
 byte[][] parentPhysicalSchemaTableNames = new byte[3][];
 getParentAndPhysicalNames(tableMetadata, parentSchemaTableNames, parentPhysicalSchemaTableNames);
 if (parentPhysicalSchemaTableNames[2] != null) {
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to
+// a 4.15.0+ server.
+// In that case we need to resolve the parent table on
+// the server.
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentPhysicalSchemaTableNames[1],
+parentPhysicalSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+builder.setReturnCode(MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+if (parentSchemaTableNames[2] != null
+&& Bytes.compareTo(parentSchemaTableNames[2],
+parentPhysicalSchemaTableNames[2]) != 0) {
+// if view is created on view
+byte[] tenantId = parentSchemaTableNames[0] == null
+? ByteUtil.EMPTY_BYTE_ARRAY
+: parentSchemaTableNames[0];
+parentTable = doGetTable(tenantId, parentSchemaTableNames[1],
+parentSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+// it could be a global view
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentSchemaTableNames[1], parentSchemaTableNames[2],
+clientTimeStamp, clientVersion);
+}
+}
+if (parentTable == null) {
+builder.setReturnCode(MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+}
 parentTableKey = SchemaUtil.getTableKey(ByteUtil.EMPTY_BYTE_ARRAY,
 parentPhysicalSchemaTableNames[1], parentPhysicalSchemaTableNames[2]);
 cParentPhysicalName = parentTable.getPhysicalName().getBytes();
@@ -1757,6 +1796,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
  */
 parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
 parentTableKey = SchemaUtil.getTableKey(tenantIdBytes, parentSchemaName, parentTableName);
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to a 4.15.0+ server.
+// In that case we need to resolve the parent table on the server.
+parentTable =
+doGetTable(tenantIdBytes, parentSchemaName, parentTableName, clientTimeStamp, null,
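Stripped of Phoenix plumbing, the fallback chain this patch adds can be modeled in a few lines. The sketch below is illustrative only: `ParentTableResolver`, its string-keyed `catalog` map, and `resolve()` are hypothetical stand-ins for `doGetTable()` and the MetaDataEndpointImpl logic, not Phoenix APIs. The server first resolves the parent by its physical name with an empty tenant id; when a view is created on another view it retries with the tenant-scoped logical name and then a global lookup; a still-missing parent maps to PARENT_TABLE_NOT_FOUND.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Phoenix's server-side catalog lookup; not a real Phoenix API.
class ParentTableResolver {
    enum MutationCode { SUCCESS, PARENT_TABLE_NOT_FOUND }

    // Simulates doGetTable: key is "tenant|schema|table"; "" stands for the empty tenant id.
    final Map<String, String> catalog = new HashMap<>();

    String doGetTable(String tenantId, String schema, String table) {
        return catalog.get(tenantId + "|" + schema + "|" + table);
    }

    // Mirrors the patch's fallback chain: physical name first, then the
    // tenant-scoped logical name, then a global (empty-tenant) lookup.
    MutationCode resolve(String tenantId, String physSchema, String physTable,
                         String logSchema, String logTable) {
        String parent = doGetTable("", physSchema, physTable);
        if (parent == null) {
            return MutationCode.PARENT_TABLE_NOT_FOUND;
        }
        if (logTable != null && !logTable.equals(physTable)) {
            // view created on view: resolve the logical parent
            parent = doGetTable(tenantId == null ? "" : tenantId, logSchema, logTable);
            if (parent == null) {
                // it could be a global view
                parent = doGetTable("", logSchema, logTable);
            }
        }
        return parent == null ? MutationCode.PARENT_TABLE_NOT_FOUND
                              : MutationCode.SUCCESS;
    }
}
```

A 4.14 client simply omits the parent table from the request, so every branch above runs on the server; 4.15 clients resolve the parent themselves and skip this path.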

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.

2019-10-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 2ed532f  PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
2ed532f is described below

commit 2ed532f7d1e6574af246abe62ff92d0ff7e4f8b1
Author: Lars Hofhansl 
AuthorDate: Thu Oct 24 08:47:44 2019 -0700

PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 46 ++
 1 file changed, 46 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 6df5bf8..7558b8d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1730,6 +1730,45 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 byte[][] parentPhysicalSchemaTableNames = new byte[3][];
 getParentAndPhysicalNames(tableMetadata, parentSchemaTableNames, parentPhysicalSchemaTableNames);
 if (parentPhysicalSchemaTableNames[2] != null) {
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to
+// a 4.15.0+ server.
+// In that case we need to resolve the parent table on
+// the server.
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentPhysicalSchemaTableNames[1],
+parentPhysicalSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+builder.setReturnCode(MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+if (parentSchemaTableNames[2] != null
+&& Bytes.compareTo(parentSchemaTableNames[2],
+parentPhysicalSchemaTableNames[2]) != 0) {
+// if view is created on view
+byte[] tenantId = parentSchemaTableNames[0] == null
+? ByteUtil.EMPTY_BYTE_ARRAY
+: parentSchemaTableNames[0];
+parentTable = doGetTable(tenantId, parentSchemaTableNames[1],
+parentSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+// it could be a global view
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentSchemaTableNames[1], parentSchemaTableNames[2],
+clientTimeStamp, clientVersion);
+}
+}
+if (parentTable == null) {
+builder.setReturnCode(MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+}
 parentTableKey = SchemaUtil.getTableKey(ByteUtil.EMPTY_BYTE_ARRAY,
 parentPhysicalSchemaTableNames[1], parentPhysicalSchemaTableNames[2]);
 cParentPhysicalName = parentTable.getPhysicalName().getBytes();
@@ -1752,6 +1791,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
  */
 parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
 parentTableKey = SchemaUtil.getTableKey(tenantIdBytes, parentSchemaName, parentTableName);
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to a 4.15.0+ server.
+// In that case we need to resolve the parent table on the server.
+parentTable =
+doGetTable(tenantIdBytes, parentSchemaName, parentTableName,

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.

2019-10-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new dd662b1  PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
dd662b1 is described below

commit dd662b1b92971ed3a377f49736759f375164e445
Author: Lars Hofhansl 
AuthorDate: Thu Oct 24 08:47:44 2019 -0700

PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 46 ++
 1 file changed, 46 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 6df5bf8..7558b8d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1730,6 +1730,45 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 byte[][] parentPhysicalSchemaTableNames = new byte[3][];
 getParentAndPhysicalNames(tableMetadata, parentSchemaTableNames, parentPhysicalSchemaTableNames);
 if (parentPhysicalSchemaTableNames[2] != null) {
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to
+// a 4.15.0+ server.
+// In that case we need to resolve the parent table on
+// the server.
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentPhysicalSchemaTableNames[1],
+parentPhysicalSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+builder.setReturnCode(MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+if (parentSchemaTableNames[2] != null
+&& Bytes.compareTo(parentSchemaTableNames[2],
+parentPhysicalSchemaTableNames[2]) != 0) {
+// if view is created on view
+byte[] tenantId = parentSchemaTableNames[0] == null
+? ByteUtil.EMPTY_BYTE_ARRAY
+: parentSchemaTableNames[0];
+parentTable = doGetTable(tenantId, parentSchemaTableNames[1],
+parentSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+// it could be a global view
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentSchemaTableNames[1], parentSchemaTableNames[2],
+clientTimeStamp, clientVersion);
+}
+}
+if (parentTable == null) {
+builder.setReturnCode(MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+}
 parentTableKey = SchemaUtil.getTableKey(ByteUtil.EMPTY_BYTE_ARRAY,
 parentPhysicalSchemaTableNames[1], parentPhysicalSchemaTableNames[2]);
 cParentPhysicalName = parentTable.getPhysicalName().getBytes();
@@ -1752,6 +1791,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
  */
 parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
 parentTableKey = SchemaUtil.getTableKey(tenantIdBytes, parentSchemaName, parentTableName);
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to a 4.15.0+ server.
+// In that case we need to resolve the parent table on the server.
+parentTable =
+doGetTable(tenantIdBytes, parentSchemaName, parentTableName,

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.

2019-10-24 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 0b9a039  PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
0b9a039 is described below

commit 0b9a0395554dcf72ece54c131fb628e7c3329902
Author: Lars Hofhansl 
AuthorDate: Thu Oct 24 08:47:44 2019 -0700

PHOENIX-5533 Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException.
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  | 46 ++
 1 file changed, 46 insertions(+)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 6df5bf8..7558b8d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1730,6 +1730,45 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 byte[][] parentPhysicalSchemaTableNames = new byte[3][];
 getParentAndPhysicalNames(tableMetadata, parentSchemaTableNames, parentPhysicalSchemaTableNames);
 if (parentPhysicalSchemaTableNames[2] != null) {
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to
+// a 4.15.0+ server.
+// In that case we need to resolve the parent table on
+// the server.
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentPhysicalSchemaTableNames[1],
+parentPhysicalSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+builder.setReturnCode(MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+if (parentSchemaTableNames[2] != null
+&& Bytes.compareTo(parentSchemaTableNames[2],
+parentPhysicalSchemaTableNames[2]) != 0) {
+// if view is created on view
+byte[] tenantId = parentSchemaTableNames[0] == null
+? ByteUtil.EMPTY_BYTE_ARRAY
+: parentSchemaTableNames[0];
+parentTable = doGetTable(tenantId, parentSchemaTableNames[1],
+parentSchemaTableNames[2], clientTimeStamp, clientVersion);
+if (parentTable == null) {
+// it could be a global view
+parentTable = doGetTable(ByteUtil.EMPTY_BYTE_ARRAY,
+parentSchemaTableNames[1], parentSchemaTableNames[2],
+clientTimeStamp, clientVersion);
+}
+}
+if (parentTable == null) {
+builder.setReturnCode(MetaDataProtos.MutationCode.PARENT_TABLE_NOT_FOUND);
+builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+done.run(builder.build());
+return;
+}
+}
 parentTableKey = SchemaUtil.getTableKey(ByteUtil.EMPTY_BYTE_ARRAY,
 parentPhysicalSchemaTableNames[1], parentPhysicalSchemaTableNames[2]);
 cParentPhysicalName = parentTable.getPhysicalName().getBytes();
@@ -1752,6 +1791,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
  */
 parentTableName = MetaDataUtil.getParentTableName(tableMetadata);
 parentTableKey = SchemaUtil.getTableKey(tenantIdBytes, parentSchemaName, parentTableName);
+if (parentTable == null) {
+// This is needed when we connect with a 4.14 client to a 4.15.0+ server.
+// In that case we need to resolve the parent table on the server.
+parentTable =
+doGetTable(tenantIdBytes, parentSchemaName, parentTableName,

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1159

2019-10-24 Thread Apache Jenkins Server


Changes:


--
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H26 (ubuntu) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins3319739723648276838.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386349
max locked memory   (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98950016 kB
MemFree:14242324 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  1.6M  9.5G   1% /run
/dev/sda3   3.6T  518G  2.9T  15% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  158M  292M  36% /boot
tmpfs   9.5G 0  9.5G   0% /run/user/910
/dev/loop0   90M   90M 0 100% /snap/core/7713
/dev/loop1   58M   58M 0 100% /snap/snapcraft/3440
/dev/loop4   90M   90M 0 100% /snap/core/7917
/dev/loop2   55M   55M 0 100% /snap/lxd/12181
/dev/loop3   55M   55M 0 100% /snap/lxd/12211
apache-maven-2.2.1
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.5
apache-maven-3.3.9
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
apache-maven-3.6.2
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch '0.98' set up to track remote branch '0.98' from 'origin'.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
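The `Received fatal alert: protocol_version` failure above is the characteristic symptom of an older JVM offering only TLS 1.0/1.1 to repo.maven.apache.org, which requires TLS 1.2 or newer. That diagnosis is an inference from the alert text, not stated in the log. A small diagnostic sketch using standard JSSE APIs (nothing Phoenix-specific) lists the protocol versions the default SSL context would offer in a handshake:

```java
import javax.net.ssl.SSLContext;
import java.util.Arrays;
import java.util.List;

public class TlsProbe {
    // Returns the protocol versions the default SSLContext enables for client sockets.
    static List<String> enabledProtocols() throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        return Arrays.asList(ctx.getDefaultSSLParameters().getProtocols());
    }

    public static void main(String[] args) throws Exception {
        // If TLSv1.2 is absent from this list, a handshake with Maven Central
        // fails with "Received fatal alert: protocol_version".
        System.out.println(enabledProtocols());
    }
}
```

On affected JDK 7-era build nodes the usual workaround is running Maven with `-Dhttps.protocols=TLSv1.2` (the standard JSSE system property, not a Maven or Phoenix setting) or upgrading the JDK.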


Jenkins build is back to normal : Phoenix-4.x-HBase-1.5 #172

2019-10-24 Thread Apache Jenkins Server




Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-10-24 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[kadir] PHOENIX-5478 IndexTool mapper task should not timeout



Build times for last couple of runs. Latest build time is the rightmost. | Legend: blue = normal, red = test failure, gray = timeout


Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #574

2019-10-24 Thread Apache Jenkins Server


Changes:

[kadir] PHOENIX-5478 IndexTool mapper task should not timeout


--
[...truncated 226.87 KB...]
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.003 s - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.325 s - in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.248 s - in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.976 s - in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.812 s - in org.apache.phoenix.end2end.CountDistinctCompressionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 194.92 s - in org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 193.226 s - in org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 192.764 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 193.846 s - in org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.855 s - in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 366.84 s - in org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 101.412 s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 165.96 s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.993 s - in org.apache.phoenix.end2end.IndexRebuildTaskIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 204.384 s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 207.799 s - in org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.416 s - in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.978 s - in org.apache.phoenix.end2end.IndexScrutinyToolForTenantIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.153 s - in

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5478 IndexTool mapper task should not timeout

2019-10-24 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 2c908e0  PHOENIX-5478 IndexTool mapper task should not timeout
2c908e0 is described below

commit 2c908e01f549fd3095d2d7c0fbfee87775b23982
Author: Kadir 
AuthorDate: Wed Oct 23 22:38:13 2019 -0700

PHOENIX-5478 IndexTool mapper task should not timeout
---
 .../org/apache/phoenix/end2end/IndexToolIT.java|   2 +-
 .../coprocessor/BaseScannerRegionObserver.java |   2 +
 .../UngroupedAggregateRegionObserver.java  | 220 +++--
 .../apache/phoenix/index/GlobalIndexChecker.java   |   4 -
 .../PhoenixServerBuildIndexInputFormat.java|   2 +
 .../apache/phoenix/mapreduce/index/IndexTool.java  |  12 --
 .../org/apache/phoenix/query/QueryServices.java|   2 +
 .../apache/phoenix/query/QueryServicesOptions.java |   3 +-
 8 files changed, 130 insertions(+), 117 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 9cc2393..6fc01bd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -81,7 +81,6 @@ import com.google.common.collect.Maps;
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
 public class IndexToolIT extends ParallelStatsEnabledIT {
-
 private final boolean localIndex;
 private final boolean transactional;
 private final boolean directApi;
@@ -117,6 +116,7 @@ public class IndexToolIT extends ParallelStatsEnabledIT {
 Map serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(8));
 Map clientProps = Maps.newHashMapWithExpectedSize(2);
 clientProps.put(QueryServices.TRANSACTIONS_ENABLED, Boolean.TRUE.toString());
 clientProps.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.TRUE.toString());
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
index b73615f..cb4d0af 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
@@ -75,6 +75,8 @@ abstract public class BaseScannerRegionObserver extends BaseRegionObserver {
 public static final String GROUP_BY_LIMIT = "_GroupByLimit";
 public static final String LOCAL_INDEX = "_LocalIndex";
 public static final String LOCAL_INDEX_BUILD = "_LocalIndexBuild";
+// The number of index rows to be rebuild in one RPC call
+public static final String INDEX_REBUILD_PAGING = "_IndexRebuildPaging";
 /*
 * Attribute to denote that the index maintainer has been serialized using its proto-buf presentation.
 * Needed for backward compatibility purposes. TODO: get rid of this in next major release.
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 0166206..3cae671 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -21,6 +21,7 @@ import static org.apache.phoenix.query.QueryConstants.AGG_TIMESTAMP;
 import static org.apache.phoenix.query.QueryConstants.SINGLE_COLUMN;
 import static org.apache.phoenix.query.QueryConstants.SINGLE_COLUMN_FAMILY;
 import static org.apache.phoenix.query.QueryConstants.UNGROUPED_AGG_ROW_KEY;
+import static org.apache.phoenix.query.QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS;
 import static org.apache.phoenix.query.QueryServices.MUTATE_BATCH_SIZE_ATTRIB;
 import static org.apache.phoenix.query.QueryServices.MUTATE_BATCH_SIZE_BYTES_ATTRIB;
 import static org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker.COMPACTION_UPDATE_STATS_ROW_COUNT;
@@ -1034,116 +1035,137 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 throw new RuntimeException(e);
 }
 }
-
-private RegionScanner rebuildIndices(final RegionScanner innerScanner, final Region region, final Scan scan,
-Configuration config) throws IOException {
-byte[] indexMetaData = scan.getAttribute(PhoenixIndexCodec.INDEX_PROTO_MD);
-boolean useProto =
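The PHOENIX-5478 change bounds how many index rows a single rebuild RPC processes (the `INDEX_REBUILD_PAGE_SIZE_IN_ROWS` setting and `_IndexRebuildPaging` scan attribute above; the test setup pins the page size to 8). The mechanics can be sketched generically: walk the source rows one fixed-size page at a time so that no single server call runs long enough to trip the RPC timeout. `PagedRebuilder` below is an illustrative stand-in, not the actual coprocessor code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: models rebuilding an index in fixed-size pages so that
// each "RPC" touches at most pageSize rows, as in PHOENIX-5478.
class PagedRebuilder {
    final int pageSize;
    int rpcCalls = 0; // how many page-sized calls were needed

    PagedRebuilder(int pageSize) {
        this.pageSize = pageSize;
    }

    // Produces one index row per data row, one bounded page at a time.
    List<String> rebuild(List<String> dataRows) {
        List<String> indexRows = new ArrayList<>();
        for (int start = 0; start < dataRows.size(); start += pageSize) {
            int end = Math.min(start + pageSize, dataRows.size());
            rpcCalls++; // one bounded server call per page
            for (String row : dataRows.subList(start, end)) {
                indexRows.add("idx:" + row);
            }
        }
        return indexRows;
    }
}
```

With 20 rows and a page size of 8, the rebuild completes in three short calls instead of one long-running scan, which is why the mapper task no longer needs an oversized timeout.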

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5478 IndexTool mapper task should not timeout

2019-10-24 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 12b70b6  PHOENIX-5478 IndexTool mapper task should not timeout
12b70b6 is described below

commit 12b70b6f8f7611d9047cd73a3eea8e087b203b5c
Author: Kadir 
AuthorDate: Wed Oct 23 22:38:13 2019 -0700

PHOENIX-5478 IndexTool mapper task should not timeout
---
 .../org/apache/phoenix/end2end/IndexToolIT.java|   2 +-
 .../coprocessor/BaseScannerRegionObserver.java |   2 +
 .../UngroupedAggregateRegionObserver.java  | 220 +++--
 .../apache/phoenix/index/GlobalIndexChecker.java   |   4 -
 .../PhoenixServerBuildIndexInputFormat.java|   2 +
 .../apache/phoenix/mapreduce/index/IndexTool.java  |  12 --
 .../org/apache/phoenix/query/QueryServices.java|   2 +
 .../apache/phoenix/query/QueryServicesOptions.java |   3 +-
 8 files changed, 130 insertions(+), 117 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 9cc2393..6fc01bd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -81,7 +81,6 @@ import com.google.common.collect.Maps;
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
 public class IndexToolIT extends ParallelStatsEnabledIT {
-
 private final boolean localIndex;
 private final boolean transactional;
 private final boolean directApi;
@@ -117,6 +116,7 @@ public class IndexToolIT extends ParallelStatsEnabledIT {
 Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(8));
 Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(2);
 clientProps.put(QueryServices.TRANSACTIONS_ENABLED, Boolean.TRUE.toString());
 clientProps.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.TRUE.toString());
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
index b73615f..cb4d0af 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
@@ -75,6 +75,8 @@ abstract public class BaseScannerRegionObserver extends BaseRegionObserver {
 public static final String GROUP_BY_LIMIT = "_GroupByLimit";
 public static final String LOCAL_INDEX = "_LocalIndex";
 public static final String LOCAL_INDEX_BUILD = "_LocalIndexBuild";
+// The number of index rows to be rebuilt in one RPC call
+public static final String INDEX_REBUILD_PAGING = "_IndexRebuildPaging";
 /* 
 * Attribute to denote that the index maintainer has been serialized using its proto-buf presentation.
 * Needed for backward compatibility purposes. TODO: get rid of this in next major release.
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 0166206..3cae671 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -21,6 +21,7 @@ import static org.apache.phoenix.query.QueryConstants.AGG_TIMESTAMP;
 import static org.apache.phoenix.query.QueryConstants.SINGLE_COLUMN;
 import static org.apache.phoenix.query.QueryConstants.SINGLE_COLUMN_FAMILY;
 import static org.apache.phoenix.query.QueryConstants.UNGROUPED_AGG_ROW_KEY;
+import static org.apache.phoenix.query.QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS;
 import static org.apache.phoenix.query.QueryServices.MUTATE_BATCH_SIZE_ATTRIB;
 import static org.apache.phoenix.query.QueryServices.MUTATE_BATCH_SIZE_BYTES_ATTRIB;
 import static org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker.COMPACTION_UPDATE_STATS_ROW_COUNT;
@@ -1034,116 +1035,137 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 throw new RuntimeException(e);
 }
 }
-
-private RegionScanner rebuildIndices(final RegionScanner innerScanner, final Region region, final Scan scan,
-Configuration config) throws IOException {
-byte[] indexMetaData = scan.getAttribute(PhoenixIndexCodec.INDEX_PROTO_MD);
-boolean useProto = 

[phoenix] branch 4.x-HBase-1.5 updated (3ba5464 -> cee2c7a)

2019-10-24 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a change to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 3ba5464  PHOENIX-5535 Index rebuilds via UngroupedAggregateRegionObserver should replay delete markers
 new 883e3bd  Revert "PHOENIX-5535 Index rebuilds via UngroupedAggregateRegionObserver should replay delete markers"
 new cee2c7a  PHOENIX-5478 IndexTool mapper task should not timeout

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/phoenix/end2end/IndexToolIT.java|  72 +-
 .../apache/phoenix/compile/PostDDLCompiler.java| 253 +
 .../phoenix/compile/ServerBuildIndexCompiler.java  | 109 -
 .../coprocessor/BaseScannerRegionObserver.java |   2 +
 .../UngroupedAggregateRegionObserver.java  | 220 ++
 .../apache/phoenix/index/GlobalIndexChecker.java   |  12 +-
 .../PhoenixServerBuildIndexInputFormat.java|  12 +-
 .../apache/phoenix/mapreduce/index/IndexTool.java  |  12 -
 .../org/apache/phoenix/query/QueryServices.java|   2 +
 .../apache/phoenix/query/QueryServicesOptions.java |   3 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |   9 +-
 11 files changed, 312 insertions(+), 394 deletions(-)
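For context, a minimal sketch of the paging idea behind PHOENIX-5478 as it appears in this push: QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS caps the number of index rows rebuilt per RPC call, so one IndexTool mapper call can no longer run long enough to time out. This is an illustration only, not Phoenix's actual implementation; the class and method names below are hypothetical.

```java
// Hypothetical illustration of INDEX_REBUILD_PAGE_SIZE_IN_ROWS: instead of
// rebuilding a whole region's index rows in one long RPC, the rebuild is
// split into pages of at most pageSizeInRows rows per call.
public class RebuildPaging {

    // Number of RPC round trips needed to rebuild totalRows rows when at
    // most pageSizeInRows rows are processed per call (ceiling division).
    static long rpcCalls(long totalRows, long pageSizeInRows) {
        return (totalRows + pageSizeInRows - 1) / pageSizeInRows;
    }

    public static void main(String[] args) {
        // The ITs in this push set INDEX_REBUILD_PAGE_SIZE_IN_ROWS to 8,
        // so 220 rows would be rebuilt in 28 short calls rather than one long one.
        System.out.println(rpcCalls(220, 8)); // prints 28
        System.out.println(rpcCalls(8, 8));   // prints 1
    }
}
```

Each page is a separate, short server call, so the per-RPC timeout applies to one page rather than the entire rebuild.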



[phoenix] 01/02: Revert "PHOENIX-5535 Index rebuilds via UngroupedAggregateRegionObserver should replay delete markers"

2019-10-24 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit 883e3bd4ffb7b040fc2547d2165bec50477c2acd
Author: Kadir 
AuthorDate: Wed Oct 23 08:55:28 2019 -0700

Revert "PHOENIX-5535 Index rebuilds via UngroupedAggregateRegionObserver should replay delete markers"

This reverts commit 3ba54648c0c2afee2028f3ed05c3d34ec030cb5d.
---
 .../org/apache/phoenix/end2end/IndexToolIT.java|  72 +-
 .../apache/phoenix/compile/PostDDLCompiler.java| 253 +
 .../phoenix/compile/ServerBuildIndexCompiler.java  | 109 -
 .../apache/phoenix/index/GlobalIndexChecker.java   |   8 +-
 .../PhoenixServerBuildIndexInputFormat.java|  10 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |   9 +-
 6 files changed, 183 insertions(+), 278 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 87f6b20..2f12ae9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -29,7 +29,6 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
-import java.sql.Types;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -40,7 +39,6 @@ import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
@@ -51,7 +49,6 @@ import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.mapreduce.Job;
-import org.apache.phoenix.end2end.index.PartialIndexRebuilderIT;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
 import org.apache.phoenix.query.ConnectionQueryServices;
@@ -80,12 +77,10 @@ import org.junit.runners.Parameterized.Parameters;
 
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 @RunWith(Parameterized.class)
 public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
-private static final Logger LOGGER = LoggerFactory.getLogger(PartialIndexRebuilderIT.class);
+
 private final boolean localIndex;
 private final boolean mutable;
 private final boolean transactional;
@@ -259,71 +254,6 @@ public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 }
 }
 
-private void setEveryNthRowWithNull(int nrows, int nthRowNull, PreparedStatement stmt) throws Exception {
-for (int i = 0; i < nrows; i++) {
-stmt.setInt(1, i);
-stmt.setInt(2, i * 10);
-if (i % nthRowNull != 0) {
-stmt.setInt(3, 9000 + i * nthRowNull);
-} else {
-stmt.setNull(3, Types.INTEGER);
-}
-stmt.execute();
-}
-}
-
-@Test
-public void testWithSetNull() throws Exception {
-// This test is for building non-transactional mutable global indexes with direct api
-if (localIndex || transactional || !mutable) {
-return;
-}
-// This tests the cases where a column having a null value is overwritten with a not null value and vice versa;
-// and after that the index table is still rebuilt correctly
-final int NROWS = 2 * 3 * 5 * 7;
-String schemaName = generateUniqueName();
-String dataTableName = generateUniqueName();
-String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
-String indexTableName = generateUniqueName();
-String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
-String stmString1 =
-"CREATE TABLE " + dataTableFullName
-+ " (ID INTEGER NOT NULL PRIMARY KEY, VAL INTEGER, ZIP INTEGER) "
-+ tableDDLOptions;
-conn.createStatement().execute(stmString1);
-String upsertStmt = "UPSERT INTO " + dataTableFullName + " VALUES(?,?,?)";
-PreparedStatement stmt = conn.prepareStatement(upsertStmt);
-setEveryNthRowWithNull(NROWS, 2, stmt);
-conn.commit();
-setEveryNthRowWithNull(NROWS, 3, stmt);
-conn.commit();
-String stmtString2 =
-

[phoenix] 02/02: PHOENIX-5478 IndexTool mapper task should not timeout

2019-10-24 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit cee2c7a66f494c3e761ccdbee6a9c1353925a961
Author: Kadir 
AuthorDate: Wed Oct 23 22:38:13 2019 -0700

PHOENIX-5478 IndexTool mapper task should not timeout
---
 .../org/apache/phoenix/end2end/IndexToolIT.java|   2 +-
 .../coprocessor/BaseScannerRegionObserver.java |   2 +
 .../UngroupedAggregateRegionObserver.java  | 220 +++--
 .../apache/phoenix/index/GlobalIndexChecker.java   |   4 -
 .../PhoenixServerBuildIndexInputFormat.java|   2 +
 .../apache/phoenix/mapreduce/index/IndexTool.java  |  12 --
 .../org/apache/phoenix/query/QueryServices.java|   2 +
 .../apache/phoenix/query/QueryServicesOptions.java |   3 +-
 8 files changed, 130 insertions(+), 117 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 2f12ae9..8af5295 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -80,7 +80,6 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
-
 private final boolean localIndex;
 private final boolean mutable;
 private final boolean transactional;
@@ -118,6 +117,7 @@ public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 serverProps.put(QueryServices.MAX_SERVER_METADATA_CACHE_TIME_TO_LIVE_MS_ATTRIB, Long.toString(5));
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(8));
 Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(2);
 clientProps.put(QueryServices.USE_STATS_FOR_PARALLELIZATION, Boolean.toString(true));
 clientProps.put(QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB, Long.toString(5));
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
index b73615f..cb4d0af 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
@@ -75,6 +75,8 @@ abstract public class BaseScannerRegionObserver extends BaseRegionObserver {
 public static final String GROUP_BY_LIMIT = "_GroupByLimit";
 public static final String LOCAL_INDEX = "_LocalIndex";
 public static final String LOCAL_INDEX_BUILD = "_LocalIndexBuild";
+// The number of index rows to be rebuilt in one RPC call
+public static final String INDEX_REBUILD_PAGING = "_IndexRebuildPaging";
 /* 
 * Attribute to denote that the index maintainer has been serialized using its proto-buf presentation.
 * Needed for backward compatibility purposes. TODO: get rid of this in next major release.
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 3a03f94..0a16a68 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -21,6 +21,7 @@ import static org.apache.phoenix.query.QueryConstants.AGG_TIMESTAMP;
 import static org.apache.phoenix.query.QueryConstants.SINGLE_COLUMN;
 import static org.apache.phoenix.query.QueryConstants.SINGLE_COLUMN_FAMILY;
 import static org.apache.phoenix.query.QueryConstants.UNGROUPED_AGG_ROW_KEY;
+import static org.apache.phoenix.query.QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS;
 import static org.apache.phoenix.query.QueryServices.MUTATE_BATCH_SIZE_ATTRIB;
 import static org.apache.phoenix.query.QueryServices.MUTATE_BATCH_SIZE_BYTES_ATTRIB;
 import static org.apache.phoenix.schema.PTableImpl.getColumnsToClone;
@@ -1056,116 +1057,137 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 throw new RuntimeException(e);
 }
 }
-
-private RegionScanner rebuildIndices(final RegionScanner innerScanner, final Region region, final Scan scan,
-Configuration config) throws IOException {
-byte[] indexMetaData = scan.getAttribute(PhoenixIndexCodec.INDEX_PROTO_MD);
-boolean useProto = true;
-// for backward compatibility fall back to look up by the old attribute
-if (indexMetaData == null) {
-useProto = false;
-indexMetaData = 
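The truncated rebuildIndices() hunk above shows a backward-compatibility lookup: prefer the proto-buf serialized index metadata attribute, and fall back to the legacy attribute when it is absent. A self-contained sketch of that pattern follows; the attribute names and the Map standing in for Scan attributes are hypothetical, not Phoenix's actual identifiers.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proto-vs-legacy attribute fallback seen in rebuildIndices():
// look up the proto-buf attribute first; if missing, fall back to the old
// attribute and record that the legacy format is in use.
public class IndexMetaDataLookup {
    static final String PROTO_ATTR = "_IndexProtoMd"; // stands in for PhoenixIndexCodec.INDEX_PROTO_MD
    static final String LEGACY_ATTR = "_IndexMd";     // hypothetical pre-proto attribute name

    final byte[] indexMetaData;
    final boolean useProto;

    IndexMetaDataLookup(Map<String, byte[]> scanAttrs) {
        byte[] md = scanAttrs.get(PROTO_ATTR);
        boolean proto = true;
        // For backward compatibility, fall back to the old attribute.
        if (md == null) {
            proto = false;
            md = scanAttrs.get(LEGACY_ATTR);
        }
        this.indexMetaData = md;
        this.useProto = proto;
    }

    public static void main(String[] args) {
        Map<String, byte[]> attrs = new HashMap<>();
        attrs.put(LEGACY_ATTR, new byte[] { 1 });
        System.out.println(new IndexMetaDataLookup(attrs).useProto); // prints false

        attrs.put(PROTO_ATTR, new byte[] { 2 });
        System.out.println(new IndexMetaDataLookup(attrs).useProto); // prints true
    }
}
```

Keeping both lookups lets newer servers accept requests from older clients that still set only the legacy attribute.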

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5478 IndexTool mapper task should not timeout

2019-10-24 Thread kadir
This is an automated email from the ASF dual-hosted git repository.

kadir pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 4a11397  PHOENIX-5478 IndexTool mapper task should not timeout
4a11397 is described below

commit 4a11397d3127d333b36ab9b6febd70dbb515d959
Author: Kadir 
AuthorDate: Wed Oct 23 22:38:13 2019 -0700

PHOENIX-5478 IndexTool mapper task should not timeout
---
 .../org/apache/phoenix/end2end/IndexToolIT.java|   2 +-
 .../coprocessor/BaseScannerRegionObserver.java |   2 +
 .../UngroupedAggregateRegionObserver.java  | 220 +++--
 .../apache/phoenix/index/GlobalIndexChecker.java   |   4 -
 .../PhoenixServerBuildIndexInputFormat.java|   2 +
 .../apache/phoenix/mapreduce/index/IndexTool.java  |  12 --
 .../org/apache/phoenix/query/QueryServices.java|   2 +
 .../apache/phoenix/query/QueryServicesOptions.java |   3 +-
 8 files changed, 130 insertions(+), 117 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 2f12ae9..8af5295 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -80,7 +80,6 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
-
 private final boolean localIndex;
 private final boolean mutable;
 private final boolean transactional;
@@ -118,6 +117,7 @@ public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 serverProps.put(QueryServices.MAX_SERVER_METADATA_CACHE_TIME_TO_LIVE_MS_ATTRIB, Long.toString(5));
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put(QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS, Long.toString(8));
 Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(2);
 clientProps.put(QueryServices.USE_STATS_FOR_PARALLELIZATION, Boolean.toString(true));
 clientProps.put(QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB, Long.toString(5));
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
index b73615f..cb4d0af 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
@@ -75,6 +75,8 @@ abstract public class BaseScannerRegionObserver extends BaseRegionObserver {
 public static final String GROUP_BY_LIMIT = "_GroupByLimit";
 public static final String LOCAL_INDEX = "_LocalIndex";
 public static final String LOCAL_INDEX_BUILD = "_LocalIndexBuild";
+// The number of index rows to be rebuilt in one RPC call
+public static final String INDEX_REBUILD_PAGING = "_IndexRebuildPaging";
 /* 
 * Attribute to denote that the index maintainer has been serialized using its proto-buf presentation.
 * Needed for backward compatibility purposes. TODO: get rid of this in next major release.
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 3a03f94..0a16a68 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -21,6 +21,7 @@ import static org.apache.phoenix.query.QueryConstants.AGG_TIMESTAMP;
 import static org.apache.phoenix.query.QueryConstants.SINGLE_COLUMN;
 import static org.apache.phoenix.query.QueryConstants.SINGLE_COLUMN_FAMILY;
 import static org.apache.phoenix.query.QueryConstants.UNGROUPED_AGG_ROW_KEY;
+import static org.apache.phoenix.query.QueryServices.INDEX_REBUILD_PAGE_SIZE_IN_ROWS;
 import static org.apache.phoenix.query.QueryServices.MUTATE_BATCH_SIZE_ATTRIB;
 import static org.apache.phoenix.query.QueryServices.MUTATE_BATCH_SIZE_BYTES_ATTRIB;
 import static org.apache.phoenix.schema.PTableImpl.getColumnsToClone;
@@ -1056,116 +1057,137 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 throw new RuntimeException(e);
 }
 }
-
-private RegionScanner rebuildIndices(final RegionScanner innerScanner, final Region region, final Scan scan,
-Configuration config) throws IOException {
-byte[] indexMetaData = scan.getAttribute(PhoenixIndexCodec.INDEX_PROTO_MD);
-boolean useProto = true;
-// for