[phoenix] branch master updated: PHOENIX-5269 use AccessChecker to check for user permisssions

2019-06-18 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 1f2508d  PHOENIX-5269 use AccessChecker to check for user permisssions
1f2508d is described below

commit 1f2508dbde365aaedac628c89df237e8b6b46df8
Author: Kiran Kumar Maturi 
AuthorDate: Mon Jun 17 16:42:49 2019 +0530

PHOENIX-5269 use AccessChecker to check for user permisssions
---
 .../apache/phoenix/end2end/PermissionsCacheIT.java | 107 +
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |   2 +
 .../coprocessor/PhoenixAccessController.java   |  77 +--
 .../PhoenixMetaDataCoprocessorHost.java|   5 +
 4 files changed, 185 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java
new file mode 100644
index 000..8d0c694
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertTrue;
+
+import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.AuthUtil;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.security.access.AccessControlLists;
+import org.apache.hadoop.hbase.security.access.Permission.Action;
+import org.apache.hadoop.hbase.security.access.TablePermission;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.hbase.thirdparty.com.google.common.collect.ListMultimap;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class PermissionsCacheIT extends BasePermissionsIT {
+
+    public PermissionsCacheIT() throws Exception {
+        super(true);
+    }
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        BasePermissionsIT.initCluster(true);
+    }
+
+    @Test
+    public void testPermissionsCachedWithAccessChecker() throws Throwable {
+        if (!isNamespaceMapped) {
+            return;
+        }
+        final String schema = generateUniqueName();
+        final String tableName = generateUniqueName();
+        final String phoenixTableName = SchemaUtil.getTableName(schema, tableName);
+        try (Connection conn = getConnection()) {
+            grantPermissions(regularUser1.getShortName(), PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES,
+                Action.READ, Action.EXEC);
+            grantPermissions(regularUser1.getShortName(), Collections.singleton("SYSTEM:SEQUENCE"),
+                Action.WRITE, Action.READ, Action.EXEC);
+            superUser1.runAs(new PrivilegedExceptionAction<Void>() {
+                @Override
+                public Void run() throws Exception {
+                    try {
+                        verifyAllowed(createSchema(schema), superUser1);
+                        grantPermissions(regularUser1.getShortName(), schema, Action.CREATE);
+                        grantPermissions(AuthUtil.toGroupEntry(GROUP_SYSTEM_ACCESS), schema,
+                            Action.CREATE);
+                    } catch (Throwable e) {
+                        if (e instanceof Exception) {
+                            throw (Exception) e;
+                        } else {
+                            throw new Exception(e);
+                        }
+                    }
+                    return null;
+                }
+            });
+            verifyAllowed(createTable(phoenixTableName), regularUser1);
+            HBaseTestingUtility utility = getUtility();
+            Configuration conf = utility.getConfiguration();
+            ZKWatcher zkw = 
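The test above rethrows caught failures in a pattern worth noting: `PrivilegedExceptionAction.run()` is declared to throw only `Exception`, so any other `Throwable` (such as an `AssertionError` from a failed verification) must be wrapped before rethrowing. A minimal, self-contained sketch of that rethrow pattern; the `AssertionError` here is a stand-in failure, not Phoenix code:

```java
import java.security.PrivilegedExceptionAction;

public class RethrowSketch {
    // run() may only throw Exception, so wrap anything else (Errors, etc.).
    static Exception asException(Throwable t) {
        return (t instanceof Exception) ? (Exception) t : new Exception(t);
    }

    public static void main(String[] args) {
        PrivilegedExceptionAction<Void> action = () -> {
            try {
                throw new AssertionError("boom"); // stand-in for test logic
            } catch (Throwable t) {
                throw asException(t);
            }
        };
        try {
            action.run();
        } catch (Exception e) {
            // The non-Exception Throwable arrives as the wrapper's cause.
            System.out.println(e.getCause() instanceof AssertionError); // prints true
        }
    }
}
```

The wrapping preserves the original failure as the cause, so the test framework can still report the underlying `Throwable`.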

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5269 use AccessChecker to check for user permisssions

2019-06-18 Thread tdsilva
This is an automated email from the ASF dual-hosted git repository.

tdsilva pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new eb8ac33  PHOENIX-5269 use AccessChecker to check for user permisssions
eb8ac33 is described below

commit eb8ac33029cd1ce781bf2f8b826502f642f735c5
Author: Kiran Kumar Maturi 
AuthorDate: Tue Jun 18 14:50:44 2019 +0530

PHOENIX-5269 use AccessChecker to check for user permisssions
---
 .../apache/phoenix/end2end/PermissionsCacheIT.java | 108 +
 .../coprocessor/PhoenixAccessController.java   |  91 +++--
 pom.xml|   2 +-
 3 files changed, 193 insertions(+), 8 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java
new file mode 100644
index 000..030c03f
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionsCacheIT.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertTrue;
+
+import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.AuthUtil;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.security.access.AccessControlLists;
+import org.apache.hadoop.hbase.security.access.Permission.Action;
+import org.apache.hadoop.hbase.security.access.TablePermission;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import com.google.common.collect.ListMultimap;
+
+public class PermissionsCacheIT extends BasePermissionsIT {
+
+    public PermissionsCacheIT() throws Exception {
+        super(true);
+    }
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        BasePermissionsIT.initCluster(true);
+    }
+
+    @Test
+    public void testPermissionsCachedWithAccessChecker() throws Throwable {
+        if (!isNamespaceMapped) {
+            return;
+        }
+        final String schema = generateUniqueName();
+        final String tableName = generateUniqueName();
+        final String phoenixTableName = SchemaUtil.getTableName(schema, tableName);
+        try (Connection conn = getConnection()) {
+            grantPermissions(regularUser1.getShortName(), PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES,
+                Action.READ, Action.EXEC);
+            grantPermissions(regularUser1.getShortName(), Collections.singleton("SYSTEM:SEQUENCE"),
+                Action.WRITE, Action.READ, Action.EXEC);
+            superUser1.runAs(new PrivilegedExceptionAction<Void>() {
+                @Override
+                public Void run() throws Exception {
+                    try {
+                        verifyAllowed(createSchema(schema), superUser1);
+                        grantPermissions(regularUser1.getShortName(), schema, Action.CREATE);
+                        grantPermissions(AuthUtil.toGroupEntry(GROUP_SYSTEM_ACCESS), schema,
+                            Action.CREATE);
+                    } catch (Throwable e) {
+                        if (e instanceof Exception) {
+                            throw (Exception) e;
+                        } else {
+                            throw new Exception(e);
+                        }
+                    }
+                    return null;
+                }
+            });
+            verifyAllowed(createTable(phoenixTableName), regularUser1);
+            HBaseTestingUtility utility = getUtility();
+            Configuration conf = utility.getConfiguration();
+            ZooKeeperWatcher zkw = HBaseTestingUtility.getZooKeeperWatcher(utility);
+            String aclZnodeParent = conf.get("zookeeper.znode.acl.parent", "acl");
+            String 
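The truncated tail of the test reads the location of HBase's ACL cache from configuration: permissions are mirrored under a ZooKeeper znode built from the base znode and the ACL parent. A hedged, self-contained sketch of that path computation; the `Map` stands in for an HBase `Configuration`, the `zookeeper.znode.acl.parent` key and `"acl"` default come from the visible `conf.get` call, and the `zookeeper.znode.parent` key with its `/hbase` default is an assumption:

```java
import java.util.HashMap;
import java.util.Map;

public class AclZnodePathSketch {
    // Minimal stand-in for Configuration.get(key, defaultValue).
    static String get(Map<String, String> conf, String key, String def) {
        return conf.getOrDefault(key, def);
    }

    // Join parent and child znode paths with a single slash.
    static String joinZNode(String parent, String child) {
        return parent + "/" + child;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>(); // nothing overridden
        String base = get(conf, "zookeeper.znode.parent", "/hbase");
        String aclParent = get(conf, "zookeeper.znode.acl.parent", "acl");
        System.out.println(joinZNode(base, aclParent)); // prints /hbase/acl
    }
}
```

With default settings this yields `/hbase/acl`, the znode under which the test would look for per-table permission entries.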

Build failed in Jenkins: Phoenix-4.x-HBase-1.4 #188

2019-06-18 Thread Apache Jenkins Server
See 


Changes:

[larsh] PHOENIX-5355 Speed up BaseIndexIT.

--
[...truncated 268.98 KB...]
[INFO] Excluding org.apache.hadoop:hadoop-yarn-server-common:jar:2.7.5 from the 
shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.7.5 
from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:2.7.5 
from the shaded jar.
[INFO] Including com.google.code.gson:gson:jar:2.2.4 in the shaded jar.
[INFO] Excluding com.google.inject:guice:jar:3.0 from the shaded jar.
[INFO] Excluding javax.inject:javax.inject:jar:1 from the shaded jar.
[INFO] Excluding aopalliance:aopalliance:jar:1.0 from the shaded jar.
[INFO] Including com.google.inject.extensions:guice-assistedinject:jar:3.0 in 
the shaded jar.
[INFO] Including org.apache.thrift:libthrift:jar:0.9.0 in the shaded jar.
[INFO] Including it.unimi.dsi:fastutil:jar:6.5.6 in the shaded jar.
[INFO] Including org.apache.twill:twill-common:jar:0.8.0 in the shaded jar.
[INFO] Including org.apache.twill:twill-core:jar:0.8.0 in the shaded jar.
[INFO] Including org.apache.twill:twill-api:jar:0.8.0 in the shaded jar.
[INFO] Excluding org.ow2.asm:asm-all:jar:5.0.2 from the shaded jar.
[INFO] Including org.apache.twill:twill-discovery-api:jar:0.8.0 in the shaded 
jar.
[INFO] Including org.apache.twill:twill-discovery-core:jar:0.8.0 in the shaded 
jar.
[INFO] Including org.apache.twill:twill-zookeeper:jar:0.8.0 in the shaded jar.
[INFO] Including io.dropwizard.metrics:metrics-core:jar:3.1.0 in the shaded jar.
[INFO] Replacing 

 with 

[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install-file (default-install) @ 
phoenix-server ---
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix-server/4.15.0-HBase-1.4-SNAPSHOT/phoenix-server-4.15.0-HBase-1.4-SNAPSHOT.jar
[INFO] Installing 
 
to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix-server/4.15.0-HBase-1.4-SNAPSHOT/phoenix-server-4.15.0-HBase-1.4-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
phoenix-server ---
[INFO] Skipping artifact installation
[INFO] 
[INFO] < org.apache.phoenix:phoenix-assembly >-
[INFO] Building Phoenix Assembly 4.15.0-HBase-1.4-SNAPSHOT[6/7]
[INFO] [ pom ]-
Downloading from central: 
https://repo.maven.apache.org/maven2/org/codehaus/mojo/exec-maven-plugin/maven-metadata.xml
Progress (1): 741 B   Downloaded from central: 
https://repo.maven.apache.org/maven2/org/codehaus/mojo/exec-maven-plugin/maven-metadata.xml
 (741 B at 15 kB/s)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ phoenix-assembly ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-checkstyle-plugin:2.13:check (validate) @ phoenix-assembly ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
phoenix-assembly ---
[INFO] 
[INFO] --- exec-maven-plugin:1.6.0:exec (Symlink to deprecated client jar name) 
@ phoenix-assembly ---
'phoenix-4.15.0-HBase-1.4-SNAPSHOT-client.jar' -> 
'phoenix-client-4.15.0-HBase-1.4-SNAPSHOT.jar'
[INFO] 
[INFO] --- maven-source-plugin:2.2.1:jar-no-fork (attach-sources) @ 
phoenix-assembly ---
[INFO] Skipping source per configuration.
[INFO] 
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ phoenix-assembly ---
[INFO] Skipping packaging of the test-jar
[INFO] 
[INFO] --- maven-site-plugin:3.7.1:attach-descriptor (attach-descriptor) @ 
phoenix-assembly ---
[INFO] No site descriptor found: nothing to attach.
[INFO] 
[INFO] --- maven-assembly-plugin:2.5.2:single (package-to-tar) @ 
phoenix-assembly ---
[INFO] Reading assembly descriptor: src/build/package-to-tar-all.xml
[WARNING] Cannot include project artifact: 
org.apache.phoenix:phoenix-assembly:pom:4.15.0-HBase-1.4-SNAPSHOT; it doesn't 
have an associated file or directory.
[WARNING] Cannot include project artifact: 
org.apache.phoenix:phoenix-assembly:pom:4.15.0-HBase-1.4-SNAPSHOT; it doesn't 
have an associated file or directory.
[WARNING] The following patterns were never triggered in this artifact 
inclusion filter:
o  'org.cloudera.htrace:htrace-core'
o  'org.apache.calcite:calcite-avatica*'
o  'javax.servlet:javax.servlet-api'

[WARNING] Cannot include project artifact: 

Apache-Phoenix | Master | Build Successful

2019-06-18 Thread Apache Jenkins Server
Master branch build status: Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[larsh] PHOENIX-5355 Speed up BaseIndexIT.



Build times for the last couple of runs (latest build time is the right-most). Legend: blue = normal, red = test failure, gray = timeout.


Jenkins build is back to normal : Phoenix-4.x-HBase-1.5 #52

2019-06-18 Thread Apache Jenkins Server
See 




Apache-Phoenix | Master | Build Successful

2019-06-18 Thread Apache Jenkins Server
Master branch build status: Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[larsh] PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.



Build times for the last couple of runs (latest build time is the right-most). Legend: blue = normal, red = test failure, gray = timeout.


Build failed in Jenkins: Phoenix-4.x-HBase-1.4 #187

2019-06-18 Thread Apache Jenkins Server
See 


Changes:

[larsh] PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.

--
[...truncated 268.58 KB...]
[INFO] Excluding org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.7.5 
from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-yarn-client:jar:2.7.5 from the shaded 
jar.
[INFO] Excluding org.apache.hadoop:hadoop-yarn-server-common:jar:2.7.5 from the 
shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.7.5 
from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:2.7.5 
from the shaded jar.
[INFO] Including com.google.code.gson:gson:jar:2.2.4 in the shaded jar.
[INFO] Excluding com.google.inject:guice:jar:3.0 from the shaded jar.
[INFO] Excluding javax.inject:javax.inject:jar:1 from the shaded jar.
[INFO] Excluding aopalliance:aopalliance:jar:1.0 from the shaded jar.
[INFO] Including com.google.inject.extensions:guice-assistedinject:jar:3.0 in 
the shaded jar.
[INFO] Including org.apache.thrift:libthrift:jar:0.9.0 in the shaded jar.
[INFO] Including it.unimi.dsi:fastutil:jar:6.5.6 in the shaded jar.
[INFO] Including org.apache.twill:twill-common:jar:0.8.0 in the shaded jar.
[INFO] Including org.apache.twill:twill-core:jar:0.8.0 in the shaded jar.
[INFO] Including org.apache.twill:twill-api:jar:0.8.0 in the shaded jar.
[INFO] Excluding org.ow2.asm:asm-all:jar:5.0.2 from the shaded jar.
[INFO] Including org.apache.twill:twill-discovery-api:jar:0.8.0 in the shaded 
jar.
[INFO] Including org.apache.twill:twill-discovery-core:jar:0.8.0 in the shaded 
jar.
[INFO] Including org.apache.twill:twill-zookeeper:jar:0.8.0 in the shaded jar.
[INFO] Including io.dropwizard.metrics:metrics-core:jar:3.1.0 in the shaded jar.
[INFO] Replacing 

 with 

[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install-file (default-install) @ 
phoenix-server ---
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix-server/4.15.0-HBase-1.4-SNAPSHOT/phoenix-server-4.15.0-HBase-1.4-SNAPSHOT.jar
[INFO] Installing 
 
to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix-server/4.15.0-HBase-1.4-SNAPSHOT/phoenix-server-4.15.0-HBase-1.4-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
phoenix-server ---
[INFO] Skipping artifact installation
[INFO] 
[INFO] < org.apache.phoenix:phoenix-assembly >-
[INFO] Building Phoenix Assembly 4.15.0-HBase-1.4-SNAPSHOT[6/7]
[INFO] [ pom ]-
Downloading from central: 
https://repo.maven.apache.org/maven2/org/codehaus/mojo/exec-maven-plugin/maven-metadata.xml
Progress (1): 741 B   Downloaded from central: 
https://repo.maven.apache.org/maven2/org/codehaus/mojo/exec-maven-plugin/maven-metadata.xml
 (741 B at 21 kB/s)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ phoenix-assembly ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-checkstyle-plugin:2.13:check (validate) @ phoenix-assembly ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
phoenix-assembly ---
[INFO] 
[INFO] --- exec-maven-plugin:1.6.0:exec (Symlink to deprecated client jar name) 
@ phoenix-assembly ---
'phoenix-4.15.0-HBase-1.4-SNAPSHOT-client.jar' -> 
'phoenix-client-4.15.0-HBase-1.4-SNAPSHOT.jar'
[INFO] 
[INFO] --- maven-source-plugin:2.2.1:jar-no-fork (attach-sources) @ 
phoenix-assembly ---
[INFO] Skipping source per configuration.
[INFO] 
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ phoenix-assembly ---
[INFO] Skipping packaging of the test-jar
[INFO] 
[INFO] --- maven-site-plugin:3.7.1:attach-descriptor (attach-descriptor) @ 
phoenix-assembly ---
[INFO] No site descriptor found: nothing to attach.
[INFO] 
[INFO] --- maven-assembly-plugin:2.5.2:single (package-to-tar) @ 
phoenix-assembly ---
[INFO] Reading assembly descriptor: src/build/package-to-tar-all.xml
[WARNING] Cannot include project artifact: 
org.apache.phoenix:phoenix-assembly:pom:4.15.0-HBase-1.4-SNAPSHOT; it doesn't 
have an associated file or directory.
[WARNING] Cannot include project artifact: 
org.apache.phoenix:phoenix-assembly:pom:4.15.0-HBase-1.4-SNAPSHOT; it doesn't 
have an associated file or directory.
[WARNING] The following patterns were 

Build failed in Jenkins: Phoenix-4.x-HBase-1.5 #51

2019-06-18 Thread Apache Jenkins Server
See 


Changes:

[larsh] PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.

--
[...truncated 123.68 KB...]
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.283 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.321 s 
- in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.642 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 167.653 
s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 232.969 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.868 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 621.336 
s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 180.72 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 609.482 
s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 352.791 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 66, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 471.646 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   StringIT.testValidStringConcatExpression:311 » PhoenixIO 
org.apache.phoenix.ex...
[INFO] 
[ERROR] Tests run: 3749, Failures: 0, Errors: 1, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.002 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.078 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.472 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.468 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.876 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.657 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.074 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.326 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 

[phoenix] branch master updated: PHOENIX-5355 Speed up BaseIndexIT.

2019-06-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 97f2d72  PHOENIX-5355 Speed up BaseIndexIT.
97f2d72 is described below

commit 97f2d72ef989e4870859fcd9fe2e68731258d5a5
Author: Lars Hofhansl 
AuthorDate: Tue Jun 18 16:32:26 2019 -0700

PHOENIX-5355 Speed up BaseIndexIT.
---
 .../org/apache/phoenix/end2end/CreateTableIT.java  | 62 +++
 .../apache/phoenix/end2end/index/BaseIndexIT.java  | 91 --
 2 files changed, 62 insertions(+), 91 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
index 054a218..96fa308 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
@@ -24,6 +25,7 @@ import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
@@ -33,14 +35,19 @@ import java.sql.Statement;
 import java.util.List;
 import java.util.Properties;
 
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory;
 import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.client.TableDescriptor;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
@@ -48,10 +55,12 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.ImmutableStorageScheme;
 import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTableKey;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.TableAlreadyExistsException;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Assert;
@@ -803,6 +812,59 @@ public class CreateTableIT extends ParallelStatsDisabledIT 
{
 }
 }
 
+    /**
+     * Ensure that HTD contains table priorities correctly.
+     */
+    @Test
+    public void testTableDescriptorPriority() throws SQLException, IOException {
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IND_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+        String fullIndexeName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+        // Check system tables priorities.
+        try (Admin admin = driver.getConnectionQueryServices(null, null).getAdmin();
+                Connection c = DriverManager.getConnection(getUrl())) {
+            ResultSet rs = c.getMetaData().getTables("",
+                "\"" + PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA + "\"",
+                null,
+                new String[] { PTableType.SYSTEM.toString() });
+            ReadOnlyProps p = c.unwrap(PhoenixConnection.class).getQueryServices().getProps();
+            while (rs.next()) {
+                String schemaName = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
+                String tName = rs.getString(PhoenixDatabaseMetaData.TABLE_NAME);
+                org.apache.hadoop.hbase.TableName hbaseTableName = SchemaUtil.getPhysicalTableName(SchemaUtil.getTableName(schemaName, tName), p);
+                TableDescriptor htd = admin.getDescriptor(hbaseTableName);
+                String val = htd.getValue("PRIORITY");
+                assertNotNull("PRIORITY is not set for table:" + htd, val);
+                assertTrue(Integer.parseInt(val)
+                    >= 
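The hunk above checks, for every Phoenix SYSTEM table, that the HBase table descriptor carries a `PRIORITY` value at least as high as the metadata RPC priority. A self-contained sketch of just that comparison; the `Map` stands in for the table descriptor, and the `2000` floor is a hypothetical value standing in for `PhoenixRpcSchedulerFactory.getMetadataPriority(config)`:

```java
import java.util.HashMap;
import java.util.Map;

public class TablePrioritySketch {
    // Hypothetical floor, standing in for the configured metadata RPC priority.
    static final int METADATA_PRIORITY = 2000;

    // Mirrors the assertion pattern: PRIORITY must be present and must not be
    // lower than the metadata priority, or metadata RPCs could be starved.
    static boolean hasValidPriority(Map<String, String> descriptorValues) {
        String val = descriptorValues.get("PRIORITY");
        return val != null && Integer.parseInt(val) >= METADATA_PRIORITY;
    }

    public static void main(String[] args) {
        Map<String, String> htd = new HashMap<>();
        htd.put("PRIORITY", "2001"); // as the system-table creation path would set it
        System.out.println(hasValidPriority(htd)); // prints true
    }
}
```

A descriptor with no `PRIORITY` value, or one below the floor, fails the check, which is exactly the regression the test guards against.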

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5355 Speed up BaseIndexIT.

2019-06-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 792c6f5  PHOENIX-5355 Speed up BaseIndexIT.
792c6f5 is described below

commit 792c6f5c1ea190380d66c43cef5c68175ae277fc
Author: Lars Hofhansl 
AuthorDate: Tue Jun 18 16:24:40 2019 -0700

PHOENIX-5355 Speed up BaseIndexIT.
---
 .../org/apache/phoenix/end2end/CreateTableIT.java  | 62 ++
 .../apache/phoenix/end2end/index/BaseIndexIT.java  | 98 --
 2 files changed, 62 insertions(+), 98 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
index fb6a0ce..1b2b8bd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
@@ -24,6 +25,7 @@ import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
@@ -34,12 +36,17 @@ import java.util.List;
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory;
 import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
@@ -47,10 +54,12 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.ImmutableStorageScheme;
 import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTableKey;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.TableAlreadyExistsException;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Assert;
@@ -802,6 +811,59 @@ public class CreateTableIT extends ParallelStatsDisabledIT 
{
 }
 }
 
+    /**
+     * Ensure that HTD contains table priorities correctly.
+     */
+    @Test
+    public void testTableDescriptorPriority() throws SQLException, IOException {
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IND_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+        String fullIndexeName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+        // Check system tables priorities.
+        try (HBaseAdmin admin = driver.getConnectionQueryServices(null, null).getAdmin();
+                Connection c = DriverManager.getConnection(getUrl())) {
+            ResultSet rs = c.getMetaData().getTables("",
+                    "\"" + PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA + "\"",
+                    null,
+                    new String[] {PTableType.SYSTEM.toString()});
+            ReadOnlyProps p = c.unwrap(PhoenixConnection.class).getQueryServices().getProps();
+            while (rs.next()) {
+                String schemaName = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
+                String tName = rs.getString(PhoenixDatabaseMetaData.TABLE_NAME);
+                org.apache.hadoop.hbase.TableName hbaseTableName =
+                        SchemaUtil.getPhysicalTableName(SchemaUtil.getTableName(schemaName, tName), p);
+                HTableDescriptor htd = admin.getTableDescriptor(hbaseTableName);
+                String val = htd.getValue("PRIORITY");
+                assertNotNull("PRIORITY is not set for table:" + htd, val);
+                assertTrue(Integer.parseInt(val) >= PhoenixRpcSchedulerFactory.getMetadataPriority(config));
+            }
+Properties props = 
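The loop in the test above reads each system table's HTableDescriptor and verifies that its PRIORITY attribute is set to at least the metadata RPC priority. The core comparison can be sketched standalone; the class and method names below are illustrative, not the Phoenix or HBase API:

```java
// Standalone sketch of the check the test performs: a table descriptor's
// PRIORITY value (stored as a string) must be present and must be at least
// the configured metadata priority. Names here are hypothetical.
public class PriorityCheck {
    static boolean meetsMetadataPriority(String priorityValue, int metadataPriority) {
        if (priorityValue == null) {
            return false; // PRIORITY was never set on the descriptor
        }
        return Integer.parseInt(priorityValue) >= metadataPriority;
    }

    public static void main(String[] args) {
        // A system table carrying a high priority passes; an unset one fails.
        System.out.println(meetsMetadataPriority("200", 100)); // prints "true"
        System.out.println(meetsMetadataPriority(null, 100));  // prints "false"
    }
}
```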

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5355 Speed up BaseIndexIT.

2019-06-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 235678e  PHOENIX-5355 Speed up BaseIndexIT.
235678e is described below

commit 235678e97108d49989ff776b47c96b136556dada
Author: Lars Hofhansl 
AuthorDate: Tue Jun 18 16:24:19 2019 -0700

PHOENIX-5355 Speed up BaseIndexIT.
---
 .../org/apache/phoenix/end2end/CreateTableIT.java  | 62 ++
 .../apache/phoenix/end2end/index/BaseIndexIT.java  | 98 --
 2 files changed, 62 insertions(+), 98 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
index fb6a0ce..1b2b8bd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
@@ -24,6 +25,7 @@ import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
@@ -34,12 +36,17 @@ import java.util.List;
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory;
 import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
@@ -47,10 +54,12 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.ImmutableStorageScheme;
 import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTableKey;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.TableAlreadyExistsException;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Assert;
@@ -802,6 +811,59 @@ public class CreateTableIT extends ParallelStatsDisabledIT {
 }
 }
 
+    /**
+     * Ensure that HTD contains table priorities correctly.
+     */
+    @Test
+    public void testTableDescriptorPriority() throws SQLException, IOException {
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IND_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+        String fullIndexeName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+        // Check system tables priorities.
+        try (HBaseAdmin admin = driver.getConnectionQueryServices(null, null).getAdmin();
+                Connection c = DriverManager.getConnection(getUrl())) {
+            ResultSet rs = c.getMetaData().getTables("",
+                    "\"" + PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA + "\"",
+                    null,
+                    new String[] {PTableType.SYSTEM.toString()});
+            ReadOnlyProps p = c.unwrap(PhoenixConnection.class).getQueryServices().getProps();
+            while (rs.next()) {
+                String schemaName = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
+                String tName = rs.getString(PhoenixDatabaseMetaData.TABLE_NAME);
+                org.apache.hadoop.hbase.TableName hbaseTableName =
+                        SchemaUtil.getPhysicalTableName(SchemaUtil.getTableName(schemaName, tName), p);
+                HTableDescriptor htd = admin.getTableDescriptor(hbaseTableName);
+                String val = htd.getValue("PRIORITY");
+                assertNotNull("PRIORITY is not set for table:" + htd, val);
+                assertTrue(Integer.parseInt(val) >= PhoenixRpcSchedulerFactory.getMetadataPriority(config));
+            }
+Properties props = 

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5355 Speed up BaseIndexIT.

2019-06-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 5c492f5  PHOENIX-5355 Speed up BaseIndexIT.
5c492f5 is described below

commit 5c492f579c7680be9d79f36e70ce30351756c17d
Author: Lars Hofhansl 
AuthorDate: Tue Jun 18 16:21:23 2019 -0700

PHOENIX-5355 Speed up BaseIndexIT.
---
 .../org/apache/phoenix/end2end/CreateTableIT.java  | 62 ++
 .../apache/phoenix/end2end/index/BaseIndexIT.java  | 98 --
 2 files changed, 62 insertions(+), 98 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
index fb6a0ce..1b2b8bd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
@@ -24,6 +25,7 @@ import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
@@ -34,12 +36,17 @@ import java.util.List;
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory;
 import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
@@ -47,10 +54,12 @@ import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.ImmutableStorageScheme;
 import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTableKey;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.TableAlreadyExistsException;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Assert;
@@ -802,6 +811,59 @@ public class CreateTableIT extends ParallelStatsDisabledIT {
 }
 }
 
+    /**
+     * Ensure that HTD contains table priorities correctly.
+     */
+    @Test
+    public void testTableDescriptorPriority() throws SQLException, IOException {
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IND_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+        String fullIndexeName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+        // Check system tables priorities.
+        try (HBaseAdmin admin = driver.getConnectionQueryServices(null, null).getAdmin();
+                Connection c = DriverManager.getConnection(getUrl())) {
+            ResultSet rs = c.getMetaData().getTables("",
+                    "\"" + PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA + "\"",
+                    null,
+                    new String[] {PTableType.SYSTEM.toString()});
+            ReadOnlyProps p = c.unwrap(PhoenixConnection.class).getQueryServices().getProps();
+            while (rs.next()) {
+                String schemaName = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
+                String tName = rs.getString(PhoenixDatabaseMetaData.TABLE_NAME);
+                org.apache.hadoop.hbase.TableName hbaseTableName =
+                        SchemaUtil.getPhysicalTableName(SchemaUtil.getTableName(schemaName, tName), p);
+                HTableDescriptor htd = admin.getTableDescriptor(hbaseTableName);
+                String val = htd.getValue("PRIORITY");
+                assertNotNull("PRIORITY is not set for table:" + htd, val);
+                assertTrue(Integer.parseInt(val) >= PhoenixRpcSchedulerFactory.getMetadataPriority(config));
+            }
+Properties props = 

Build failed in Jenkins: Phoenix-4.x-HBase-1.5 #50

2019-06-18 Thread Apache Jenkins Server
See 

--
[...truncated 171.15 KB...]
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.035 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.42 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 163.552 s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 234.551 s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.924 s - in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 598.787 s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 174.475 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 599.554 s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 352.505 s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 66, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 414.154 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   
RowValueConstructorIT.testQueryMoreWithLeadingPKColSkippedInRowValueConstructor:544->_testQueryMoreWithLeadingPKColSkippedInRowValueConstructor:584
 » PhoenixIO
[ERROR]   
RowValueConstructorIT.testQueryMoreWithLeadingPKColSkippedInRowValueConstructor_salted:549->_testQueryMoreWithLeadingPKColSkippedInRowValueConstructor:584
 » PhoenixIO
[ERROR]   
RowValueConstructorIT.testQueryMoreWithSubsetofPKColsInRowValueConstructor:472->_testQueryMoreWithSubsetofPKColsInRowValueConstructor:515
 » PhoenixIO
[ERROR]   
RowValueConstructorIT.testQueryMoreWithSubsetofPKColsInRowValueConstructor_salted:477->_testQueryMoreWithSubsetofPKColsInRowValueConstructor:515
 » PhoenixIO
[INFO] 
[ERROR] Tests run: 3749, Failures: 0, Errors: 4, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.001 s - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.109 s - in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.461 s - in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.561 s - in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.034 s - in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexBuildTimestampIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.409 s - in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.155 s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT

[phoenix] branch master updated: PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.

2019-06-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 486afae  PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.
486afae is described below

commit 486afae05cc85649973dbb52d6c96387c1ed81cb
Author: Lars Hofhansl 
AuthorDate: Tue Jun 18 15:47:24 2019 -0700

PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.
---
 .../src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
index eb1e6be..34c6fec 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
@@ -55,7 +55,7 @@ public abstract class SizeBoundQueue extends 
AbstractQueue implements Size
 return super.add(e);
 } catch (IllegalStateException ex) {
 throw new IllegalStateException(
-"Queue full. Consider increasing memory threshold or 
spooling to disk", ex);
+"Queue full. Consider increasing memory threshold or 
spooling to disk. Max size: " + maxSizeBytes + ", Current size: " + currentSize 
+ ", Number of elements:" + size(), ex);
 }
 }
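The one-line change above enriches the "queue full" exception with the queue's configured limit and current occupancy. A minimal standalone sketch of the same idea, using an illustrative byte-counting queue (not the Phoenix SizeBoundQueue class), shows why those extra fields make the failure diagnosable:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative size-bounded queue (a sketch, not the Phoenix class): it tracks
// an accumulated byte size and, as in PHOENIX-5357, reports the limit and the
// current occupancy when it rejects an element.
public class BoundedByteQueue {
    private final long maxSizeBytes;
    private long currentSize = 0;
    private final Queue<byte[]> delegate = new ArrayDeque<>();

    public BoundedByteQueue(long maxSizeBytes) {
        this.maxSizeBytes = maxSizeBytes;
    }

    public boolean add(byte[] e) {
        if (currentSize + e.length > maxSizeBytes) {
            // The enriched message makes the failure actionable without a debugger.
            throw new IllegalStateException(
                    "Queue full. Consider increasing memory threshold or spooling to disk."
                    + " Max size: " + maxSizeBytes
                    + ", Current size: " + currentSize
                    + ", Number of elements: " + delegate.size());
        }
        currentSize += e.length;
        return delegate.add(e);
    }

    public int size() {
        return delegate.size();
    }
}
```

An element that would push the tracked size past the limit is rejected with a message naming all three quantities, so a log line alone tells you whether the limit is too small or the consumer is too slow.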
 



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.

2019-06-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new a452a30  PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.
a452a30 is described below

commit a452a301570294a22a6d608bd4ba89d4a585939d
Author: Lars Hofhansl 
AuthorDate: Tue Jun 18 15:46:23 2019 -0700

PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.
---
 .../src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
index eb1e6be..34c6fec 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
@@ -55,7 +55,7 @@ public abstract class SizeBoundQueue extends 
AbstractQueue implements Size
 return super.add(e);
 } catch (IllegalStateException ex) {
 throw new IllegalStateException(
-"Queue full. Consider increasing memory threshold or 
spooling to disk", ex);
+"Queue full. Consider increasing memory threshold or 
spooling to disk. Max size: " + maxSizeBytes + ", Current size: " + currentSize 
+ ", Number of elements:" + size(), ex);
 }
 }
 



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.

2019-06-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 1d3a1a0  PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.
1d3a1a0 is described below

commit 1d3a1a020e3d880336b197dc65c58a92798d616f
Author: Lars Hofhansl 
AuthorDate: Tue Jun 18 15:45:56 2019 -0700

PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.
---
 .../src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
index eb1e6be..34c6fec 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
@@ -55,7 +55,7 @@ public abstract class SizeBoundQueue extends 
AbstractQueue implements Size
 return super.add(e);
 } catch (IllegalStateException ex) {
 throw new IllegalStateException(
-"Queue full. Consider increasing memory threshold or 
spooling to disk", ex);
+"Queue full. Consider increasing memory threshold or 
spooling to disk. Max size: " + maxSizeBytes + ", Current size: " + currentSize 
+ ", Number of elements:" + size(), ex);
 }
 }
 



[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.

2019-06-18 Thread larsh
This is an automated email from the ASF dual-hosted git repository.

larsh pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 446efd1  PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.
446efd1 is described below

commit 446efd1457f6d33e66a8dce1c4a3d10873164e26
Author: Lars Hofhansl 
AuthorDate: Tue Jun 18 15:45:14 2019 -0700

PHOENIX-5357 Display max size in exceptions thrown in SizeBoundQueue.
---
 .../src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
index eb1e6be..34c6fec 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
@@ -55,7 +55,7 @@ public abstract class SizeBoundQueue extends 
AbstractQueue implements Size
 return super.add(e);
 } catch (IllegalStateException ex) {
 throw new IllegalStateException(
-"Queue full. Consider increasing memory threshold or 
spooling to disk", ex);
+"Queue full. Consider increasing memory threshold or 
spooling to disk. Max size: " + maxSizeBytes + ", Current size: " + currentSize 
+ ", Number of elements:" + size(), ex);
 }
 }
 



Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1032

2019-06-18 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins124484674783930445.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386407
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98957636 kB
MemFree:32346252 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  986M  8.5G  11% /run
/dev/sda3   3.6T  421G  3.0T  13% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  236M  213M  53% /boot
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
tmpfs   9.5G 0  9.5G   0% /run/user/1000
/dev/loop9   90M   90M 0 100% /snap/core/6673
/dev/loop8   90M   90M 0 100% /snap/core/6818
/dev/loop13  55M   55M 0 100% /snap/lxd/10756
/dev/loop2   89M   89M 0 100% /snap/core/6964
/dev/loop11  57M   57M 0 100% /snap/snapcraft/3022
/dev/loop4   57M   57M 0 100% /snap/snapcraft/3059
/dev/loop7   55M   55M 0 100% /snap/lxd/10923
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure


Jenkins build is back to normal : Phoenix | Master #2418

2019-06-18 Thread Apache Jenkins Server
See