Apache-Phoenix | Master | Build Successful

2017-09-07 Thread Apache Jenkins Server
Master branch build status: Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[jtaylor] PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on

[elserj] PHOENIX-4168 Pluggable Remote User Extraction



Build times for the last couple of runs. Latest build time is the right-most | Legend: blue = normal, red = test failure, gray = timeout


Apache-Phoenix | 4.x-HBase-1.1 | Build Successful

2017-09-07 Thread Apache Jenkins Server
4.x-HBase-1.1 branch build status: Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.1

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/lastCompletedBuild/testReport/

Changes
[jtaylor] PHOENIX-4173 Ensure that the rebuild fails if an index that transitions

[jtaylor] PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible

[jtaylor] PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on

[elserj] PHOENIX-4168 Pluggable Remote User Extraction



Build times for the last couple of runs. Latest build time is the right-most | Legend: blue = normal, red = test failure, gray = timeout


Jenkins build is back to normal : Phoenix | Master #1779

2017-09-07 Thread Apache Jenkins Server
See 




Apache-Phoenix | 4.x-HBase-1.1 | Build Successful

2017-09-07 Thread Apache Jenkins Server
4.x-HBase-1.1 branch build status: Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.1

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/lastCompletedBuild/testReport/

Changes
[samarth] PHOENIX-4177 Convert TopNIT to extend ParallelStatsDisabledIT



Build times for the last couple of runs. Latest build time is the right-most | Legend: blue = normal, red = test failure, gray = timeout


[4/4] phoenix git commit: PHOENIX-4168 Pluggable Remote User Extraction

2017-09-07 Thread elserj
PHOENIX-4168 Pluggable Remote User Extraction

Adds factory for creating RemoteUserExtractor instances. The factory
can be overridden using ServiceLoader. The default factory creates
PhoenixRemoteUserExtractor instances.

Signed-off-by: Josh Elser 
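
For readers of this change: a deployment can swap in its own extractor by implementing the new factory and registering it for ServiceLoader discovery, e.g. via a META-INF/services/org.apache.phoenix.queryserver.server.RemoteUserExtractorFactory file on the query server classpath. A minimal sketch, assuming Avatica's RemoteUserExtractor contract is a single extract(HttpServletRequest) method; the package, class name, header name, and extraction logic below are illustrative, not part of this commit:

package com.example.queryserver;  // hypothetical package

import javax.servlet.http.HttpServletRequest;

import org.apache.calcite.avatica.server.RemoteUserExtractionException;
import org.apache.calcite.avatica.server.RemoteUserExtractor;
import org.apache.hadoop.conf.Configuration;
import org.apache.phoenix.queryserver.server.RemoteUserExtractorFactory;

public class HeaderRemoteUserExtractorFactory implements RemoteUserExtractorFactory {
    @Override
    public RemoteUserExtractor createRemoteUserExtractor(Configuration conf) {
        return new RemoteUserExtractor() {
            @Override
            public String extract(HttpServletRequest request) throws RemoteUserExtractionException {
                // Illustrative: trust a fronting proxy to pass the end user in a header.
                String user = request.getHeader("X-Remote-User");
                if (user == null) {
                    throw new RemoteUserExtractionException("no X-Remote-User header");
                }
                return user;
            }
        };
    }
}

Because QueryServer resolves the factory through InstanceResolver.getSingleton(RemoteUserExtractorFactory.class, DEFAULT_USER_EXTRACTOR), a registered implementation takes precedence and the stock PhoenixRemoteUserExtractor remains only the fallback. The extractor is wired up only when the QueryServices.QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR_ATTRIB flag checked in setRemoteUserExtractorIfNecessary is enabled; the concrete property key lives in QueryServices for your Phoenix version.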


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c260846a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c260846a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c260846a

Branch: refs/heads/4.x-HBase-0.98
Commit: c260846a1d20f2392b5d2f6cdc02bb006d9db0af
Parents: 6380265
Author: Alex Araujo 
Authored: Wed Sep 6 13:33:27 2017 -0400
Committer: Josh Elser 
Committed: Thu Sep 7 15:15:55 2017 -0400

--
 .../phoenix/queryserver/server/QueryServer.java | 13 ++-
 .../server/RemoteUserExtractorFactory.java  | 36 
 .../server/RemoteUserExtractorFactoryTest.java  | 35 +++
 3 files changed, 83 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c260846a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
--
diff --git a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
index 21eb2ef..288e4f5 100644
--- a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
+++ b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
@@ -51,6 +51,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf;
 import org.apache.phoenix.queryserver.register.Registry;
+import org.apache.phoenix.util.InstanceResolver;
 
 import java.io.File;
 import java.lang.management.ManagementFactory;
@@ -373,10 +374,20 @@ public final class QueryServer extends Configured implements Tool, Runnable {
   public void setRemoteUserExtractorIfNecessary(HttpServer.Builder builder, Configuration conf) {
     if (conf.getBoolean(QueryServices.QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR_ATTRIB,
         QueryServicesOptions.DEFAULT_QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR)) {
-      builder.withRemoteUserExtractor(new PhoenixRemoteUserExtractor(conf));
+      builder.withRemoteUserExtractor(createRemoteUserExtractor(conf));
     }
   }
 
+  private static final RemoteUserExtractorFactory DEFAULT_USER_EXTRACTOR =
+      new RemoteUserExtractorFactory.RemoteUserExtractorFactoryImpl();
+
+  @VisibleForTesting
+  RemoteUserExtractor createRemoteUserExtractor(Configuration conf) {
+    RemoteUserExtractorFactory factory =
+        InstanceResolver.getSingleton(RemoteUserExtractorFactory.class, DEFAULT_USER_EXTRACTOR);
+    return factory.createRemoteUserExtractor(conf);
+  }
+
   /**
    * Use the correct way to extract the end user.
    */

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c260846a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
--
diff --git a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
new file mode 100644
index 0000000..ff5e0d2
--- /dev/null
+++ b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.queryserver.server;
+
+import org.apache.calcite.avatica.server.RemoteUserExtractor;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Creates remote user extractors.
+ */
+public interface RemoteUserExtractorFactory {
+
+  

[2/4] phoenix git commit: PHOENIX-4168 Pluggable Remote User Extraction

2017-09-07 Thread elserj
PHOENIX-4168 Pluggable Remote User Extraction

Adds factory for creating RemoteUserExtractor instances. The factory
can be overridden using ServiceLoader. The default factory creates
PhoenixRemoteUserExtractor instances.

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bdb6a14a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bdb6a14a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bdb6a14a

Branch: refs/heads/4.x-HBase-1.2
Commit: bdb6a14a11017936ce23d6762f97d1475324cbfd
Parents: 5cf07c4
Author: Alex Araujo 
Authored: Wed Sep 6 13:33:27 2017 -0400
Committer: Josh Elser 
Committed: Thu Sep 7 14:50:20 2017 -0400

--
 .../phoenix/queryserver/server/QueryServer.java | 13 ++-
 .../server/RemoteUserExtractorFactory.java  | 36 
 .../server/RemoteUserExtractorFactoryTest.java  | 35 +++
 3 files changed, 83 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bdb6a14a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
--
diff --git a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
index 21eb2ef..288e4f5 100644
--- a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
+++ b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
@@ -51,6 +51,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf;
 import org.apache.phoenix.queryserver.register.Registry;
+import org.apache.phoenix.util.InstanceResolver;
 
 import java.io.File;
 import java.lang.management.ManagementFactory;
@@ -373,10 +374,20 @@ public final class QueryServer extends Configured implements Tool, Runnable {
   public void setRemoteUserExtractorIfNecessary(HttpServer.Builder builder, Configuration conf) {
     if (conf.getBoolean(QueryServices.QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR_ATTRIB,
         QueryServicesOptions.DEFAULT_QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR)) {
-      builder.withRemoteUserExtractor(new PhoenixRemoteUserExtractor(conf));
+      builder.withRemoteUserExtractor(createRemoteUserExtractor(conf));
     }
   }
 
+  private static final RemoteUserExtractorFactory DEFAULT_USER_EXTRACTOR =
+      new RemoteUserExtractorFactory.RemoteUserExtractorFactoryImpl();
+
+  @VisibleForTesting
+  RemoteUserExtractor createRemoteUserExtractor(Configuration conf) {
+    RemoteUserExtractorFactory factory =
+        InstanceResolver.getSingleton(RemoteUserExtractorFactory.class, DEFAULT_USER_EXTRACTOR);
+    return factory.createRemoteUserExtractor(conf);
+  }
+
   /**
    * Use the correct way to extract the end user.
    */

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bdb6a14a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
--
diff --git a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
new file mode 100644
index 0000000..ff5e0d2
--- /dev/null
+++ b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.queryserver.server;
+
+import org.apache.calcite.avatica.server.RemoteUserExtractor;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Creates remote user extractors.
+ */
+public interface RemoteUserExtractorFactory {
+
+  

[3/4] phoenix git commit: PHOENIX-4168 Pluggable Remote User Extraction

2017-09-07 Thread elserj
PHOENIX-4168 Pluggable Remote User Extraction

Adds factory for creating RemoteUserExtractor instances. The factory
can be overridden using ServiceLoader. The default factory creates
PhoenixRemoteUserExtractor instances.

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/00d42376
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/00d42376
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/00d42376

Branch: refs/heads/4.x-HBase-1.1
Commit: 00d42376bf50c0bd0708f753395a2f2908c9272d
Parents: 258f47d
Author: Alex Araujo 
Authored: Wed Sep 6 13:33:27 2017 -0400
Committer: Josh Elser 
Committed: Thu Sep 7 15:03:52 2017 -0400

--
 .../phoenix/queryserver/server/QueryServer.java | 13 ++-
 .../server/RemoteUserExtractorFactory.java  | 36 
 .../server/RemoteUserExtractorFactoryTest.java  | 35 +++
 3 files changed, 83 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/00d42376/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
--
diff --git a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
index 21eb2ef..288e4f5 100644
--- a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
+++ b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
@@ -51,6 +51,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf;
 import org.apache.phoenix.queryserver.register.Registry;
+import org.apache.phoenix.util.InstanceResolver;
 
 import java.io.File;
 import java.lang.management.ManagementFactory;
@@ -373,10 +374,20 @@ public final class QueryServer extends Configured implements Tool, Runnable {
   public void setRemoteUserExtractorIfNecessary(HttpServer.Builder builder, Configuration conf) {
     if (conf.getBoolean(QueryServices.QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR_ATTRIB,
         QueryServicesOptions.DEFAULT_QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR)) {
-      builder.withRemoteUserExtractor(new PhoenixRemoteUserExtractor(conf));
+      builder.withRemoteUserExtractor(createRemoteUserExtractor(conf));
     }
   }
 
+  private static final RemoteUserExtractorFactory DEFAULT_USER_EXTRACTOR =
+      new RemoteUserExtractorFactory.RemoteUserExtractorFactoryImpl();
+
+  @VisibleForTesting
+  RemoteUserExtractor createRemoteUserExtractor(Configuration conf) {
+    RemoteUserExtractorFactory factory =
+        InstanceResolver.getSingleton(RemoteUserExtractorFactory.class, DEFAULT_USER_EXTRACTOR);
+    return factory.createRemoteUserExtractor(conf);
+  }
+
   /**
    * Use the correct way to extract the end user.
    */

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00d42376/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
--
diff --git a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
new file mode 100644
index 0000000..ff5e0d2
--- /dev/null
+++ b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.queryserver.server;
+
+import org.apache.calcite.avatica.server.RemoteUserExtractor;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Creates remote user extractors.
+ */
+public interface RemoteUserExtractorFactory {
+
+  

[1/4] phoenix git commit: PHOENIX-4168 Pluggable Remote User Extraction

2017-09-07 Thread elserj
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 63802652c -> c260846a1
  refs/heads/4.x-HBase-1.1 258f47d68 -> 00d42376b
  refs/heads/4.x-HBase-1.2 5cf07c4ce -> bdb6a14a1
  refs/heads/master 64b808971 -> 5a21734f1


PHOENIX-4168 Pluggable Remote User Extraction

Adds factory for creating RemoteUserExtractor instances. The factory
can be overridden using ServiceLoader. The default factory creates
PhoenixRemoteUserExtractor instances.

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5a21734f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5a21734f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5a21734f

Branch: refs/heads/master
Commit: 5a21734f10a90fa8de0dce390aedc8edeb52b26c
Parents: 64b8089
Author: Alex Araujo 
Authored: Wed Sep 6 13:33:27 2017 -0400
Committer: Josh Elser 
Committed: Thu Sep 7 14:46:11 2017 -0400

--
 .../phoenix/queryserver/server/QueryServer.java | 13 ++-
 .../server/RemoteUserExtractorFactory.java  | 36 
 .../server/RemoteUserExtractorFactoryTest.java  | 35 +++
 3 files changed, 83 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5a21734f/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
--
diff --git a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
index 21eb2ef..288e4f5 100644
--- a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
+++ b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
@@ -51,6 +51,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf;
 import org.apache.phoenix.queryserver.register.Registry;
+import org.apache.phoenix.util.InstanceResolver;
 
 import java.io.File;
 import java.lang.management.ManagementFactory;
@@ -373,10 +374,20 @@ public final class QueryServer extends Configured implements Tool, Runnable {
   public void setRemoteUserExtractorIfNecessary(HttpServer.Builder builder, Configuration conf) {
     if (conf.getBoolean(QueryServices.QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR_ATTRIB,
         QueryServicesOptions.DEFAULT_QUERY_SERVER_WITH_REMOTEUSEREXTRACTOR)) {
-      builder.withRemoteUserExtractor(new PhoenixRemoteUserExtractor(conf));
+      builder.withRemoteUserExtractor(createRemoteUserExtractor(conf));
     }
   }
 
+  private static final RemoteUserExtractorFactory DEFAULT_USER_EXTRACTOR =
+      new RemoteUserExtractorFactory.RemoteUserExtractorFactoryImpl();
+
+  @VisibleForTesting
+  RemoteUserExtractor createRemoteUserExtractor(Configuration conf) {
+    RemoteUserExtractorFactory factory =
+        InstanceResolver.getSingleton(RemoteUserExtractorFactory.class, DEFAULT_USER_EXTRACTOR);
+    return factory.createRemoteUserExtractor(conf);
+  }
+
   /**
    * Use the correct way to extract the end user.
    */

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5a21734f/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
--
diff --git a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
new file mode 100644
index 0000000..ff5e0d2
--- /dev/null
+++ b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/RemoteUserExtractorFactory.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package 

phoenix git commit: PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible (addendum)

2017-09-07 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 98b9e3f4b -> 63802652c


PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/63802652
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/63802652
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/63802652

Branch: refs/heads/4.x-HBase-0.98
Commit: 63802652c78c971f93ffbf1a6cdcf3630d9c6849
Parents: 98b9e3f
Author: James Taylor 
Authored: Thu Sep 7 11:41:30 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 11:41:30 2017 -0700

--
 .../apache/phoenix/end2end/UpsertSelectIT.java  | 23 ++--
 1 file changed, 16 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/63802652/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
index e9a514b..aab8729 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
@@ -77,20 +77,30 @@ public class UpsertSelectIT extends ParallelStatsDisabledIT {
 
     @Test
     public void testUpsertSelectWithNoIndex() throws Exception {
-        testUpsertSelect(false);
+        testUpsertSelect(false, false);
     }
 
     @Test
     public void testUpsertSelecWithIndex() throws Exception {
-        testUpsertSelect(true);
+        testUpsertSelect(true, false);
     }
 
-    private void testUpsertSelect(boolean createIndex) throws Exception {
+    @Test
+    public void testUpsertSelecWithIndexWithSalt() throws Exception {
+        testUpsertSelect(true, true);
+    }
+
+    @Test
+    public void testUpsertSelecWithNoIndexWithSalt() throws Exception {
+        testUpsertSelect(false, true);
+    }
+
+    private void testUpsertSelect(boolean createIndex, boolean saltTable) throws Exception {
         long ts = nextTimestamp();
         String tenantId = getOrganizationId();
        byte[][] splits = getDefaultSplits(tenantId);
         Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-        String aTable = initATableValues(tenantId, splits, null, ts-1, getUrl(), null);
+        String aTable = initATableValues(tenantId, saltTable ? null : splits, null, ts-1, getUrl(), saltTable ? "salt_buckets = 2" : null);
 
         String customEntityTable = generateUniqueName();
         props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts - 1));
@@ -122,12 +132,11 @@ public class UpsertSelectIT extends ParallelStatsDisabledIT {
                 "b.val7 varchar,\n" +
                 "b.val8 varchar,\n" +
                 "b.val9 varchar\n" +
-                "CONSTRAINT pk PRIMARY KEY (organization_id, key_prefix, custom_entity_data_id))";
+                "CONSTRAINT pk PRIMARY KEY (organization_id, key_prefix, custom_entity_data_id)) " + (saltTable ? "salt_buckets = 2" : "");
         conn.createStatement().execute(ddl);
         conn.close();
 
         String indexName = generateUniqueName();
-
         if (createIndex) {
             props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts)); // Execute at timestamp 1
             conn = DriverManager.getConnection(getUrl(), props);
@@ -1526,4 +1535,4 @@ public class UpsertSelectIT extends ParallelStatsDisabledIT {
         return DriverManager.getConnection(getUrl(), props);
     }
 
-}
+}
\ No newline at end of file
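
The addendum above threads a salted-table variant through testUpsertSelect via the "salt_buckets = 2" table option. For reference, salting is purely a DDL-time option; a minimal standalone JDBC sketch (the connection URL and table name are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;

public class SaltedTableExample {
    public static void main(String[] args) throws Exception {
        // URL is illustrative; point it at your cluster's ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            // SALT_BUCKETS pre-splits the table and prefixes each row key with a
            // one-byte hash, spreading sequential writes across region servers.
            conn.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS EXAMPLE_T (K VARCHAR PRIMARY KEY, V VARCHAR) SALT_BUCKETS = 2");
            conn.createStatement().execute("UPSERT INTO EXAMPLE_T VALUES ('a', 'x')");
            conn.commit();
        }
    }
}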



[1/3] phoenix git commit: PHOENIX-4173 Ensure that the rebuild fails if an index transitions back to disabled while rebuilding

2017-09-07 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 9ecb193f2 -> 258f47d68


PHOENIX-4173 Ensure that the rebuild fails if an index transitions back to disabled while rebuilding


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ed994128
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ed994128
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ed994128

Branch: refs/heads/4.x-HBase-1.1
Commit: ed994128a9b50b710a37c5f62adc9c47922fcd64
Parents: 9ecb193
Author: James Taylor 
Authored: Wed Sep 6 12:46:34 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 11:36:14 2017 -0700

--
 .../end2end/index/PartialIndexRebuilderIT.java  | 151 ++-
 1 file changed, 143 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ed994128/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
index cacf0fa..067f50f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
@@ -21,6 +21,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
@@ -30,7 +31,7 @@ import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
@@ -38,10 +39,13 @@ import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.BaseUniqueNamesOwnClusterIT;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
 import org.apache.phoenix.schema.PMetaData;
 import org.apache.phoenix.schema.PTable;
@@ -634,6 +638,94 @@ public class PartialIndexRebuilderIT extends BaseUniqueNamesOwnClusterIT {
         }
     }
 
+    private final static CountDownLatch WAIT_FOR_REBUILD_TO_START = new CountDownLatch(1);
+    private final static CountDownLatch WAIT_FOR_INDEX_WRITE = new CountDownLatch(1);
+
+
+    @Test
+    public void testDisableIndexDuringRebuild() throws Throwable {
+        String schemaName = generateUniqueName();
+        String tableName = generateUniqueName();
+        String indexName = generateUniqueName();
+        final String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        final String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        PTableKey key = new PTableKey(null,fullTableName);
+        final MyClock clock = new MyClock(1000);
+        EnvironmentEdgeManager.injectEdge(clock);
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            PMetaData metaCache = conn.unwrap(PhoenixConnection.class).getMetaDataCache();
+            conn.createStatement().execute("CREATE TABLE " + fullTableName + "(k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR, v3 VARCHAR) COLUMN_ENCODED_BYTES = 0, STORE_NULLS=true");
+            clock.time += 100;
+            conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + fullTableName + " (v1, v2) INCLUDE (v3)");
+            clock.time += 100;
+            conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES('a','a','0','x')");
+            conn.commit();
+            clock.time += 100;
+            try (HTableInterface metaTable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES)) {
+                // By using an INDEX_DISABLE_TIMESTAMP of 0, we prevent the partial index rebuilder from triggering
+
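
The test above drives the rebuilder's notion of time through EnvironmentEdgeManager.injectEdge(clock) and manual clock.time += 100 bumps. A minimal sketch of such an injectable clock follows; the import locations and the exact EnvironmentEdge method name (currentTime() vs. currentTimeMillis()) depend on the Phoenix/HBase version, so treat both as assumptions:

import org.apache.phoenix.util.EnvironmentEdge;        // assumed location
import org.apache.phoenix.util.EnvironmentEdgeManager;

// Sketch only: a controllable clock for EnvironmentEdgeManager.injectEdge().
private static class MyClock extends EnvironmentEdge {
    public volatile long time;

    public MyClock(long time) {
        this.time = time;
    }

    @Override
    public long currentTime() {
        return time;
    }
}

Once injected, every component that asks EnvironmentEdgeManager for the time sees the controlled value, which is what lets the test position the index-disable timestamp precisely relative to the rebuild.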

[3/3] phoenix git commit: PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum 2)

2017-09-07 Thread jamestaylor
PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum 2)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/258f47d6
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/258f47d6
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/258f47d6

Branch: refs/heads/4.x-HBase-1.1
Commit: 258f47d682b184ab37b9beff9a35c15de4af6a7c
Parents: bdbfc85
Author: James Taylor 
Authored: Thu Sep 7 11:26:47 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 11:36:25 2017 -0700

--
 .../UngroupedAggregateRegionObserver.java   | 62 ++-
 .../org/apache/phoenix/hbase/index/Indexer.java | 42 +-
 .../stats/DefaultStatisticsCollector.java   | 83 ++--
 .../schema/stats/NoOpStatisticsCollector.java   |  2 +-
 .../schema/stats/StatisticsCollector.java   |  2 +-
 5 files changed, 111 insertions(+), 80 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/258f47d6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 31c83e4..a61f502 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -55,9 +55,12 @@ import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
@@ -69,11 +72,14 @@ import org.apache.hadoop.hbase.regionserver.Region;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.regionserver.ScanType;
 import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.io.WritableUtils;
 import org.apache.phoenix.cache.ServerCacheClient;
+import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.exception.DataExceedsCapacityException;
 import org.apache.phoenix.execute.TupleProjector;
@@ -89,11 +95,13 @@ import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.join.HashJoinInfo;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnFamilyNotFoundException;
 import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PIndexState;
 import org.apache.phoenix.schema.PRow;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
@@ -899,6 +907,58 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
         });
     }
 
+    @Override
+    public void postCompact(final ObserverContext<RegionCoprocessorEnvironment> e, final Store store,
+            final StoreFile resultFile, CompactionRequest request) throws IOException {
+        // If we're compacting all files, then delete markers are removed
+        // and we must permanently disable an index that needs to be
+        // partially rebuilt, because we're potentially losing the information
+        // we need to successfully rebuild it.
+        if (request.isAllFiles() || request.isMajor()) {
+            // Compaction and split upcalls run with the effective user context of the requesting user.
+            // This will lead to failure of cross cluster RPC if the effective user is not
+            // the login user. Switch to the login user context to 
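
The comment truncated above refers to re-running the metadata update under the login user. A minimal sketch of that pattern, assuming HBase's User.runAsLoginUser helper; the action body is illustrative, not the commit's actual code:

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.hbase.security.User;

// Illustrative only: perform privileged work as the login user so that
// cross-cluster RPCs do not fail under the requesting user's context.
User.runAsLoginUser(new PrivilegedExceptionAction<Void>() {
    @Override
    public Void run() throws Exception {
        // e.g. update SYSTEM.CATALOG to mark the index DISABLED and
        // clear its INDEX_DISABLE_TIMESTAMP
        return null;
    }
});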

[2/3] phoenix git commit: PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible

2017-09-07 Thread jamestaylor
PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bdbfc852
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bdbfc852
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bdbfc852

Branch: refs/heads/4.x-HBase-1.1
Commit: bdbfc8525031c5ed372c7bd5d539b2ff312d7448
Parents: ed99412
Author: James Taylor 
Authored: Wed Sep 6 18:05:42 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 11:36:21 2017 -0700

--
 .../apache/phoenix/end2end/CreateSchemaIT.java  | 26 +++
 .../phoenix/end2end/CustomEntityDataIT.java | 75 
 .../apache/phoenix/end2end/UpsertSelectIT.java  | 42 +--
 3 files changed, 90 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bdbfc852/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
index 09cd810..fe09dcd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
@@ -30,41 +30,31 @@ import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
-import org.apache.phoenix.schema.NewerSchemaAlreadyExistsException;
 import org.apache.phoenix.schema.SchemaAlreadyExistsException;
-import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
-public class CreateSchemaIT extends BaseClientManagedTimeIT {
+public class CreateSchemaIT extends ParallelStatsDisabledIT {
 
     @Test
     public void testCreateSchema() throws Exception {
-        long ts = nextTimestamp();
-        Properties props = new Properties();
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts));
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
         props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
-        String ddl = "CREATE SCHEMA TEST_SCHEMA";
+        String schemaName = generateUniqueName();
+        String ddl = "CREATE SCHEMA " + schemaName;
         try (Connection conn = DriverManager.getConnection(getUrl(), props);
                 HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
             conn.createStatement().execute(ddl);
-            assertNotNull(admin.getNamespaceDescriptor("TEST_SCHEMA"));
+            assertNotNull(admin.getNamespaceDescriptor(schemaName));
         }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 10));
-        try (Connection conn = DriverManager.getConnection(getUrl(), props);) {
-            conn.createStatement().execute(ddl);
-            fail();
-        } catch (SchemaAlreadyExistsException e) {
-            // expected
-        }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts - 20));
         try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
             conn.createStatement().execute(ddl);
             fail();
-        } catch (NewerSchemaAlreadyExistsException e) {
+        } catch (SchemaAlreadyExistsException e) {
             // expected
         }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 50));
         Connection conn = DriverManager.getConnection(getUrl(), props);
         try {
             conn.createStatement().execute("CREATE SCHEMA " + SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE);
http://git-wip-us.apache.org/repos/asf/phoenix/blob/bdbfc852/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
index ad0f308..4af2c5c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
@@ -17,7 +17,6 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.apache.phoenix.util.TestUtil.CUSTOM_ENTITY_DATA_FULL_NAME;
 import static org.apache.phoenix.util.TestUtil.ROW2;
 import static 

[2/3] phoenix git commit: PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible

2017-09-07 Thread jamestaylor
PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/aea61062
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/aea61062
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/aea61062

Branch: refs/heads/4.x-HBase-1.2
Commit: aea6106284bbf565a521e4e211b090525dec5129
Parents: 3c5e48d
Author: James Taylor 
Authored: Wed Sep 6 18:05:42 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 11:34:35 2017 -0700

--
 .../apache/phoenix/end2end/CreateSchemaIT.java  | 26 +++
 .../phoenix/end2end/CustomEntityDataIT.java | 75 
 .../apache/phoenix/end2end/UpsertSelectIT.java  | 42 +--
 3 files changed, 90 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/aea61062/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
index 09cd810..fe09dcd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
@@ -30,41 +30,31 @@ import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
-import org.apache.phoenix.schema.NewerSchemaAlreadyExistsException;
 import org.apache.phoenix.schema.SchemaAlreadyExistsException;
-import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
-public class CreateSchemaIT extends BaseClientManagedTimeIT {
+public class CreateSchemaIT extends ParallelStatsDisabledIT {
 
     @Test
     public void testCreateSchema() throws Exception {
-        long ts = nextTimestamp();
-        Properties props = new Properties();
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts));
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
         props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
-        String ddl = "CREATE SCHEMA TEST_SCHEMA";
+        String schemaName = generateUniqueName();
+        String ddl = "CREATE SCHEMA " + schemaName;
         try (Connection conn = DriverManager.getConnection(getUrl(), props);
                 HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
             conn.createStatement().execute(ddl);
-            assertNotNull(admin.getNamespaceDescriptor("TEST_SCHEMA"));
+            assertNotNull(admin.getNamespaceDescriptor(schemaName));
         }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 10));
-        try (Connection conn = DriverManager.getConnection(getUrl(), props);) {
-            conn.createStatement().execute(ddl);
-            fail();
-        } catch (SchemaAlreadyExistsException e) {
-            // expected
-        }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts - 20));
         try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
             conn.createStatement().execute(ddl);
             fail();
-        } catch (NewerSchemaAlreadyExistsException e) {
+        } catch (SchemaAlreadyExistsException e) {
             // expected
         }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 50));
         Connection conn = DriverManager.getConnection(getUrl(), props);
         try {
             conn.createStatement().execute("CREATE SCHEMA " + SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/aea61062/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
index ad0f308..4af2c5c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
@@ -17,7 +17,6 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.apache.phoenix.util.TestUtil.CUSTOM_ENTITY_DATA_FULL_NAME;
 import static org.apache.phoenix.util.TestUtil.ROW2;
 import static 

[3/3] phoenix git commit: PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum 2)

2017-09-07 Thread jamestaylor
PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum 2)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5cf07c4c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5cf07c4c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5cf07c4c

Branch: refs/heads/4.x-HBase-1.2
Commit: 5cf07c4ce64174241e0c311c6f9e1905374aaeca
Parents: aea6106
Author: James Taylor 
Authored: Thu Sep 7 11:26:47 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 11:34:53 2017 -0700

--
 .../UngroupedAggregateRegionObserver.java   | 62 ++-
 .../org/apache/phoenix/hbase/index/Indexer.java | 42 +-
 .../stats/DefaultStatisticsCollector.java   | 83 ++--
 .../schema/stats/NoOpStatisticsCollector.java   |  2 +-
 .../schema/stats/StatisticsCollector.java   |  2 +-
 5 files changed, 111 insertions(+), 80 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5cf07c4c/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 31c83e4..a61f502 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -55,9 +55,12 @@ import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
@@ -69,11 +72,14 @@ import org.apache.hadoop.hbase.regionserver.Region;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.regionserver.ScanType;
 import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.io.WritableUtils;
 import org.apache.phoenix.cache.ServerCacheClient;
+import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.exception.DataExceedsCapacityException;
 import org.apache.phoenix.execute.TupleProjector;
@@ -89,11 +95,13 @@ import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.join.HashJoinInfo;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnFamilyNotFoundException;
 import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PIndexState;
 import org.apache.phoenix.schema.PRow;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
@@ -899,6 +907,58 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
         });
     }
 
+    @Override
+    public void postCompact(final ObserverContext<RegionCoprocessorEnvironment> e, final Store store,
+            final StoreFile resultFile, CompactionRequest request) throws IOException {
+        // If we're compacting all files, then delete markers are removed
+        // and we must permanently disable an index that needs to be
+        // partially rebuilt, because we're potentially losing the information
+        // we need to successfully rebuild it.
+        if (request.isAllFiles() || request.isMajor()) {
+            // Compaction and split upcalls run with the effective user context of the requesting user.
+            // This will lead to failure of cross cluster RPC if the effective user is not
+            // the login user. Switch to the login user context to 

[1/3] phoenix git commit: PHOENIX-4173 Ensure that the rebuild fails if an index transitions back to disabled while rebuilding

2017-09-07 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 1fb01af6b -> 5cf07c4ce


PHOENIX-4173 Ensure that the rebuild fails if an index transitions back to disabled while rebuilding


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3c5e48d9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3c5e48d9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3c5e48d9

Branch: refs/heads/4.x-HBase-1.2
Commit: 3c5e48d9246f44cc39181b9c1cb9b51fb60bdd32
Parents: 1fb01af
Author: James Taylor 
Authored: Wed Sep 6 12:46:34 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 11:34:13 2017 -0700

--
 .../end2end/index/PartialIndexRebuilderIT.java  | 151 ++-
 1 file changed, 143 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3c5e48d9/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
index cacf0fa..067f50f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
@@ -21,6 +21,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
@@ -30,7 +31,7 @@ import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
@@ -38,10 +39,13 @@ import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.BaseUniqueNamesOwnClusterIT;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
 import org.apache.phoenix.schema.PMetaData;
 import org.apache.phoenix.schema.PTable;
@@ -634,6 +638,94 @@ public class PartialIndexRebuilderIT extends BaseUniqueNamesOwnClusterIT {
         }
     }
 
+    private final static CountDownLatch WAIT_FOR_REBUILD_TO_START = new CountDownLatch(1);
+    private final static CountDownLatch WAIT_FOR_INDEX_WRITE = new CountDownLatch(1);
+
+
+    @Test
+    public void testDisableIndexDuringRebuild() throws Throwable {
+        String schemaName = generateUniqueName();
+        String tableName = generateUniqueName();
+        String indexName = generateUniqueName();
+        final String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        final String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        PTableKey key = new PTableKey(null,fullTableName);
+        final MyClock clock = new MyClock(1000);
+        EnvironmentEdgeManager.injectEdge(clock);
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            PMetaData metaCache = conn.unwrap(PhoenixConnection.class).getMetaDataCache();
+            conn.createStatement().execute("CREATE TABLE " + fullTableName + "(k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR, v3 VARCHAR) COLUMN_ENCODED_BYTES = 0, STORE_NULLS=true");
+            clock.time += 100;
+            conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + fullTableName + " (v1, v2) INCLUDE (v3)");
+            clock.time += 100;
+            conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES('a','a','0','x')");
+            conn.commit();
+            clock.time += 100;
+            try (HTableInterface metaTable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES)) {
+                // By using an INDEX_DISABLE_TIMESTAMP of 0, we prevent the partial index rebuilder from triggering
+

phoenix git commit: PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum 2)

2017-09-07 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 814276d4b -> 64b808971


PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum 2)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/64b80897
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/64b80897
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/64b80897

Branch: refs/heads/master
Commit: 64b808971698880980d06f17b0924e6e22d95e12
Parents: 814276d
Author: James Taylor 
Authored: Thu Sep 7 11:26:47 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 11:26:47 2017 -0700

--
 .../UngroupedAggregateRegionObserver.java   | 62 ++-
 .../org/apache/phoenix/hbase/index/Indexer.java | 42 +-
 .../stats/DefaultStatisticsCollector.java   | 83 ++--
 .../schema/stats/NoOpStatisticsCollector.java   |  2 +-
 .../schema/stats/StatisticsCollector.java   |  2 +-
 5 files changed, 111 insertions(+), 80 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/64b80897/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 31c83e4..a61f502 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -55,9 +55,12 @@ import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
@@ -69,11 +72,14 @@ import org.apache.hadoop.hbase.regionserver.Region;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.regionserver.ScanType;
 import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.io.WritableUtils;
 import org.apache.phoenix.cache.ServerCacheClient;
+import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.exception.DataExceedsCapacityException;
 import org.apache.phoenix.execute.TupleProjector;
@@ -89,11 +95,13 @@ import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.join.HashJoinInfo;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnFamilyNotFoundException;
 import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PIndexState;
 import org.apache.phoenix.schema.PRow;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
@@ -899,6 +907,58 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
         });
     }
 
+    @Override
+    public void postCompact(final ObserverContext<RegionCoprocessorEnvironment> e, final Store store,
+            final StoreFile resultFile, CompactionRequest request) throws IOException {
+        // If we're compacting all files, then delete markers are removed
+        // and we must permanently disable an index that needs to be
+        // partially rebuilt, because we're potentially losing the information
+        // we need to successfully rebuild it.
+        if (request.isAllFiles() || request.isMajor()) {
+            // Compaction and split upcalls run with the effective user context of the requesting user.
+            // This will lead to failure of cross cluster RPC if the effective user 

[1/3] phoenix git commit: PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible

2017-09-07 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 a479c6dd4 -> 98b9e3f4b


PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/98b9e3f4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/98b9e3f4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/98b9e3f4

Branch: refs/heads/4.x-HBase-0.98
Commit: 98b9e3f4bf25b9411183305698741fbe30c27856
Parents: 1f24103
Author: James Taylor 
Authored: Wed Sep 6 18:05:42 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 10:59:15 2017 -0700

--
 .../apache/phoenix/end2end/CreateSchemaIT.java  | 26 +++
 .../phoenix/end2end/CustomEntityDataIT.java | 75 
 .../apache/phoenix/end2end/UpsertSelectIT.java  | 46 ++--
 3 files changed, 93 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/98b9e3f4/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
index 09cd810..fe09dcd 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
@@ -30,41 +30,31 @@ import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
-import org.apache.phoenix.schema.NewerSchemaAlreadyExistsException;
 import org.apache.phoenix.schema.SchemaAlreadyExistsException;
-import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
-public class CreateSchemaIT extends BaseClientManagedTimeIT {
+public class CreateSchemaIT extends ParallelStatsDisabledIT {
 
     @Test
     public void testCreateSchema() throws Exception {
-        long ts = nextTimestamp();
-        Properties props = new Properties();
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts));
+        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
         props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
-        String ddl = "CREATE SCHEMA TEST_SCHEMA";
+        String schemaName = generateUniqueName();
+        String ddl = "CREATE SCHEMA " + schemaName;
         try (Connection conn = DriverManager.getConnection(getUrl(), props);
                 HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
             conn.createStatement().execute(ddl);
-            assertNotNull(admin.getNamespaceDescriptor("TEST_SCHEMA"));
+            assertNotNull(admin.getNamespaceDescriptor(schemaName));
         }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 10));
-        try (Connection conn = DriverManager.getConnection(getUrl(), props);) {
-            conn.createStatement().execute(ddl);
-            fail();
-        } catch (SchemaAlreadyExistsException e) {
-            // expected
-        }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts - 20));
         try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
             conn.createStatement().execute(ddl);
             fail();
-        } catch (NewerSchemaAlreadyExistsException e) {
+        } catch (SchemaAlreadyExistsException e) {
             // expected
         }
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 50));
         Connection conn = DriverManager.getConnection(getUrl(), props);
         try {
             conn.createStatement().execute("CREATE SCHEMA " + SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE);
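The conversion above captures the general pattern of PHOENIX-4175: instead of pinning connections to timestamps via CURRENT_SCN, each test creates uniquely named objects and reads its own writes on a plain connection. A minimal sketch of that pattern, assuming a Phoenix test module (the table DDL and assertions here are illustrative, not taken from the commit):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

import org.junit.Test;

public class UniqueNamePatternIT extends ParallelStatsDisabledIT {

    @Test
    public void testReadsOwnWrites() throws Exception {
        // Unique per test run, so no CURRENT_SCN isolation is needed.
        String tableName = generateUniqueName();
        try (Connection conn = DriverManager.getConnection(getUrl())) {
            conn.createStatement().execute(
                    "CREATE TABLE " + tableName + " (k VARCHAR PRIMARY KEY, v VARCHAR)");
            conn.createStatement().execute(
                    "UPSERT INTO " + tableName + " VALUES ('a', 'b')");
            conn.commit();
            ResultSet rs = conn.createStatement()
                    .executeQuery("SELECT v FROM " + tableName);
            assertTrue(rs.next());
            assertEquals("b", rs.getString(1));
        }
    }
}

Because every test owns its tables, these classes can share one mini-cluster and run in parallel, which is the point of the ParallelStatsDisabledIT base class.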

http://git-wip-us.apache.org/repos/asf/phoenix/blob/98b9e3f4/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
index ad0f308..4af2c5c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
@@ -17,7 +17,6 @@
  */
 package org.apache.phoenix.end2end;
 
-import static 

[3/3] phoenix git commit: PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction (addendum 2)

2017-09-07 Thread jamestaylor
PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction 
(addendum 2)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/fa29f7ff
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/fa29f7ff
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/fa29f7ff

Branch: refs/heads/4.x-HBase-0.98
Commit: fa29f7ff1585ec3d8121aca50b0795c3b29a4cfb
Parents: a479c6d
Author: James Taylor 
Authored: Thu Sep 7 09:43:30 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 10:59:15 2017 -0700

--
 .../UngroupedAggregateRegionObserver.java   | 70 -
 .../org/apache/phoenix/hbase/index/Indexer.java | 50 +++-
 .../stats/DefaultStatisticsCollector.java   | 83 ++--
 .../schema/stats/NoOpStatisticsCollector.java   |  2 +-
 .../schema/stats/StatisticsCollector.java   |  2 +-
 5 files changed, 123 insertions(+), 84 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/fa29f7ff/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index eef023e..b4d7e7f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -55,25 +55,31 @@ import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
 import org.apache.hadoop.hbase.ipc.controller.InterRegionServerIndexRpcControllerFactory;
+import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.InternalScanner;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.regionserver.ScanType;
 import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.io.WritableUtils;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.phoenix.cache.ServerCacheClient;
+import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.exception.DataExceedsCapacityException;
 import org.apache.phoenix.execute.TupleProjector;
@@ -89,11 +95,13 @@ import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.index.PhoenixIndexCodec;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.join.HashJoinInfo;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnFamilyNotFoundException;
 import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PIndexState;
 import org.apache.phoenix.schema.PRow;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableImpl;
@@ -903,6 +911,64 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 }
 }
 
+    @Override
+    public void postCompact(final ObserverContext<RegionCoprocessorEnvironment> e, final Store store,
+            final StoreFile resultFile, CompactionRequest request) throws IOException {
+        // If we're compacting all files, then delete markers are removed
+        // and we must permanently disable an index that needs to be
+        // partially rebuilt, because we're potentially losing the information
+        // we need to successfully rebuild it.
+
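This copy of the hunk is also cut off by the archive. The imports it adds (Get, Result, HTableInterface, PhoenixDatabaseMetaData, PIndexState, MutationCode) suggest the hook consults index metadata stored in SYSTEM.CATALOG. A hedged sketch of such a server-side read, in the spirit of those imports; the row-key parameter and overall flow are assumptions, not the committed code:

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;

// Hypothetical helper, not Phoenix code: reads the INDEX_DISABLE_TIMESTAMP
// cell from an index's SYSTEM.CATALOG header row, returning 0 when unset.
final class IndexDisableTimestampReader {
    static long read(RegionCoprocessorEnvironment env, byte[] indexHeaderRowKey)
            throws IOException {
        HTableInterface catalog = env.getTable(
                TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES));
        try {
            Get get = new Get(indexHeaderRowKey);
            get.addColumn(PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES,
                    PhoenixDatabaseMetaData.INDEX_DISABLE_TIMESTAMP_BYTES);
            Result result = catalog.get(get);
            byte[] value = result.getValue(PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES,
                    PhoenixDatabaseMetaData.INDEX_DISABLE_TIMESTAMP_BYTES);
            return value == null ? 0L : Bytes.toLong(value);
        } finally {
            catalog.close();
        }
    }
}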

[2/3] phoenix git commit: PHOENIX-4173 Ensure that the rebuild fails if an index transitions back to disabled while rebuilding

2017-09-07 Thread jamestaylor
PHOENIX-4173 Ensure that the rebuild fails if an index transitions back to
disabled while rebuilding


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1f24103a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1f24103a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1f24103a

Branch: refs/heads/4.x-HBase-0.98
Commit: 1f24103a716379622d360ebde1904bb46e927a37
Parents: fa29f7f
Author: James Taylor 
Authored: Wed Sep 6 12:46:34 2017 -0700
Committer: James Taylor 
Committed: Thu Sep 7 10:59:15 2017 -0700

--
 .../end2end/index/PartialIndexRebuilderIT.java  | 151 ++-
 1 file changed, 143 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1f24103a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
index cacf0fa..067f50f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
@@ -21,6 +21,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
@@ -30,7 +31,7 @@ import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
@@ -38,10 +39,13 @@ import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.BaseUniqueNamesOwnClusterIT;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.query.ConnectionQueryServices;
 import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PIndexState;
 import org.apache.phoenix.schema.PMetaData;
 import org.apache.phoenix.schema.PTable;
@@ -634,6 +638,94 @@ public class PartialIndexRebuilderIT extends BaseUniqueNamesOwnClusterIT {
 }
 }
 
+    private final static CountDownLatch WAIT_FOR_REBUILD_TO_START = new CountDownLatch(1);
+    private final static CountDownLatch WAIT_FOR_INDEX_WRITE = new CountDownLatch(1);
+
+
+    @Test
+    public void testDisableIndexDuringRebuild() throws Throwable {
+        String schemaName = generateUniqueName();
+        String tableName = generateUniqueName();
+        String indexName = generateUniqueName();
+        final String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        final String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        PTableKey key = new PTableKey(null,fullTableName);
+        final MyClock clock = new MyClock(1000);
+        EnvironmentEdgeManager.injectEdge(clock);
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            PMetaData metaCache = conn.unwrap(PhoenixConnection.class).getMetaDataCache();
+            conn.createStatement().execute("CREATE TABLE " + fullTableName + "(k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR, v3 VARCHAR) COLUMN_ENCODED_BYTES = 0, STORE_NULLS=true");
+            clock.time += 100;
+            conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + fullTableName + " (v1, v2) INCLUDE (v3)");
+            clock.time += 100;
+            conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES('a','a','0','x')");
+            conn.commit();
+            clock.time += 100;
+            try (HTableInterface metaTable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES)) {
+                // By using an INDEX_DISABLE_TIMESTAMP of 0, we prevent the partial index rebuilder from triggering
+                IndexUtil.updateIndexState(fullIndexName, 0L, metaTable,
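The archive cuts the hunk off here. The test drives time deterministically through an injected clock; the MyClock class itself falls outside the visible portion. A minimal sketch of such a clock, assuming HBase 0.98's EnvironmentEdge contract (whose method is currentTimeMillis(); HBase 1.0+ renamed it to currentTime(), and Phoenix also ships its own EnvironmentEdgeManager in org.apache.phoenix.util, so the committed class may differ in detail):

import org.apache.hadoop.hbase.util.EnvironmentEdge;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

// Manually advanced clock: once injected, every timestamp the code under test
// reads through EnvironmentEdgeManager comes from the 'time' field.
class MyClock implements EnvironmentEdge {
    public volatile long time;

    MyClock(long time) {
        this.time = time;
    }

    @Override
    public long currentTimeMillis() {
        return time;
    }
}

// Usage, as in the test body above:
//   MyClock clock = new MyClock(1000);
//   EnvironmentEdgeManager.injectEdge(clock);
//   clock.time += 100; // advance "now" by 100 ms without sleeping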

phoenix git commit: PHOENIX-4177 Convert TopNIT to extend ParallelStatsDisabledIT

2017-09-07 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 b8068d69f -> a479c6dd4


PHOENIX-4177 Convert TopNIT to extend ParallelStatsDisabledIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a479c6dd
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a479c6dd
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a479c6dd

Branch: refs/heads/4.x-HBase-0.98
Commit: a479c6dd4348b634365ca635460b27f99beb536e
Parents: b8068d6
Author: Samarth Jain 
Authored: Thu Sep 7 10:19:43 2017 -0700
Committer: Samarth Jain 
Committed: Thu Sep 7 10:19:43 2017 -0700

--
 .../java/org/apache/phoenix/end2end/TopNIT.java | 64 +++-
 1 file changed, 21 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a479c6dd/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java
index 39e8cb6..8c213d2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java
@@ -26,8 +26,6 @@ import static org.apache.phoenix.util.TestUtil.ROW6;
 import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
-import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.apache.phoenix.util.TestUtil.ATABLE_NAME;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -36,25 +34,18 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
-import java.util.Properties;
 
-import org.apache.phoenix.util.PhoenixRuntime;
-import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
 
-public class TopNIT extends BaseClientManagedTimeIT {
+public class TopNIT extends ParallelStatsDisabledIT {
 
 @Test
 public void testMultiOrderByExpr() throws Exception {
-        long ts = nextTimestamp();
         String tenantId = getOrganizationId();
-
-        initATableValues(ATABLE_NAME, tenantId, getDefaultSplits(tenantId), null, ts, getUrl(), null);
-        String query = "SELECT entity_id FROM aTable ORDER BY b_string, entity_id LIMIT 5";
-        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
-        Connection conn = DriverManager.getConnection(getUrl(), props);
+        String tableName = initATableValues(null, tenantId, getDefaultSplits(tenantId), null, null, getUrl(), null);
+        String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string, entity_id LIMIT 5";
+        Connection conn = DriverManager.getConnection(getUrl());
 try {
 PreparedStatement statement = conn.prepareStatement(query);
 ResultSet rs = statement.executeQuery();
@@ -78,13 +69,10 @@ public class TopNIT extends BaseClientManagedTimeIT {
 
 @Test
 public void testDescMultiOrderByExpr() throws Exception {
-        long ts = nextTimestamp();
         String tenantId = getOrganizationId();
-        initATableValues(ATABLE_NAME, tenantId, getDefaultSplits(tenantId), null, ts, getUrl(), null);
-        String query = "SELECT entity_id FROM aTable ORDER BY b_string || entity_id desc LIMIT 5";
-        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
-        Connection conn = DriverManager.getConnection(getUrl(), props);
+        String tableName = initATableValues(null, tenantId, getDefaultSplits(tenantId), null, null, getUrl(), null);
+        String query = "SELECT entity_id FROM  " + tableName + "  ORDER BY b_string || entity_id desc LIMIT 5";
+        Connection conn = DriverManager.getConnection(getUrl());
 try {
 PreparedStatement statement = conn.prepareStatement(query);
 ResultSet rs = statement.executeQuery();
@@ -117,42 +105,32 @@ public class TopNIT extends BaseClientManagedTimeIT {
 }
 
 private void testTopNDelete(boolean autoCommit) throws Exception {
-        long ts = nextTimestamp();
         String tenantId = getOrganizationId();
-        initATableValues(ATABLE_NAME, tenantId, getDefaultSplits(tenantId), null, ts, getUrl(), null);
-        String query = "DELETE FROM aTable ORDER BY b_string, entity_id LIMIT 5";
-   

phoenix git commit: PHOENIX-4177 Convert TopNIT to extend ParallelStatsDisabledIT

2017-09-07 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master b46cbd375 -> 814276d4b


PHOENIX-4177 Convert TopNIT to extend ParallelStatsDisabledIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/814276d4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/814276d4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/814276d4

Branch: refs/heads/master
Commit: 814276d4b4b08be0681f1c402cfb3cc35f01fa0a
Parents: b46cbd3
Author: Samarth Jain 
Authored: Thu Sep 7 10:17:55 2017 -0700
Committer: Samarth Jain 
Committed: Thu Sep 7 10:18:40 2017 -0700

--
 .../java/org/apache/phoenix/end2end/TopNIT.java | 64 +++-
 1 file changed, 21 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/814276d4/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java
index 39e8cb6..8c213d2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TopNIT.java
@@ -26,8 +26,6 @@ import static org.apache.phoenix.util.TestUtil.ROW6;
 import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
-import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.apache.phoenix.util.TestUtil.ATABLE_NAME;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -36,25 +34,18 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
-import java.util.Properties;
 
-import org.apache.phoenix.util.PhoenixRuntime;
-import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
 
-public class TopNIT extends BaseClientManagedTimeIT {
+public class TopNIT extends ParallelStatsDisabledIT {
 
 @Test
 public void testMultiOrderByExpr() throws Exception {
-        long ts = nextTimestamp();
         String tenantId = getOrganizationId();
-
-        initATableValues(ATABLE_NAME, tenantId, getDefaultSplits(tenantId), null, ts, getUrl(), null);
-        String query = "SELECT entity_id FROM aTable ORDER BY b_string, entity_id LIMIT 5";
-        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
-        Connection conn = DriverManager.getConnection(getUrl(), props);
+        String tableName = initATableValues(null, tenantId, getDefaultSplits(tenantId), null, null, getUrl(), null);
+        String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string, entity_id LIMIT 5";
+        Connection conn = DriverManager.getConnection(getUrl());
 try {
 PreparedStatement statement = conn.prepareStatement(query);
 ResultSet rs = statement.executeQuery();
@@ -78,13 +69,10 @@ public class TopNIT extends BaseClientManagedTimeIT {
 
 @Test
 public void testDescMultiOrderByExpr() throws Exception {
-        long ts = nextTimestamp();
         String tenantId = getOrganizationId();
-        initATableValues(ATABLE_NAME, tenantId, getDefaultSplits(tenantId), null, ts, getUrl(), null);
-        String query = "SELECT entity_id FROM aTable ORDER BY b_string || entity_id desc LIMIT 5";
-        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 2)); // Execute at timestamp 2
-        Connection conn = DriverManager.getConnection(getUrl(), props);
+        String tableName = initATableValues(null, tenantId, getDefaultSplits(tenantId), null, null, getUrl(), null);
+        String query = "SELECT entity_id FROM  " + tableName + "  ORDER BY b_string || entity_id desc LIMIT 5";
+        Connection conn = DriverManager.getConnection(getUrl());
 try {
 PreparedStatement statement = conn.prepareStatement(query);
 ResultSet rs = statement.executeQuery();
@@ -117,42 +105,32 @@ public class TopNIT extends BaseClientManagedTimeIT {
 }
 
 private void testTopNDelete(boolean autoCommit) throws Exception {
-        long ts = nextTimestamp();
         String tenantId = getOrganizationId();
-        initATableValues(ATABLE_NAME, tenantId, getDefaultSplits(tenantId), null, ts, getUrl(), null);
-        String query = "DELETE FROM aTable ORDER BY b_string, entity_id LIMIT 5";
-Properties 
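Both copies of this diff are truncated at the TopN delete test, but the statement it exercises is visible above: Phoenix supports DELETE with ORDER BY ... LIMIT, removing only the top-N rows by sort order. A standalone JDBC sketch of that capability; the connection URL and table name are placeholders, not from the commit:

import java.sql.Connection;
import java.sql.DriverManager;

public class TopNDeleteExample {
    public static void main(String[] args) throws Exception {
        // "jdbc:phoenix:localhost" assumes a local ZooKeeper quorum; adjust as needed.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(true);
            // Deletes only the five rows that sort first on (b_string, entity_id),
            // mirroring the DELETE ... ORDER BY ... LIMIT 5 in the test above.
            conn.createStatement().execute(
                    "DELETE FROM MY_TABLE ORDER BY b_string, entity_id LIMIT 5");
        }
    }
}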

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #399

2017-09-07 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on qnode3 (ubuntu) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins418389972363595986.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 128341
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
core id : 6
core id : 7
physical id : 0
MemTotal:   32865152 kB
MemFree:11765656 kB
Filesystem  Size  Used Avail Use% Mounted on
none 16G 0   16G   0% /dev
tmpfs   3.2G  367M  2.8G  12% /run
/dev/nbd046G   37G  6.7G  85% /
tmpfs16G 0   16G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs16G 0   16G   0% /sys/fs/cgroup
/dev/sda1   235G  145G   79G  65% /home
tmpfs   3.2G 0  3.2G   0% /run/user/9997
tmpfs   3.2G 0  3.2G   0% /run/user/999
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 
4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.

main:
 [exec] 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common
 [exec] 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common

main:
[mkdir] Created dir: 

 [exec] tar: hadoop-snappy-nativelibs.tar: Cannot open: No such file or 
directory
 [exec] tar: Error is not recoverable: exiting now
 [exec] Result: 2

main:
[mkdir] Created dir: 

 [copy] Copying 20 files to 

[mkdir] Created dir: 

[mkdir] Created dir: 


main:
[mkdir] Created dir: 

 [copy] Copying 17 files to 

[mkdir] Created dir: 


main:
[mkdir] Created dir: 

 [copy] Copying 1 file to 

[mkdir] Created dir: 


HBase pom.xml:

Got HBase version as 0.98.25-SNAPSHOT
Cloning into 'phoenix'...
Switched to a new branch '4.x-HBase-0.98'
Branch 4.x-HBase-0.98 set up to track remote branch 4.x-HBase-0.98 from origin.
ANTLR Parser Generator  Version 3.5.2
Output file 

 does not exist: must build 

PhoenixSQL.g


===