Jenkins build is back to normal : Phoenix-4.x-HBase-1.2 #367

2018-05-18 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1899

2018-05-18 Thread Apache Jenkins Server
See 


Changes:

[vincentpoon] PHOENIX-4704 Presplit index tables when building asynchronously

--
[...truncated 121.11 KB...]
[ERROR] Tests run: 96, Failures: 16, Errors: 0, Skipped: 0, Time elapsed: 974.111 s <<< FAILURE! - in org.apache.phoenix.end2end.IndexToolIT
[ERROR] testSaltedVariableLengthPK[transactional = false , mutable = false , localIndex = true, directApi = false, useSnapshot = false](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 9.957 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = false , mutable = false , localIndex = true, directApi = false, useSnapshot = true](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 12.59 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = false , mutable = false , localIndex = true, directApi = true, useSnapshot = false](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 8.938 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = false , mutable = false , localIndex = true, directApi = true, useSnapshot = true](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 16.613 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = false , mutable = true , localIndex = true, directApi = false, useSnapshot = false](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 10.341 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = false , mutable = true , localIndex = true, directApi = false, useSnapshot = true](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 14.25 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = false , mutable = true , localIndex = true, directApi = true, useSnapshot = false](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 8.613 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = false , mutable = true , localIndex = true, directApi = true, useSnapshot = true](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 15.325 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = true , mutable = false , localIndex = true, directApi = false, useSnapshot = false](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 9.44 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = true , mutable = false , localIndex = true, directApi = false, useSnapshot = true](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 14.714 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = true , mutable = false , localIndex = true, directApi = true, useSnapshot = false](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 9.304 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = true , mutable = false , localIndex = true, directApi = true, useSnapshot = true](org.apache.phoenix.end2end.IndexToolIT)  Time elapsed: 14.248 s  <<< FAILURE!
java.lang.AssertionError: expected:<11> but was:<0>
    at org.apache.phoenix.end2end.IndexToolIT.testSaltedVariableLengthPK(IndexToolIT.java:259)

[ERROR] testSaltedVariableLengthPK[transactional = true , mutable = true , localIndex = true, directApi = false, useSnapshot = false](org.apache.phoenix.end2end.IndexToolIT)  Time

Jenkins build is back to normal : Phoenix | Master #2030

2018-05-18 Thread Apache Jenkins Server
See 




phoenix git commit: PHOENIX-4704 Presplit index tables when building asynchronously

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/5.x-HBase-2.0 eb34a8af7 -> 87ab023de


PHOENIX-4704 Presplit index tables when building asynchronously


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/87ab023d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/87ab023d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/87ab023d

Branch: refs/heads/5.x-HBase-2.0
Commit: 87ab023de53d18d6a24f28be3fc0b0e42c722ae5
Parents: eb34a8a
Author: Vincent Poon 
Authored: Fri May 18 20:42:34 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 20:42:34 2018 -0700

--
 .../org/apache/phoenix/end2end/IndexToolIT.java | 105 +-
 .../phoenix/mapreduce/index/IndexTool.java  | 144 ++-
 2 files changed, 243 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/87ab023d/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index afb6d72..2cec57b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -21,12 +21,15 @@ import static 
org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -34,8 +37,15 @@ import java.util.Properties;
 import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.*;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -54,7 +64,7 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends BaseTest {
+public class IndexToolIT extends ParallelStatsEnabledIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -85,7 +95,7 @@ public class IndexToolIT extends BaseTest {
 }
 
 @BeforeClass
-public static void doSetup() throws Exception {
+public static void setup() throws Exception {
Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
@@ -249,6 +259,86 @@ public class IndexToolIT extends BaseTest {
 }
 }
 
+/**
+ * Test presplitting an index table
+ */
+@Test
+public void testSplitIndex() throws Exception {
+if (localIndex) return; // can't split local indexes
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, 
dataTableName);
+final TableName dataTN = TableName.valueOf(dataTableFullName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, 
indexTableName);
+TableName indexTN = TableName.valueOf(indexTableFullName);
+try (Connection conn =
+DriverManager.getConnection(getUrl(), 
PropertiesUtil.deepCopy(TEST_PROPERTIES));
+Admin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+String dataDDL =
+"CREATE TABLE " + dataTableFullName + "(\n"
++ "ID VARCHAR NOT NULL PRIMARY KEY,\n"
++ "\"info\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"test\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"info\".CAP_DATE VARCHAR NULL,\n" + 
"\"info\".ORG_ID BIGINT NULL,\n"
+ 
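
For context, this is the kind of build the new presplit logic targets: IndexToolIT drives IndexTool's CLI from the test JVM. A minimal sketch of that invocation follows; the option names (-s, -dt, -it, -direct, -op) mirror IndexTool's documented CLI, but the exact argument list used by the committed test is an assumption here.

    // Sketch: run the MapReduce index build that PHOENIX-4704 teaches to presplit.
    // Flag names follow IndexTool's CLI; the precise set the test uses is assumed.
    List<String> args = new ArrayList<>();
    args.add("-s");  args.add(schemaName);      // schema of the data table
    args.add("-dt"); args.add(dataTableName);   // data table carrying the ASYNC index
    args.add("-it"); args.add(indexTableName);  // index table to build (and presplit)
    args.add("-direct");                        // write index updates directly
    args.add("-op"); args.add("/tmp/" + UUID.randomUUID()); // output path for HFiles
    IndexTool indexTool = new IndexTool();
    indexTool.setConf(new Configuration(getUtility().getConfiguration()));
    assertEquals(0, indexTool.run(args.toArray(new String[0])));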

phoenix git commit: PHOENIX-4692 ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-05-18 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/master 58415e2f3 -> 28b9de0da


PHOENIX-4692 ArrayIndexOutOfBoundsException in ScanRanges.intersectScan


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/28b9de0d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/28b9de0d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/28b9de0d

Branch: refs/heads/master
Commit: 28b9de0da01b61e61c749ed433ddb995596b3e45
Parents: 58415e2
Author: maryannxue 
Authored: Fri May 18 19:46:29 2018 -0700
Committer: maryannxue 
Committed: Fri May 18 19:46:29 2018 -0700

--
 .../apache/phoenix/end2end/SkipScanQueryIT.java | 21 
 .../apache/phoenix/compile/WhereCompiler.java   | 12 +--
 .../apache/phoenix/execute/BaseQueryPlan.java   |  2 +-
 .../apache/phoenix/execute/HashJoinPlan.java|  5 -
 4 files changed, 32 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/28b9de0d/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
index d98bbe2..fb0b568 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
@@ -563,4 +563,25 @@ public class SkipScanQueryIT extends 
ParallelStatsDisabledIT {
 assertFalse(rs.next());
 }
 }
+
+@Test
+public void testSkipScanJoinOptimization() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String idxName = "IDX_" + tableName;
+conn.setAutoCommit(true);
+conn.createStatement().execute(
+"create table " + tableName + " (PK1 INTEGER NOT NULL, PK2 
INTEGER NOT NULL, " +
+" ID1 INTEGER, ID2 INTEGER CONSTRAINT PK PRIMARY 
KEY(PK1 , PK2))SALT_BUCKETS = 4");
+conn.createStatement().execute("upsert into " + tableName + " 
values (1,1,1,1)");
+conn.createStatement().execute("upsert into " + tableName + " 
values (2,2,2,2)");
+conn.createStatement().execute("upsert into " + tableName + " 
values (2,3,1,2)");
+conn.createStatement().execute("create view " + viewName + " as 
select * from " +
+tableName + " where PK1 in (1,2)");
+conn.createStatement().execute("create index " + idxName + " on " 
+ viewName + " (ID1)");
+ResultSet rs = conn.createStatement().executeQuery("select /*+ 
INDEX(" + viewName + " " + idxName + ") */ * from " + viewName + " where ID1 = 
1 ");
+assertTrue(rs.next());
+}
+}
 }
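
The query in the test above is exactly the shape that used to trip the ArrayIndexOutOfBoundsException in ScanRanges.intersectScan: a salted table, a view restricting the PK prefix with IN, and a hinted view index. A hedged follow-up one could add is to assert that the hinted plan really routes through the view index; view index data lives in shared _IDX_ tables (as the _IDX_T000447 failure elsewhere in this digest also shows), though the exact plan wording is an assumption.

    // Hedged sketch, not part of the commit: check the EXPLAIN output of the
    // hinted query mentions the shared _IDX_ table backing the view index.
    ResultSet explainRs = conn.createStatement().executeQuery(
            "EXPLAIN select /*+ INDEX(" + viewName + " " + idxName + ") */ * from "
                    + viewName + " where ID1 = 1");
    StringBuilder plan = new StringBuilder();
    while (explainRs.next()) {
        plan.append(explainRs.getString(1)).append('\n');
    }
    assertTrue(plan.toString().contains("_IDX_"));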

http://git-wip-us.apache.org/repos/asf/phoenix/blob/28b9de0d/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
index 2cf5857..832b1f0 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
@@ -105,9 +105,9 @@ public class WhereCompiler {
  * @throws AmbiguousColumnException if an unaliased column name is ambiguous across multiple tables
  */
 public static Expression compile(StatementContext context, FilterableStatement statement, ParseNode viewWhere, Set<SubqueryParseNode> subqueryNodes) throws SQLException {
-    return compile(context, statement, viewWhere, Collections.<Expression>emptyList(), false, subqueryNodes);
+    return compile(context, statement, viewWhere, Collections.<Expression>emptyList(), subqueryNodes);
 }
-
+
 /**
  * Optimize scan ranges by applying dynamically generated filter expressions.
  * @param context the shared context during query compilation
@@ -118,7 +118,7 @@
  * @throws ColumnNotFoundException if column name could not be resolved
  * @throws AmbiguousColumnException if an unaliased column name is ambiguous across multiple tables
  */
-public static Expression compile(StatementContext context, FilterableStatement statement, ParseNode viewWhere, List<Expression> dynamicFilters, boolean hashJoinOptimization, Set<SubqueryParseNode> subqueryNodes) throws SQLException {
+public static Expression compile(StatementContext context, 

Apache-Phoenix | 4.x-HBase-0.98 | Build Successful

2018-05-18 Thread Apache Jenkins Server
4.x-HBase-0.98 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-0.98

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/lastCompletedBuild/testReport/

Changes
[jtaylor] PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when

[jtaylor] PHOENIX-4744 Reduce parallelism in integration test runs



Build times for last couple of runs. Latest build time is the rightmost. | Legend: blue = normal, red = test failure, gray = timeout


[1/2] phoenix git commit: PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.11 44dbb42b6 -> c2d6bc17a


PHOENIX-4724 Efficient Equi-Depth histogram for streaming data


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1b2144f5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1b2144f5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1b2144f5

Branch: refs/heads/4.x-cdh5.11
Commit: 1b2144f578d7d8f3f523129d16d3064ef7511f76
Parents: 44dbb42
Author: Vincent Poon 
Authored: Thu May 3 17:07:27 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 19:44:35 2018 -0700

--
 .../phoenix/util/EquiDepthStreamHistogram.java  | 453 +++
 .../util/EquiDepthStreamHistogramTest.java  | 303 +
 2 files changed, 756 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1b2144f5/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
new file mode 100644
index 000..7649933
--- /dev/null
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
@@ -0,0 +1,453 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.util;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+/**
+ * Equi-Depth histogram based on 
http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf,
+ * but without the sliding window - we assume a single window over the entire 
data set.
+ *
+ * Used to generate the bucket boundaries of a histogram where each bucket has 
the same # of items.
+ * This is useful, for example, for pre-splitting an index table, by feeding 
in data from the indexed column.
+ * Works on streaming data - the histogram is dynamically updated for each new 
value.
+ *
+ * Add values by calling addValue(), then at the end computeBuckets() can be 
called to get
+ * the buckets with their bounds.
+ *
+ * Average time complexity: O(log(B x p) + (B x p)/T) = nearly constant
+ * B = number of buckets, p = expansion factor constant, T = # of values
+ *
+ * Space complexity: different from paper since here we keep the blocked bars 
but don't have expiration,
+ *  comes out to basically O(log(T))
+ */
+public class EquiDepthStreamHistogram {
+private static final Log LOG = 
LogFactory.getLog(EquiDepthStreamHistogram.class);
+
+// used in maxSize calculation for each bar
+private static final double MAX_COEF = 1.7;
+// higher expansion factor = better accuracy and worse performance
+private static final short DEFAULT_EXPANSION_FACTOR = 7;
+private int numBuckets;
+private int maxBars;
+@VisibleForTesting
+long totalCount; // number of values - i.e. count across all bars
+@VisibleForTesting
List<Bar> bars;
+
+/**
+ * Create a new histogram
+ * @param numBuckets number of buckets, which can be used to get the splits
+ */
+public EquiDepthStreamHistogram(int numBuckets) {
+this(numBuckets, DEFAULT_EXPANSION_FACTOR);
+}
+
+/**
+ * @param numBuckets number of buckets
+ * @param expansionFactor number of bars = expansionFactor * numBuckets
+ * The more bars, the better the accuracy, at the cost of worse performance
+ */
+public EquiDepthStreamHistogram(int numBuckets, int expansionFactor) {
+this.numBuckets = numBuckets;
+this.maxBars = numBuckets * expansionFactor;
+this.bars = new 
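
To make the API described in the javadoc above concrete, here is a minimal usage sketch. Only addValue() and computeBuckets() are named by the javadoc, so the byte[] argument type and the buckets' printable form are assumptions.

    // Minimal usage sketch (assumed signatures): stream 90 values into a
    // 3-bucket histogram, then read back the equi-depth bucket bounds.
    EquiDepthStreamHistogram histogram = new EquiDepthStreamHistogram(3);
    for (int i = 0; i < 90; i++) {
        histogram.addValue(Bytes.toBytes(String.format("row%03d", i)));
    }
    // Each bucket should now cover roughly totalCount / numBuckets = 30 values;
    // the bucket bounds can seed split points when presplitting an index table.
    for (Object bucket : histogram.computeBuckets()) {
        System.out.println(bucket);
    }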

[2/2] phoenix git commit: PHOENIX-4704 Presplit index tables when building asynchronously

2018-05-18 Thread vincentpoon
PHOENIX-4704 Presplit index tables when building asynchronously


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c2d6bc17
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c2d6bc17
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c2d6bc17

Branch: refs/heads/4.x-cdh5.11
Commit: c2d6bc17a1a351dfb5180e810de38917fab93b68
Parents: 1b2144f
Author: Vincent Poon 
Authored: Fri May 18 11:22:26 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 19:44:46 2018 -0700

--
 .../org/apache/phoenix/end2end/IndexToolIT.java | 106 +-
 .../phoenix/mapreduce/index/IndexTool.java  | 142 ++-
 2 files changed, 242 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c2d6bc17/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index afb6d72..a120aaa 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -21,12 +21,15 @@ import static 
org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -34,8 +37,16 @@ import java.util.Properties;
 import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -54,7 +65,7 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends BaseTest {
+public class IndexToolIT extends ParallelStatsEnabledIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -85,7 +96,7 @@ public class IndexToolIT extends BaseTest {
 }
 
 @BeforeClass
-public static void doSetup() throws Exception {
+public static void setup() throws Exception {
Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
@@ -249,6 +260,86 @@ public class IndexToolIT extends BaseTest {
 }
 }
 
+/**
+ * Test presplitting an index table
+ */
+@Test
+public void testSplitIndex() throws Exception {
+if (localIndex) return; // can't split local indexes
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, 
dataTableName);
+final TableName dataTN = TableName.valueOf(dataTableFullName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, 
indexTableName);
+TableName indexTN = TableName.valueOf(indexTableFullName);
+try (Connection conn =
+DriverManager.getConnection(getUrl(), 
PropertiesUtil.deepCopy(TEST_PROPERTIES));
+HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+String dataDDL =
+"CREATE TABLE " + dataTableFullName + "(\n"
++ "ID VARCHAR NOT NULL PRIMARY KEY,\n"
++ "\"info\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"test\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"info\".CAP_DATE VARCHAR NULL,\n" + 
"\"info\".ORG_ID BIGINT NULL,\n"
++ 

phoenix git commit: PHOENIX-4704 Presplit index tables when building asynchronously

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 b4ffb7dfd -> 2f35fe306


PHOENIX-4704 Presplit index tables when building asynchronously


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2f35fe30
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2f35fe30
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2f35fe30

Branch: refs/heads/4.x-HBase-0.98
Commit: 2f35fe3069bdb48a0603bda2dc59bea1f3145f0d
Parents: b4ffb7d
Author: Vincent Poon 
Authored: Fri May 18 19:21:56 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 19:21:56 2018 -0700

--
 .../org/apache/phoenix/end2end/IndexToolIT.java | 146 ++-
 .../phoenix/mapreduce/index/IndexTool.java  | 142 +-
 2 files changed, 282 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2f35fe30/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index 8002db0..a120aaa 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -21,12 +21,15 @@ import static 
org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -34,8 +37,16 @@ import java.util.Properties;
 import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -54,7 +65,7 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends BaseTest {
+public class IndexToolIT extends ParallelStatsEnabledIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -85,7 +96,7 @@ public class IndexToolIT extends BaseTest {
 }
 
 @BeforeClass
-public static void doSetup() throws Exception {
+public static void setup() throws Exception {
Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
@@ -209,6 +220,126 @@ public class IndexToolIT extends BaseTest {
 }
 }
 
+@Test
+public void testSaltedVariableLengthPK() throws Exception {
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, 
dataTableName);
+String indexTableName = generateUniqueName();
+try (Connection conn =
+DriverManager.getConnection(getUrl(), 
PropertiesUtil.deepCopy(TEST_PROPERTIES))) {
+String dataDDL =
+"CREATE TABLE " + dataTableFullName + "(\n"
++ "ID VARCHAR NOT NULL PRIMARY KEY,\n"
++ "\"info\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"info\".CAP_DATE VARCHAR NULL,\n" + 
"\"info\".ORG_ID BIGINT NULL,\n"
++ "\"info\".ORG_NAME VARCHAR(255) NULL\n" + ") 
SALT_BUCKETS=3";
+conn.createStatement().execute(dataDDL);
+
+String upsert =
+"UPSERT INTO " + dataTableFullName
++ "(ID,CAR_NUM,CAP_DATE,ORG_ID,ORG_NAME) 
VALUES('1','car1','2016-01-01 00:00:00',11,'orgname1')";
+conn.createStatement().execute(upsert);
+conn.commit();
+
+
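
The diff breaks off above. A hedged sketch of how such a test typically finishes (an assumption, not the committed code; runIndexTool is presumed to be the IT's existing helper):

    // Assumed continuation: create the index ASYNC, build it with IndexTool,
    // then read the row back through the index. The "expected:<11> but was:<0>"
    // failures earlier in this digest come from an ORG_ID assertion of this shape.
    conn.createStatement().execute("CREATE INDEX " + indexTableName + " ON "
            + dataTableFullName + " (CAR_NUM, CAP_DATE) ASYNC");
    runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName);
    ResultSet rs = conn.createStatement().executeQuery(
            "SELECT ORG_ID FROM " + dataTableFullName + " WHERE CAR_NUM = 'car1'");
    assertTrue(rs.next());
    assertEquals(11, rs.getLong(1)); // the ORG_ID upserted above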

Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #366

2018-05-18 Thread Apache Jenkins Server
See 


Changes:

[vincentpoon] PHOENIX-4704 Presplit index tables when building asynchronously

--
[...truncated 285.40 KB...]
[INFO] 
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ phoenix-pherf ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.2:attach-descriptor (attach-descriptor) @ 
phoenix-pherf ---
[INFO] 
[INFO] --- maven-shade-plugin:2.4.3:shade (default) @ phoenix-pherf ---
[INFO] Excluding org.apache.phoenix:phoenix-core:jar:4.14.0-HBase-1.2 from the 
shaded jar.
[INFO] Excluding org.apache.tephra:tephra-api:jar:0.13.0-incubating from the 
shaded jar.
[INFO] Excluding org.apache.tephra:tephra-core:jar:0.13.0-incubating from the 
shaded jar.
[INFO] Excluding 
org.apache.tephra:tephra-hbase-compat-1.1:jar:0.13.0-incubating from the shaded 
jar.
[INFO] Excluding org.antlr:antlr-runtime:jar:3.5.2 from the shaded jar.
[INFO] Excluding jline:jline:jar:2.11 from the shaded jar.
[INFO] Excluding sqlline:sqlline:jar:1.2.0 from the shaded jar.
[INFO] Excluding com.google.guava:guava:jar:13.0.1 from the shaded jar.
[INFO] Including joda-time:joda-time:jar:1.6 in the shaded jar.
[INFO] Excluding com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1 
from the shaded jar.
[INFO] Excluding com.github.stephenc.jcip:jcip-annotations:jar:1.0-1 from the 
shaded jar.
[INFO] Excluding org.codehaus.jackson:jackson-core-asl:jar:1.9.2 from the 
shaded jar.
[INFO] Excluding org.codehaus.jackson:jackson-mapper-asl:jar:1.9.2 from the 
shaded jar.
[INFO] Excluding com.google.protobuf:protobuf-java:jar:2.5.0 from the shaded 
jar.
[INFO] Excluding org.apache.httpcomponents:httpclient:jar:4.0.1 from the shaded 
jar.
[INFO] Excluding org.apache.httpcomponents:httpcore:jar:4.0.1 from the shaded 
jar.
[INFO] Excluding log4j:log4j:jar:1.2.17 from the shaded jar.
[INFO] Excluding org.slf4j:slf4j-api:jar:1.6.4 from the shaded jar.
[INFO] Excluding org.iq80.snappy:snappy:jar:0.3 from the shaded jar.
[INFO] Excluding org.apache.htrace:htrace-core:jar:3.1.0-incubating from the 
shaded jar.
[INFO] Excluding commons-codec:commons-codec:jar:1.7 from the shaded jar.
[INFO] Excluding commons-collections:commons-collections:jar:3.2.2 from the 
shaded jar.
[INFO] Including org.apache.commons:commons-csv:jar:1.0 in the shaded jar.
[INFO] Excluding com.google.code.findbugs:jsr305:jar:2.0.1 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-annotations:jar:1.2.5 from the shaded 
jar.
[INFO] Excluding org.apache.hbase:hbase-common:jar:1.2.5 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:jetty-util:jar:6.1.26 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-protocol:jar:1.2.5 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-client:jar:1.2.5 from the shaded jar.
[INFO] Excluding io.netty:netty-all:jar:4.0.23.Final from the shaded jar.
[INFO] Excluding org.apache.zookeeper:zookeeper:jar:3.4.6 from the shaded jar.
[INFO] Excluding org.jruby.jcodings:jcodings:jar:1.0.8 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-server:jar:1.2.5 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-procedure:jar:1.2.5 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-prefix-tree:jar:1.2.5 from the shaded 
jar.
[INFO] Excluding commons-httpclient:commons-httpclient:jar:3.1 from the shaded 
jar.
[INFO] Excluding com.sun.jersey:jersey-core:jar:1.9 from the shaded jar.
[INFO] Excluding com.sun.jersey:jersey-server:jar:1.9 from the shaded jar.
[INFO] Excluding asm:asm:jar:3.1 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:jetty:jar:6.1.26 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:jetty-sslengine:jar:6.1.26 from the shaded 
jar.
[INFO] Excluding org.mortbay.jetty:jsp-2.1:jar:6.1.14 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:jsp-api-2.1:jar:6.1.14 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:servlet-api-2.5:jar:6.1.14 from the shaded 
jar.
[INFO] Excluding tomcat:jasper-compiler:jar:5.5.23 from the shaded jar.
[INFO] Excluding tomcat:jasper-runtime:jar:5.5.23 from the shaded jar.
[INFO] Excluding commons-el:commons-el:jar:1.0 from the shaded jar.
[INFO] Excluding org.jamon:jamon-runtime:jar:2.4.1 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-hadoop-compat:jar:1.2.5 from the shaded 
jar.
[INFO] Excluding org.apache.hbase:hbase-hadoop2-compat:jar:1.2.5 from the 
shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-common:jar:2.7.1 from the shaded jar.
[INFO] Excluding xmlenc:xmlenc:jar:0.52 from the shaded jar.
[INFO] Excluding commons-net:commons-net:jar:3.1 from the shaded jar.
[INFO] Excluding javax.servlet:servlet-api:jar:2.5 from the shaded jar.
[INFO] Excluding javax.servlet.jsp:jsp-api:jar:2.1 from the shaded jar.
[INFO] Excluding 

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #138

2018-05-18 Thread Apache Jenkins Server
See 


Changes:

[vincentpoon] PHOENIX-4704 Presplit index tables when building asynchronously

--
[...truncated 148.71 KB...]
[INFO] Excluding commons-logging:commons-logging:jar:1.2 from the shaded jar.
[INFO] Excluding log4j:log4j:jar:1.2.17 from the shaded jar.
[INFO] Excluding org.slf4j:slf4j-api:jar:1.6.4 from the shaded jar.
[INFO] Excluding org.iq80.snappy:snappy:jar:0.3 from the shaded jar.
[INFO] Excluding org.apache.htrace:htrace-core:jar:3.1.0-incubating from the 
shaded jar.
[INFO] Excluding commons-cli:commons-cli:jar:1.2 from the shaded jar.
[INFO] Excluding commons-codec:commons-codec:jar:1.7 from the shaded jar.
[INFO] Excluding commons-collections:commons-collections:jar:3.2.2 from the 
shaded jar.
[INFO] Excluding org.apache.commons:commons-csv:jar:1.0 from the shaded jar.
[INFO] Excluding com.google.code.findbugs:jsr305:jar:2.0.1 from the shaded jar.
[INFO] Excluding org.slf4j:slf4j-log4j12:jar:1.7.7 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-auth:jar:2.7.1 from the shaded jar.
[INFO] Excluding 
org.apache.directory.server:apacheds-kerberos-codec:jar:2.0.0-M15 from the 
shaded jar.
[INFO] Excluding org.apache.curator:curator-framework:jar:2.7.1 from the shaded 
jar.
[INFO] Excluding org.apache.hadoop:hadoop-client:jar:2.7.1 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.7.1 
from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.7.1 
from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-yarn-client:jar:2.7.1 from the shaded 
jar.
[INFO] Excluding org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.7.1 
from the shaded jar.
[INFO] Excluding com.sun.jersey:jersey-client:jar:1.9 from the shaded jar.
[INFO] Excluding org.apache.commons:commons-math:jar:2.2 from the shaded jar.
[INFO] Excluding commons-lang:commons-lang:jar:2.6 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-annotations:jar:1.3.1 from the shaded 
jar.
[INFO] Excluding org.apache.hbase:hbase-common:jar:1.3.1 from the shaded jar.
[INFO] Excluding commons-io:commons-io:jar:2.4 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:jetty-util:jar:6.1.26 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-protocol:jar:1.3.1 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-client:jar:1.3.1 from the shaded jar.
[INFO] Excluding io.netty:netty-all:jar:4.0.23.Final from the shaded jar.
[INFO] Excluding org.apache.zookeeper:zookeeper:jar:3.4.6 from the shaded jar.
[INFO] Excluding org.jruby.jcodings:jcodings:jar:1.0.8 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-server:jar:1.3.1 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-procedure:jar:1.3.1 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-prefix-tree:jar:1.3.1 from the shaded 
jar.
[INFO] Excluding commons-httpclient:commons-httpclient:jar:3.1 from the shaded 
jar.
[INFO] Excluding com.sun.jersey:jersey-core:jar:1.9 from the shaded jar.
[INFO] Excluding com.sun.jersey:jersey-server:jar:1.9 from the shaded jar.
[INFO] Excluding asm:asm:jar:3.1 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:jetty:jar:6.1.26 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:jetty-sslengine:jar:6.1.26 from the shaded 
jar.
[INFO] Excluding org.mortbay.jetty:jsp-2.1:jar:6.1.14 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:jsp-api-2.1:jar:6.1.14 from the shaded jar.
[INFO] Excluding org.mortbay.jetty:servlet-api-2.5:jar:6.1.14 from the shaded 
jar.
[INFO] Excluding tomcat:jasper-compiler:jar:5.5.23 from the shaded jar.
[INFO] Excluding tomcat:jasper-runtime:jar:5.5.23 from the shaded jar.
[INFO] Excluding commons-el:commons-el:jar:1.0 from the shaded jar.
[INFO] Excluding org.jamon:jamon-runtime:jar:2.4.1 from the shaded jar.
[INFO] Excluding com.lmax:disruptor:jar:3.3.6 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-hdfs:jar:2.7.1 from the shaded jar.
[INFO] Excluding commons-daemon:commons-daemon:jar:1.0.13 from the shaded jar.
[INFO] Excluding org.fusesource.leveldbjni:leveldbjni-all:jar:1.8 from the 
shaded jar.
[INFO] Excluding org.apache.hbase:hbase-hadoop-compat:jar:1.3.1 from the shaded 
jar.
[INFO] Excluding org.apache.hadoop:hadoop-common:jar:2.7.1 from the shaded jar.
[INFO] Excluding org.apache.commons:commons-math3:jar:3.1.1 from the shaded jar.
[INFO] Excluding xmlenc:xmlenc:jar:0.52 from the shaded jar.
[INFO] Excluding commons-net:commons-net:jar:3.1 from the shaded jar.
[INFO] Excluding javax.servlet:servlet-api:jar:2.5 from the shaded jar.
[INFO] Excluding javax.servlet.jsp:jsp-api:jar:2.1 from the shaded jar.
[INFO] Excluding com.sun.jersey:jersey-json:jar:1.9 from the shaded jar.
[INFO] Excluding org.codehaus.jettison:jettison:jar:1.1 from the shaded jar.
[INFO] Excluding com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 from the 

Build failed in Jenkins: Phoenix | Master #2029

2018-05-18 Thread Apache Jenkins Server
See 


Changes:

[vincentpoon] PHOENIX-4704 Presplit index tables when building asynchronously

--
[...truncated 1.60 MB...]
[INFO] Running org.apache.phoenix.end2end.join.HashJoinCacheIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.55 s - in org.apache.phoenix.end2end.join.HashJoinCacheIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.168 s - in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinMoreIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 359.581 s - in org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 338.792 s - in org.apache.phoenix.end2end.index.LocalMutableNonTxIndexIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 357.909 s - in org.apache.phoenix.end2end.index.LocalMutableTxIndexIT
[INFO] Running org.apache.phoenix.end2end.join.HashJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.516 s - in org.apache.phoenix.end2end.join.HashJoinMoreIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 188.777 s - in org.apache.phoenix.end2end.join.HashJoinNoIndexIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 193.266 s - in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 871.612 s <<< FAILURE! - in org.apache.phoenix.end2end.index.DropColumnIT
[ERROR] testDroppingIndexedColDropsViewIndex[DropColumnIT_mutable=false, columnEncoded=false](org.apache.phoenix.end2end.index.DropColumnIT)  Time elapsed: 566.315 s  <<< ERROR!
org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  disableIndexOnFailure=true, Failed to write to multiple index tables: [_IDX_T000447]
    at org.apache.phoenix.end2end.index.DropColumnIT.helpTestDroppingIndexedColDropsViewIndex(DropColumnIT.java:464)
    at org.apache.phoenix.end2end.index.DropColumnIT.testDroppingIndexedColDropsViewIndex(DropColumnIT.java:416)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  disableIndexOnFailure=true, Failed to write to multiple index tables: [_IDX_T000447]
    at org.apache.phoenix.end2end.index.DropColumnIT.helpTestDroppingIndexedColDropsViewIndex(DropColumnIT.java:464)
    at org.apache.phoenix.end2end.index.DropColumnIT.testDroppingIndexedColDropsViewIndex(DropColumnIT.java:416)
Caused by: org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  disableIndexOnFailure=true, Failed to write to multiple index tables: [_IDX_T000447]
    at org.apache.phoenix.end2end.index.DropColumnIT.helpTestDroppingIndexedColDropsViewIndex(DropColumnIT.java:464)
    at org.apache.phoenix.end2end.index.DropColumnIT.testDroppingIndexedColDropsViewIndex(DropColumnIT.java:416)

[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 320.827 s - in org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.887 s - in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.254 s - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.873 s - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.689 s - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 s - in 

phoenix git commit: PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value

2018-05-18 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/5.x-HBase-2.0 3d0c724c1 -> eb34a8af7


PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when 
descending or null value


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/eb34a8af
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/eb34a8af
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/eb34a8af

Branch: refs/heads/5.x-HBase-2.0
Commit: eb34a8af7ed7940dafbf431b9627d41f54e489dd
Parents: 3d0c724
Author: James Taylor 
Authored: Fri May 18 08:46:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:15:56 2018 -0700

--
 .../org/apache/phoenix/end2end/OrderByIT.java   | 45 +---
 .../GroupedAggregateRegionObserver.java |  4 +-
 .../phoenix/filter/DistinctPrefixFilter.java| 31 ++
 .../apache/phoenix/filter/SkipScanFilter.java   |  4 +-
 .../org/apache/phoenix/schema/RowKeySchema.java | 20 +
 5 files changed, 61 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/eb34a8af/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 9d6a450..578a3af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -27,10 +27,10 @@ import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -663,7 +663,6 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 conn = DriverManager.getConnection(getUrl(), props);
 
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID VARCHAR,"+
 "CONTAINER_ID VARCHAR,"+
@@ -871,26 +870,25 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testOrderByReverseOptimizationBug3491() throws Exception {
+public void testOrderByReverseOptimization() throws Exception {
 for(boolean salted: new boolean[]{true,false}) {
-doTestOrderByReverseOptimizationBug3491(salted,true,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,false);
+doTestOrderByReverseOptimization(salted,true,true,true);
+doTestOrderByReverseOptimization(salted,true,true,false);
+doTestOrderByReverseOptimization(salted,true,false,true);
+doTestOrderByReverseOptimization(salted,true,false,false);
+doTestOrderByReverseOptimization(salted,false,true,true);
+doTestOrderByReverseOptimization(salted,false,true,false);
+doTestOrderByReverseOptimization(salted,false,false,true);
+doTestOrderByReverseOptimization(salted,false,false,false);
 }
 }
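
The eight calls above enumerate every combination of the three descending flags by hand. An equivalent, more compact driver would generate them with nested loops; this is a sketch of an alternative shape, not the committed refactor:

    // Equivalent enumeration of all 2^3 desc-flag combinations per salted setting.
    for (boolean salted : new boolean[] {true, false}) {
        for (boolean desc1 : new boolean[] {true, false}) {
            for (boolean desc2 : new boolean[] {true, false}) {
                for (boolean desc3 : new boolean[] {true, false}) {
                    doTestOrderByReverseOptimization(salted, desc1, desc2, desc3);
                }
            }
        }
    }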
 
-private void doTestOrderByReverseOptimizationBug3491(boolean 
salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
+private void doTestOrderByReverseOptimization(boolean salted,boolean 
desc1,boolean desc2,boolean desc3) throws Exception {
 Connection conn = null;
 try {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 conn = DriverManager.getConnection(getUrl(), props);
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID INTEGER NOT NULL,"+
  

[2/2] phoenix git commit: PHOENIX-4744 Reduce parallelism in integration test runs

2018-05-18 Thread jamestaylor
PHOENIX-4744 Reduce parallelism in integration test runs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b4ffb7df
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b4ffb7df
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b4ffb7df

Branch: refs/heads/4.x-HBase-0.98
Commit: b4ffb7dfd2b22d7834e10a58f4eeae9547f3f9a1
Parents: 7cd3d56
Author: James Taylor 
Authored: Fri May 18 08:50:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:14:22 2018 -0700

--
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b4ffb7df/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 44094c7..541a3c9 100644
--- a/pom.xml
+++ b/pom.xml
@@ -123,8 +123,8 @@
 2.5.2
 
 
-5
-5
+8
+4
 false
 false
 



[1/2] phoenix git commit: PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value

2018-05-18 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 865eb9a53 -> b4ffb7dfd


PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when 
descending or null value


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7cd3d561
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7cd3d561
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7cd3d561

Branch: refs/heads/4.x-HBase-0.98
Commit: 7cd3d561317832c979a4f7249c965ea66c603bc7
Parents: 865eb9a
Author: James Taylor 
Authored: Fri May 18 08:46:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:13:50 2018 -0700

--
 .../org/apache/phoenix/end2end/OrderByIT.java   | 45 +---
 .../GroupedAggregateRegionObserver.java |  4 +-
 .../phoenix/filter/DistinctPrefixFilter.java| 31 ++
 .../apache/phoenix/filter/SkipScanFilter.java   |  4 +-
 .../org/apache/phoenix/schema/RowKeySchema.java | 20 +
 5 files changed, 61 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7cd3d561/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 9d6a450..578a3af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -27,10 +27,10 @@ import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -663,7 +663,6 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 conn = DriverManager.getConnection(getUrl(), props);
 
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID VARCHAR,"+
 "CONTAINER_ID VARCHAR,"+
@@ -871,26 +870,25 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testOrderByReverseOptimizationBug3491() throws Exception {
+public void testOrderByReverseOptimization() throws Exception {
 for(boolean salted: new boolean[]{true,false}) {
-doTestOrderByReverseOptimizationBug3491(salted,true,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,false);
+doTestOrderByReverseOptimization(salted,true,true,true);
+doTestOrderByReverseOptimization(salted,true,true,false);
+doTestOrderByReverseOptimization(salted,true,false,true);
+doTestOrderByReverseOptimization(salted,true,false,false);
+doTestOrderByReverseOptimization(salted,false,true,true);
+doTestOrderByReverseOptimization(salted,false,true,false);
+doTestOrderByReverseOptimization(salted,false,false,true);
+doTestOrderByReverseOptimization(salted,false,false,false);
 }
 }
 
-private void doTestOrderByReverseOptimizationBug3491(boolean 
salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
+private void doTestOrderByReverseOptimization(boolean salted,boolean 
desc1,boolean desc2,boolean desc3) throws Exception {
 Connection conn = null;
 try {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 conn = DriverManager.getConnection(getUrl(), props);
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID INTEGER NOT NULL,"+

[2/2] phoenix git commit: PHOENIX-4744 Reduce parallelism in integration test runs

2018-05-18 Thread jamestaylor
PHOENIX-4744 Reduce parallelism in integration test runs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/272a7e6b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/272a7e6b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/272a7e6b

Branch: refs/heads/4.x-HBase-1.1
Commit: 272a7e6bb3406df1843a2fd4570a2a31569a7709
Parents: 4b7aa24
Author: James Taylor 
Authored: Fri May 18 08:50:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:13:08 2018 -0700

--
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/272a7e6b/pom.xml
--
diff --git a/pom.xml b/pom.xml
index e7cc22c..f9a7ab8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -121,8 +121,8 @@
 2.5.2
 
 
-6
-6
+8
+4
 false
 false
 



[1/2] phoenix git commit: PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value

2018-05-18 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 57db9afee -> 272a7e6bb


PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when 
descending or null value


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4b7aa242
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4b7aa242
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4b7aa242

Branch: refs/heads/4.x-HBase-1.1
Commit: 4b7aa242b7fa756b29a9298d1d3217f20fd7afba
Parents: 57db9af
Author: James Taylor 
Authored: Fri May 18 08:46:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:12:20 2018 -0700

--
 .../org/apache/phoenix/end2end/OrderByIT.java   | 45 +---
 .../GroupedAggregateRegionObserver.java |  4 +-
 .../phoenix/filter/DistinctPrefixFilter.java| 31 ++
 .../apache/phoenix/filter/SkipScanFilter.java   |  4 +-
 .../org/apache/phoenix/schema/RowKeySchema.java | 20 +
 5 files changed, 61 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4b7aa242/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 9d6a450..578a3af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -27,10 +27,10 @@ import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -663,7 +663,6 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 conn = DriverManager.getConnection(getUrl(), props);
 
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID VARCHAR,"+
 "CONTAINER_ID VARCHAR,"+
@@ -871,26 +870,25 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testOrderByReverseOptimizationBug3491() throws Exception {
+public void testOrderByReverseOptimization() throws Exception {
 for(boolean salted: new boolean[]{true,false}) {
-doTestOrderByReverseOptimizationBug3491(salted,true,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,false);
+doTestOrderByReverseOptimization(salted,true,true,true);
+doTestOrderByReverseOptimization(salted,true,true,false);
+doTestOrderByReverseOptimization(salted,true,false,true);
+doTestOrderByReverseOptimization(salted,true,false,false);
+doTestOrderByReverseOptimization(salted,false,true,true);
+doTestOrderByReverseOptimization(salted,false,true,false);
+doTestOrderByReverseOptimization(salted,false,false,true);
+doTestOrderByReverseOptimization(salted,false,false,false);
 }
 }
 
-private void doTestOrderByReverseOptimizationBug3491(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
+private void doTestOrderByReverseOptimization(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
 Connection conn = null;
 try {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 conn = DriverManager.getConnection(getUrl(), props);
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID INTEGER NOT NULL,"+
  

[1/2] phoenix git commit: PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value

2018-05-18 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.11 b2829828e -> 44dbb42b6


PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0109b9a5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0109b9a5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0109b9a5

Branch: refs/heads/4.x-cdh5.11
Commit: 0109b9a50ffeb99a9f0ea19b8abe95b16ddbb7a4
Parents: b282982
Author: James Taylor 
Authored: Fri May 18 08:46:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:10:51 2018 -0700

--
 .../org/apache/phoenix/end2end/OrderByIT.java   | 45 +---
 .../GroupedAggregateRegionObserver.java |  4 +-
 .../phoenix/filter/DistinctPrefixFilter.java| 31 ++
 .../apache/phoenix/filter/SkipScanFilter.java   |  4 +-
 .../org/apache/phoenix/schema/RowKeySchema.java | 20 +
 5 files changed, 61 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0109b9a5/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 9d6a450..578a3af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -27,10 +27,10 @@ import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -663,7 +663,6 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 conn = DriverManager.getConnection(getUrl(), props);
 
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID VARCHAR,"+
 "CONTAINER_ID VARCHAR,"+
@@ -871,26 +870,25 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testOrderByReverseOptimizationBug3491() throws Exception {
+public void testOrderByReverseOptimization() throws Exception {
 for(boolean salted: new boolean[]{true,false}) {
-doTestOrderByReverseOptimizationBug3491(salted,true,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,false);
+doTestOrderByReverseOptimization(salted,true,true,true);
+doTestOrderByReverseOptimization(salted,true,true,false);
+doTestOrderByReverseOptimization(salted,true,false,true);
+doTestOrderByReverseOptimization(salted,true,false,false);
+doTestOrderByReverseOptimization(salted,false,true,true);
+doTestOrderByReverseOptimization(salted,false,true,false);
+doTestOrderByReverseOptimization(salted,false,false,true);
+doTestOrderByReverseOptimization(salted,false,false,false);
 }
 }
 
-private void doTestOrderByReverseOptimizationBug3491(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
+private void doTestOrderByReverseOptimization(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
 Connection conn = null;
 try {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 conn = DriverManager.getConnection(getUrl(), props);
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID INTEGER NOT NULL,"+
  

[2/2] phoenix git commit: PHOENIX-4744 Reduce parallelism in integration test runs

2018-05-18 Thread jamestaylor
PHOENIX-4744 Reduce parallelism in integration test runs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8e012d01
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8e012d01
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8e012d01

Branch: refs/heads/4.x-HBase-1.2
Commit: 8e012d01347d68b778f7e0ba3e879ad2ac77a0c5
Parents: 59af6e3
Author: James Taylor 
Authored: Fri May 18 08:50:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:09:16 2018 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--
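For context: the one-line pom change below halves the number of concurrently forked integration-test JVMs (the mailing-list archive strips the XML tags from the diff, but the 8 -> 4 values and the commit title point at Phoenix's numForkedIT property). Such a property is typically wired into the failsafe plugin along these lines; this is an illustrative sketch, not necessarily Phoenix's exact plugin configuration:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <configuration>
        <!-- Each forked JVM runs its own mini-cluster, so fewer forks means
             less memory pressure and fewer spurious timeouts on CI hosts. -->
        <forkCount>${numForkedIT}</forkCount>
        <reuseForks>true</reuseForks>
      </configuration>
    </plugin>

Since it is an ordinary Maven property, it can usually be overridden per run, e.g. mvn verify -DnumForkedIT=2.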


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8e012d01/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 112cab4..9d0fec2 100644
--- a/pom.xml
+++ b/pom.xml
@@ -122,7 +122,7 @@
 
 
 8
-8
+4
 false
 false
 



[1/2] phoenix git commit: PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value

2018-05-18 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 bb0f9816a -> 8e012d013


PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/59af6e3f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/59af6e3f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/59af6e3f

Branch: refs/heads/4.x-HBase-1.2
Commit: 59af6e3fbeb166421ef4a4d49feef9aa7fb18530
Parents: bb0f981
Author: James Taylor 
Authored: Fri May 18 08:46:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:09:11 2018 -0700

--
 .../org/apache/phoenix/end2end/OrderByIT.java   | 45 +---
 .../GroupedAggregateRegionObserver.java |  4 +-
 .../phoenix/filter/DistinctPrefixFilter.java| 31 ++
 .../apache/phoenix/filter/SkipScanFilter.java   |  4 +-
 .../org/apache/phoenix/schema/RowKeySchema.java | 20 +
 5 files changed, 61 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/59af6e3f/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 9d6a450..578a3af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -27,10 +27,10 @@ import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -663,7 +663,6 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 conn = DriverManager.getConnection(getUrl(), props);
 
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID VARCHAR,"+
 "CONTAINER_ID VARCHAR,"+
@@ -871,26 +870,25 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testOrderByReverseOptimizationBug3491() throws Exception {
+public void testOrderByReverseOptimization() throws Exception {
 for(boolean salted: new boolean[]{true,false}) {
-doTestOrderByReverseOptimizationBug3491(salted,true,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,false);
+doTestOrderByReverseOptimization(salted,true,true,true);
+doTestOrderByReverseOptimization(salted,true,true,false);
+doTestOrderByReverseOptimization(salted,true,false,true);
+doTestOrderByReverseOptimization(salted,true,false,false);
+doTestOrderByReverseOptimization(salted,false,true,true);
+doTestOrderByReverseOptimization(salted,false,true,false);
+doTestOrderByReverseOptimization(salted,false,false,true);
+doTestOrderByReverseOptimization(salted,false,false,false);
 }
 }
 
-private void doTestOrderByReverseOptimizationBug3491(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
+private void doTestOrderByReverseOptimization(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
 Connection conn = null;
 try {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 conn = DriverManager.getConnection(getUrl(), props);
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID INTEGER NOT NULL,"+
  

[2/2] phoenix git commit: PHOENIX-4744 Reduce parallelism in integration test runs

2018-05-18 Thread jamestaylor
PHOENIX-4744 Reduce parallelism in integration test runs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9066ce39
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9066ce39
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9066ce39

Branch: refs/heads/4.x-HBase-1.3
Commit: 9066ce39508efcb5eda118012b82ad3b4e7bdc46
Parents: d7533f7
Author: James Taylor 
Authored: Fri May 18 08:50:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:07:17 2018 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9066ce39/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 96eb1ac..95654b0 100644
--- a/pom.xml
+++ b/pom.xml
@@ -122,7 +122,7 @@
 
 
 8
-8
+4
 false
 false
 



[1/2] phoenix git commit: PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value

2018-05-18 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 52304092f -> 9066ce395


PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d7533f70
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d7533f70
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d7533f70

Branch: refs/heads/4.x-HBase-1.3
Commit: d7533f70212ec9cdb664b8f7d6d3814e3ec6e7f5
Parents: 5230409
Author: James Taylor 
Authored: Fri May 18 08:46:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:07:03 2018 -0700

--
 .../org/apache/phoenix/end2end/OrderByIT.java   | 45 +---
 .../GroupedAggregateRegionObserver.java |  4 +-
 .../phoenix/filter/DistinctPrefixFilter.java| 31 ++
 .../apache/phoenix/filter/SkipScanFilter.java   |  4 +-
 .../org/apache/phoenix/schema/RowKeySchema.java | 20 +
 5 files changed, 61 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d7533f70/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 9d6a450..578a3af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -27,10 +27,10 @@ import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -663,7 +663,6 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 conn = DriverManager.getConnection(getUrl(), props);
 
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID VARCHAR,"+
 "CONTAINER_ID VARCHAR,"+
@@ -871,26 +870,25 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testOrderByReverseOptimizationBug3491() throws Exception {
+public void testOrderByReverseOptimization() throws Exception {
 for(boolean salted: new boolean[]{true,false}) {
-doTestOrderByReverseOptimizationBug3491(salted,true,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,false);
+doTestOrderByReverseOptimization(salted,true,true,true);
+doTestOrderByReverseOptimization(salted,true,true,false);
+doTestOrderByReverseOptimization(salted,true,false,true);
+doTestOrderByReverseOptimization(salted,true,false,false);
+doTestOrderByReverseOptimization(salted,false,true,true);
+doTestOrderByReverseOptimization(salted,false,true,false);
+doTestOrderByReverseOptimization(salted,false,false,true);
+doTestOrderByReverseOptimization(salted,false,false,false);
 }
 }
 
-private void doTestOrderByReverseOptimizationBug3491(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
+private void doTestOrderByReverseOptimization(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
 Connection conn = null;
 try {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 conn = DriverManager.getConnection(getUrl(), props);
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID INTEGER NOT NULL,"+
  

[2/2] phoenix git commit: PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value

2018-05-18 Thread jamestaylor
PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when descending or null value


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/48b6f99a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/48b6f99a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/48b6f99a

Branch: refs/heads/master
Commit: 48b6f99acdeb91e3167e7beeed49747f7b7dcc6c
Parents: 6ab9b37
Author: James Taylor 
Authored: Fri May 18 08:46:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:01:47 2018 -0700

--
 .../org/apache/phoenix/end2end/OrderByIT.java   | 45 +---
 .../GroupedAggregateRegionObserver.java |  4 +-
 .../phoenix/filter/DistinctPrefixFilter.java| 31 ++
 .../apache/phoenix/filter/SkipScanFilter.java   |  4 +-
 .../org/apache/phoenix/schema/RowKeySchema.java | 20 +
 5 files changed, 61 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/48b6f99a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 9d6a450..578a3af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -27,10 +27,10 @@ import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -663,7 +663,6 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 conn = DriverManager.getConnection(getUrl(), props);
 
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID VARCHAR,"+
 "CONTAINER_ID VARCHAR,"+
@@ -871,26 +870,25 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 }
 
 @Test
-public void testOrderByReverseOptimizationBug3491() throws Exception {
+public void testOrderByReverseOptimization() throws Exception {
 for(boolean salted: new boolean[]{true,false}) {
-doTestOrderByReverseOptimizationBug3491(salted,true,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,true,false,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,true,false);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,true);
-doTestOrderByReverseOptimizationBug3491(salted,false,false,false);
+doTestOrderByReverseOptimization(salted,true,true,true);
+doTestOrderByReverseOptimization(salted,true,true,false);
+doTestOrderByReverseOptimization(salted,true,false,true);
+doTestOrderByReverseOptimization(salted,true,false,false);
+doTestOrderByReverseOptimization(salted,false,true,true);
+doTestOrderByReverseOptimization(salted,false,true,false);
+doTestOrderByReverseOptimization(salted,false,false,true);
+doTestOrderByReverseOptimization(salted,false,false,false);
 }
 }
 
-private void doTestOrderByReverseOptimizationBug3491(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
+private void doTestOrderByReverseOptimization(boolean salted,boolean desc1,boolean desc2,boolean desc3) throws Exception {
 Connection conn = null;
 try {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 conn = DriverManager.getConnection(getUrl(), props);
 String tableName=generateUniqueName();
-conn.createStatement().execute("DROP TABLE if exists "+tableName);
 String sql="CREATE TABLE "+tableName+" ( "+
 "ORGANIZATION_ID INTEGER NOT NULL,"+
 "CONTAINER_ID INTEGER NOT NULL,"+
@@ -965,26 +963,25 @@ public class OrderByIT extends 

[1/2] phoenix git commit: PHOENIX-4744 Reduce parallelism in integration test runs

2018-05-18 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 6ab9b372f -> 58415e2f3


PHOENIX-4744 Reduce parallelism in integration test runs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/58415e2f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/58415e2f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/58415e2f

Branch: refs/heads/master
Commit: 58415e2f31617ec543cb01e8bc27ce44c4efbe0d
Parents: 48b6f99
Author: James Taylor 
Authored: Fri May 18 08:50:38 2018 -0700
Committer: James Taylor 
Committed: Fri May 18 17:01:47 2018 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/58415e2f/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 87c2db6..38c1c41 100644
--- a/pom.xml
+++ b/pom.xml
@@ -122,7 +122,7 @@
 
 
 8
-8
+4
 false
 false
 



phoenix git commit: PHOENIX-4704 Presplit index tables when building asynchronously

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 933dd0b80 -> 57db9afee


PHOENIX-4704 Presplit index tables when building asynchronously


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/57db9afe
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/57db9afe
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/57db9afe

Branch: refs/heads/4.x-HBase-1.1
Commit: 57db9afee6676c0fb3d8563bc14562c843a6894f
Parents: 933dd0b
Author: Vincent Poon 
Authored: Fri May 18 11:22:26 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 16:48:45 2018 -0700

--
 .../org/apache/phoenix/end2end/IndexToolIT.java | 106 +-
 .../phoenix/mapreduce/index/IndexTool.java  | 142 ++-
 2 files changed, 242 insertions(+), 6 deletions(-)
--
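The presplit logic lands in IndexTool below; it rests on a plain HBase primitive: create the index table with explicit split points so an asynchronous build's bulk writes spread across region servers from the start instead of hammering a single region. A minimal sketch of that primitive with made-up table and split-point names (not the IndexTool code itself):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PresplitSketch {
        // N split points yield N+1 regions from the moment the table exists.
        static void createPresplit(HBaseAdmin admin, String table, byte[][] splitPoints)
                throws Exception {
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(table));
            desc.addFamily(new HColumnDescriptor(Bytes.toBytes("0"))); // Phoenix's default family
            admin.createTable(desc, splitPoints);
        }

        public static void main(String[] args) throws Exception {
            // In practice the split points come from sampling the indexed
            // column's data distribution, not from hardcoded literals.
            byte[][] splits = { Bytes.toBytes("g"), Bytes.toBytes("n"), Bytes.toBytes("t") };
            // createPresplit(admin, "MY_SCHEMA.MY_INDEX", splits); // given a live HBaseAdmin
        }
    }

Picking good split points is the hard part, which is what the equi-depth histogram from PHOENIX-4724, further down in this digest, is for.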


http://git-wip-us.apache.org/repos/asf/phoenix/blob/57db9afe/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index afb6d72..a120aaa 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -21,12 +21,15 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -34,8 +37,16 @@ import java.util.Properties;
 import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -54,7 +65,7 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends BaseTest {
+public class IndexToolIT extends ParallelStatsEnabledIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -85,7 +96,7 @@ public class IndexToolIT extends BaseTest {
 }
 
 @BeforeClass
-public static void doSetup() throws Exception {
+public static void setup() throws Exception {
 Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
@@ -249,6 +260,86 @@ public class IndexToolIT extends BaseTest {
 }
 }
 
+/**
+ * Test presplitting an index table
+ */
+@Test
+public void testSplitIndex() throws Exception {
+if (localIndex) return; // can't split local indexes
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+final TableName dataTN = TableName.valueOf(dataTableFullName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+TableName indexTN = TableName.valueOf(indexTableFullName);
+try (Connection conn =
+DriverManager.getConnection(getUrl(), PropertiesUtil.deepCopy(TEST_PROPERTIES));
+HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+String dataDDL =
+"CREATE TABLE " + dataTableFullName + "(\n"
++ "ID VARCHAR NOT NULL PRIMARY KEY,\n"
++ "\"info\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"test\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"info\".CAP_DATE 

phoenix git commit: PHOENIX-4704 Presplit index tables when building asynchronously

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 f67b2b793 -> bb0f9816a


PHOENIX-4704 Presplit index tables when building asynchronously


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bb0f9816
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bb0f9816
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bb0f9816

Branch: refs/heads/4.x-HBase-1.2
Commit: bb0f9816a076f3d647a2eb0ef77f0fc578d58fe0
Parents: f67b2b7
Author: Vincent Poon 
Authored: Fri May 18 11:22:26 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 16:46:54 2018 -0700

--
 .../org/apache/phoenix/end2end/IndexToolIT.java | 106 +-
 .../phoenix/mapreduce/index/IndexTool.java  | 142 ++-
 2 files changed, 242 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb0f9816/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index afb6d72..a120aaa 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -21,12 +21,15 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -34,8 +37,16 @@ import java.util.Properties;
 import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -54,7 +65,7 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends BaseTest {
+public class IndexToolIT extends ParallelStatsEnabledIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -85,7 +96,7 @@ public class IndexToolIT extends BaseTest {
 }
 
 @BeforeClass
-public static void doSetup() throws Exception {
+public static void setup() throws Exception {
 Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
@@ -249,6 +260,86 @@ public class IndexToolIT extends BaseTest {
 }
 }
 
+/**
+ * Test presplitting an index table
+ */
+@Test
+public void testSplitIndex() throws Exception {
+if (localIndex) return; // can't split local indexes
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+final TableName dataTN = TableName.valueOf(dataTableFullName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+TableName indexTN = TableName.valueOf(indexTableFullName);
+try (Connection conn =
+DriverManager.getConnection(getUrl(), PropertiesUtil.deepCopy(TEST_PROPERTIES));
+HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+String dataDDL =
+"CREATE TABLE " + dataTableFullName + "(\n"
++ "ID VARCHAR NOT NULL PRIMARY KEY,\n"
++ "\"info\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"test\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"info\".CAP_DATE 

phoenix git commit: PHOENIX-4704 Presplit index tables when building asynchronously

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 5935edd71 -> 52304092f


PHOENIX-4704 Presplit index tables when building asynchronously


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/52304092
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/52304092
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/52304092

Branch: refs/heads/4.x-HBase-1.3
Commit: 52304092f5876ab9c1086e954f2c5b0ba875a03e
Parents: 5935edd
Author: Vincent Poon 
Authored: Fri May 18 11:22:26 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 16:43:23 2018 -0700

--
 .../org/apache/phoenix/end2end/IndexToolIT.java | 106 +-
 .../phoenix/mapreduce/index/IndexTool.java  | 142 ++-
 2 files changed, 242 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/52304092/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index afb6d72..a120aaa 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -21,12 +21,15 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -34,8 +37,16 @@ import java.util.Properties;
 import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -54,7 +65,7 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends BaseTest {
+public class IndexToolIT extends ParallelStatsEnabledIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -85,7 +96,7 @@ public class IndexToolIT extends BaseTest {
 }
 
 @BeforeClass
-public static void doSetup() throws Exception {
+public static void setup() throws Exception {
 Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
@@ -249,6 +260,86 @@ public class IndexToolIT extends BaseTest {
 }
 }
 
+/**
+ * Test presplitting an index table
+ */
+@Test
+public void testSplitIndex() throws Exception {
+if (localIndex) return; // can't split local indexes
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+final TableName dataTN = TableName.valueOf(dataTableFullName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+TableName indexTN = TableName.valueOf(indexTableFullName);
+try (Connection conn =
+DriverManager.getConnection(getUrl(), PropertiesUtil.deepCopy(TEST_PROPERTIES));
+HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+String dataDDL =
+"CREATE TABLE " + dataTableFullName + "(\n"
++ "ID VARCHAR NOT NULL PRIMARY KEY,\n"
++ "\"info\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"test\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"info\".CAP_DATE 

phoenix git commit: PHOENIX-4704 Presplit index tables when building asynchronously

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/master cb17adbbd -> 6ab9b372f


PHOENIX-4704 Presplit index tables when building asynchronously


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6ab9b372
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6ab9b372
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6ab9b372

Branch: refs/heads/master
Commit: 6ab9b372f16f37b11e657b6803c6a60007815824
Parents: cb17adb
Author: Vincent Poon 
Authored: Fri May 18 11:22:26 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 16:42:53 2018 -0700

--
 .../org/apache/phoenix/end2end/IndexToolIT.java | 106 +-
 .../phoenix/mapreduce/index/IndexTool.java  | 142 ++-
 2 files changed, 242 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6ab9b372/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index afb6d72..a120aaa 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -21,12 +21,15 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -34,8 +37,16 @@ import java.util.Properties;
 import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
-import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -54,7 +65,7 @@ import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
 @Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends BaseTest {
+public class IndexToolIT extends ParallelStatsEnabledIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -85,7 +96,7 @@ public class IndexToolIT extends BaseTest {
 }
 
 @BeforeClass
-public static void doSetup() throws Exception {
+public static void setup() throws Exception {
 Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
@@ -249,6 +260,86 @@ public class IndexToolIT extends BaseTest {
 }
 }
 
+/**
+ * Test presplitting an index table
+ */
+@Test
+public void testSplitIndex() throws Exception {
+if (localIndex) return; // can't split local indexes
+String schemaName = generateUniqueName();
+String dataTableName = generateUniqueName();
+String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+final TableName dataTN = TableName.valueOf(dataTableFullName);
+String indexTableName = generateUniqueName();
+String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
+TableName indexTN = TableName.valueOf(indexTableFullName);
+try (Connection conn =
+DriverManager.getConnection(getUrl(), PropertiesUtil.deepCopy(TEST_PROPERTIES));
+HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+String dataDDL =
+"CREATE TABLE " + dataTableFullName + "(\n"
++ "ID VARCHAR NOT NULL PRIMARY KEY,\n"
++ "\"info\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"test\".CAR_NUM VARCHAR(18) NULL,\n"
++ "\"info\".CAP_DATE VARCHAR NULL,\n" 

Build failed in Jenkins: Phoenix | Master #2028

2018-05-18 Thread Apache Jenkins Server
See 


Changes:

[vincentpoon] PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

--
[...truncated 88.91 KB...]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.993 s - in org.apache.phoenix.end2end.MappingTableDataTypeIT
[INFO] Running org.apache.phoenix.end2end.NativeHBaseTypesIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.216 s - in org.apache.phoenix.end2end.LikeExpressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.519 s - in org.apache.phoenix.end2end.NamespaceSchemaMappingIT
[INFO] Running org.apache.phoenix.end2end.NotQueryWithGlobalImmutableIndexesIT
[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.657 s - in org.apache.phoenix.end2end.ModulusExpressionIT
[INFO] Running org.apache.phoenix.end2end.NthValueFunctionIT
[INFO] Running org.apache.phoenix.end2end.NotQueryWithLocalImmutableIndexesIT
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.778 s - in org.apache.phoenix.end2end.NthValueFunctionIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.92 s - in org.apache.phoenix.end2end.NativeHBaseTypesIT
[INFO] Running org.apache.phoenix.end2end.NumericArithmeticIT
[INFO] Running org.apache.phoenix.end2end.NullIT
[INFO] Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.746 s - in org.apache.phoenix.end2end.NumericArithmeticIT
[INFO] Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.389 s - in org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
[INFO] Running org.apache.phoenix.end2end.OnDuplicateKeyIT
[INFO] Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 335.291 s - in org.apache.phoenix.end2end.NotQueryWithGlobalImmutableIndexesIT
[INFO] Running org.apache.phoenix.end2end.OrderByIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 383.621 s - in org.apache.phoenix.end2end.NotQueryWithLocalImmutableIndexesIT
[INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 591.007 s - in org.apache.phoenix.end2end.InQueryIT
[INFO] Running org.apache.phoenix.end2end.PartialScannerResultsDisabledIT
[INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 659.358 s - in org.apache.phoenix.end2end.GroupByIT
[INFO] Running org.apache.phoenix.end2end.PercentileIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.366 s - in org.apache.phoenix.end2end.PartialScannerResultsDisabledIT
[INFO] Running org.apache.phoenix.end2end.PointInTimeQueryIT
[INFO] Running org.apache.phoenix.end2end.PhoenixRuntimeIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.814 s - in org.apache.phoenix.end2end.PercentileIT
[INFO] Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.019 s - in org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
[INFO] Running org.apache.phoenix.end2end.PrimitiveTypeIT
[INFO] Tests run: 70, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 663.676 s - in org.apache.phoenix.end2end.IntArithmeticIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.075 s - in org.apache.phoenix.end2end.PhoenixRuntimeIT
[INFO] Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.201 s - in org.apache.phoenix.end2end.PrimitiveTypeIT
[INFO] Running org.apache.phoenix.end2end.QueryExecWithoutSCNIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.371 s - in org.apache.phoenix.end2end.QueryExecWithoutSCNIT
[INFO] Running org.apache.phoenix.end2end.QueryIT
[INFO] Running org.apache.phoenix.end2end.ProductMetricsIT
[INFO] Tests run: 48, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 449.879 s - in org.apache.phoenix.end2end.OnDuplicateKeyIT
[INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 543.871 s - in org.apache.phoenix.end2end.NullIT
[INFO] Running org.apache.phoenix.end2end.QueryMoreIT
[INFO] Running org.apache.phoenix.end2end.QueryWithOffsetIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 169.229 s - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.232 s - in org.apache.phoenix.end2end.QueryWithOffsetIT
[INFO] Running org.apache.phoenix.end2end.RTrimFunctionIT
[INFO] Running org.apache.phoenix.end2end.RangeScanIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.823 s - in org.apache.phoenix.end2end.RTrimFunctionIT
[INFO] Running org.apache.phoenix.end2end.ReadOnlyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.108 

Apache-Phoenix | 4.x-HBase-1.2 | Build Successful

2018-05-18 Thread Apache Jenkins Server
4.x-HBase-1.2 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.2

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.2/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.2/lastCompletedBuild/testReport/

Changes
[vincentpoon] PHOENIX-4724 Efficient Equi-Depth histogram for streaming data



Build times for last couple of runs. Latest build time is the right most | Legend blue: normal, red: test failure, gray: timeout


Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #137

2018-05-18 Thread Apache Jenkins Server
See 


Changes:

[vincentpoon] PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

--
[...truncated 107.42 KB...]
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.148 s - in org.apache.phoenix.end2end.join.HashJoinMoreIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 370.19 s - in org.apache.phoenix.end2end.join.HashJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 412.507 s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 384.568 s - in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 914.529 s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 642.669 s - in org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.185 s - in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.335 s - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.536 s - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.223 s - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.555 s - in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.586 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.919 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.626 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.701 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.906 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 660.235 s - in org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 286.852 s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.737 s - in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.717 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 837.525 s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 842.211 s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 422.637 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 449.714 s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 3429, Failures: 0, Errors: 0, Skipped: 3
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) @ phoenix-core ---
[INFO] 
[INFO] ---

Jenkins build is back to normal : Phoenix | 4.x-HBase-0.98 #1897

2018-05-18 Thread Apache Jenkins Server
See 




phoenix git commit: PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 0c107b773 -> 865eb9a53


PHOENIX-4724 Efficient Equi-Depth histogram for streaming data


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/865eb9a5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/865eb9a5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/865eb9a5

Branch: refs/heads/4.x-HBase-0.98
Commit: 865eb9a5362a0273cb85f6370b4470f03102a05a
Parents: 0c107b7
Author: Vincent Poon 
Authored: Thu May 3 17:07:27 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 10:44:34 2018 -0700

--
 .../phoenix/util/EquiDepthStreamHistogram.java  | 453 +++
 .../util/EquiDepthStreamHistogramTest.java  | 303 +
 2 files changed, 756 insertions(+)
--
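Per the class javadoc quoted below, the intended usage is: stream values in with addValue(), then ask for equi-depth buckets whose boundaries can serve, for example, as index-table split points. A minimal usage sketch; the addValue(byte[]) overload and the List shape of computeBuckets()'s result are read off the javadoc, so check the committed source for the exact signatures:

    import java.util.List;

    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.phoenix.util.EquiDepthStreamHistogram;

    public class HistogramUsageSketch {
        public static void main(String[] args) {
            // 4 buckets; internally maxBars = 4 * DEFAULT_EXPANSION_FACTOR (7) = 28.
            EquiDepthStreamHistogram histo = new EquiDepthStreamHistogram(4);
            for (int i = 0; i < 100000; i++) {
                histo.addValue(Bytes.toBytes(i)); // byte[] signature assumed
            }
            // Each bucket should end up holding roughly 100000 / 4 = 25000 values;
            // the bucket bounds are the candidate region split keys.
            List<?> buckets = histo.computeBuckets();
            System.out.println(buckets.size());
        }
    }

The two-argument constructor exposes the accuracy/performance trade-off directly: a larger expansionFactor keeps more bars, so bucket depths come out more even at the cost of more work per value.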


http://git-wip-us.apache.org/repos/asf/phoenix/blob/865eb9a5/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java b/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
new file mode 100644
index 000..7649933
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
@@ -0,0 +1,453 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.util;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+/**
+ * Equi-Depth histogram based on http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf,
+ * but without the sliding window - we assume a single window over the entire data set.
+ *
+ * Used to generate the bucket boundaries of a histogram where each bucket has the same # of items.
+ * This is useful, for example, for pre-splitting an index table, by feeding in data from the indexed column.
+ * Works on streaming data - the histogram is dynamically updated for each new value.
+ *
+ * Add values by calling addValue(), then at the end computeBuckets() can be called to get
+ * the buckets with their bounds.
+ *
+ * Average time complexity: O(log(B x p) + (B x p)/T) = nearly constant
+ * B = number of buckets, p = expansion factor constant, T = # of values
+ *
+ * Space complexity: different from paper since here we keep the blocked bars but don't have expiration,
+ *  comes out to basically O(log(T))
+ */
+public class EquiDepthStreamHistogram {
+private static final Log LOG = LogFactory.getLog(EquiDepthStreamHistogram.class);
+
+// used in maxSize calculation for each bar
+private static final double MAX_COEF = 1.7;
+// higher expansion factor = better accuracy and worse performance
+private static final short DEFAULT_EXPANSION_FACTOR = 7;
+private int numBuckets;
+private int maxBars;
+@VisibleForTesting
+long totalCount; // number of values - i.e. count across all bars
+@VisibleForTesting
+List<Bar> bars;
+
+/**
+ * Create a new histogram
+ * @param numBuckets number of buckets, which can be used to get the splits
+ */
+public EquiDepthStreamHistogram(int numBuckets) {
+this(numBuckets, DEFAULT_EXPANSION_FACTOR);
+}
+
+/**
+ * @param numBuckets number of buckets
+ * @param expansionFactor number of bars = expansionFactor * numBuckets
+ * The more bars, the better the accuracy, at the cost of worse performance
+ */
+public EquiDepthStreamHistogram(int numBuckets, int expansionFactor) {
+this.numBuckets = numBuckets;
+this.maxBars = numBuckets * expansionFactor;
+this.bars = new 

phoenix git commit: PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 04af584ec -> 933dd0b80


PHOENIX-4724 Efficient Equi-Depth histogram for streaming data


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/933dd0b8
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/933dd0b8
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/933dd0b8

Branch: refs/heads/4.x-HBase-1.1
Commit: 933dd0b8048db56b33e7d750a69503ed857d85c9
Parents: 04af584
Author: Vincent Poon 
Authored: Thu May 3 17:07:27 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 10:44:11 2018 -0700

--
 .../phoenix/util/EquiDepthStreamHistogram.java  | 453 +++
 .../util/EquiDepthStreamHistogramTest.java  | 303 +
 2 files changed, 756 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/933dd0b8/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java b/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
new file mode 100644
index 000..7649933
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java
@@ -0,0 +1,453 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.util;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+/**
+ * Equi-Depth histogram based on http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf,
+ * but without the sliding window - we assume a single window over the entire data set.
+ *
+ * Used to generate the bucket boundaries of a histogram where each bucket has the same # of items.
+ * This is useful, for example, for pre-splitting an index table, by feeding in data from the indexed column.
+ * Works on streaming data - the histogram is dynamically updated for each new value.
+ *
+ * Add values by calling addValue(), then at the end computeBuckets() can be called to get
+ * the buckets with their bounds.
+ *
+ * Average time complexity: O(log(B x p) + (B x p)/T) = nearly constant
+ * B = number of buckets, p = expansion factor constant, T = # of values
+ *
+ * Space complexity: different from paper since here we keep the blocked bars but don't have expiration,
+ * comes out to basically O(log(T))
+ */
+public class EquiDepthStreamHistogram {
+    private static final Log LOG = LogFactory.getLog(EquiDepthStreamHistogram.class);
+
+    // used in maxSize calculation for each bar
+    private static final double MAX_COEF = 1.7;
+    // higher expansion factor = better accuracy and worse performance
+    private static final short DEFAULT_EXPANSION_FACTOR = 7;
+    private int numBuckets;
+    private int maxBars;
+    @VisibleForTesting
+    long totalCount; // number of values - i.e. count across all bars
+    @VisibleForTesting
+    List<Bar> bars;
+
+    /**
+     * Create a new histogram
+     * @param numBuckets number of buckets, which can be used to get the splits
+     */
+    public EquiDepthStreamHistogram(int numBuckets) {
+        this(numBuckets, DEFAULT_EXPANSION_FACTOR);
+    }
+
+    /**
+     * @param numBuckets number of buckets
+     * @param expansionFactor number of bars = expansionFactor * numBuckets
+     *        The more bars, the better the accuracy, at the cost of worse performance
+     */
+    public EquiDepthStreamHistogram(int numBuckets, int expansionFactor) {
+        this.numBuckets = numBuckets;
+        this.maxBars = numBuckets * expansionFactor;
+        this.bars = new 
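
None of the copies of this diff in the archive get past the constructor, so
the split/merge mechanics are not shown. The following self-contained sketch
over long values is a deliberately simplified stand-in - not the Phoenix
implementation - illustrating the role of the constants above: a bar is split
once its count exceeds MAX_COEF times the average bar size, so every bar (and
therefore every computed bucket) stays close to equal depth. Where this sketch
simply stops splitting at maxBars, the real class would instead need to merge
small adjacent bars to make room.

    import java.util.ArrayList;
    import java.util.List;

    // Simplified equi-depth stream histogram over longs (illustration only)
    public class EquiDepthSketch {
        static class Bar {
            long lower, upper; // inclusive value range covered by this bar
            long count;        // values assigned to this bar so far
            Bar(long lower, long upper) { this.lower = lower; this.upper = upper; }
        }

        static final double MAX_COEF = 1.7; // same role as in the class above

        private final int maxBars;
        private long totalCount;
        private final List<Bar> bars = new ArrayList<>();

        public EquiDepthSketch(int numBuckets, int expansionFactor) {
            this.maxBars = numBuckets * expansionFactor;
            this.bars.add(new Bar(Long.MIN_VALUE, Long.MAX_VALUE)); // one bar covers everything
        }

        public void addValue(long value) {
            Bar target = null;
            for (Bar bar : bars) { // bars stay sorted, so a binary search would also work
                if (value >= bar.lower && value <= bar.upper) {
                    target = bar;
                    break;
                }
            }
            target.count++;
            totalCount++;
            // Split any bar holding more than its fair share of the stream so far
            double maxBarSize = MAX_COEF * ((double) totalCount / maxBars);
            if (target.count > maxBarSize && bars.size() < maxBars
                    && target.lower < target.upper) {
                split(target);
            }
        }

        private void split(Bar bar) {
            long mid = bar.lower + ((bar.upper - bar.lower) >>> 1); // overflow-safe midpoint
            Bar left = new Bar(bar.lower, mid);
            Bar right = new Bar(mid + 1, bar.upper);
            left.count = bar.count / 2; // counts become estimates once a bar splits
            right.count = bar.count - left.count;
            int i = bars.indexOf(bar);
            bars.set(i, left);
            bars.add(i + 1, right);
        }
    }

This also makes the Javadoc's complexity claim concrete: locating the target
bar is a binary search over at most B x p bars (the log(B x p) term; the
linear scan above is only for brevity), while the occasional split or merge
touches O(B x p) state yet happens rarely enough to amortize to the (B x p)/T
term across T values. The final buckets can then be computed in a single pass
over the sorted bars, closing a bucket each time the running count passes
totalCount / numBuckets.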

phoenix git commit: PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 2d2c80108 -> f67b2b793


PHOENIX-4724 Efficient Equi-Depth histogram for streaming data


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f67b2b79
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f67b2b79
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f67b2b79

Branch: refs/heads/4.x-HBase-1.2
Commit: f67b2b793ad5f5fa853ebf7f3338ea1090ecaf67
Parents: 2d2c801
Author: Vincent Poon 
Authored: Thu May 3 17:07:27 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 10:42:30 2018 -0700

--
 .../phoenix/util/EquiDepthStreamHistogram.java  | 453 +++
 .../util/EquiDepthStreamHistogramTest.java  | 303 +
 2 files changed, 756 insertions(+)
--



phoenix git commit: PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 2015345a0 -> 5935edd71


PHOENIX-4724 Efficient Equi-Depth histogram for streaming data


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5935edd7
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5935edd7
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5935edd7

Branch: refs/heads/4.x-HBase-1.3
Commit: 5935edd71873f9ec766ffe35000e96d2e48d
Parents: 2015345
Author: Vincent Poon 
Authored: Thu May 3 17:07:27 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 10:41:30 2018 -0700

--
 .../phoenix/util/EquiDepthStreamHistogram.java  | 453 +++
 .../util/EquiDepthStreamHistogramTest.java  | 303 +
 2 files changed, 756 insertions(+)
--



phoenix git commit: PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/5.x-HBase-2.0 54d19ce1a -> 3d0c724c1


PHOENIX-4724 Efficient Equi-Depth histogram for streaming data


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3d0c724c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3d0c724c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3d0c724c

Branch: refs/heads/5.x-HBase-2.0
Commit: 3d0c724c13492f677f6361b07619772843948e73
Parents: 54d19ce
Author: Vincent Poon 
Authored: Thu May 3 17:07:27 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 10:41:04 2018 -0700

--
 .../phoenix/util/EquiDepthStreamHistogram.java  | 453 +++
 .../util/EquiDepthStreamHistogramTest.java  | 303 +
 2 files changed, 756 insertions(+)
--



phoenix git commit: PHOENIX-4724 Efficient Equi-Depth histogram for streaming data

2018-05-18 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/master b63b8e9e6 -> cb17adbbd


PHOENIX-4724 Efficient Equi-Depth histogram for streaming data


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cb17adbb
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cb17adbb
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cb17adbb

Branch: refs/heads/master
Commit: cb17adbbde56cacd43846ead2200e6606ed64ae8
Parents: b63b8e9
Author: Vincent Poon 
Authored: Thu May 3 17:07:27 2018 -0700
Committer: Vincent Poon 
Committed: Fri May 18 10:39:10 2018 -0700

--
 .../phoenix/util/EquiDepthStreamHistogram.java  | 453 +++
 .../util/EquiDepthStreamHistogramTest.java  | 303 +
 2 files changed, 756 insertions(+)
--



Build failed in Jenkins: Phoenix Compile Compatibility with HBase #640

2018-05-18 Thread Apache Jenkins Server
See 


--
[...truncated 37.15 KB...]
  symbol:   class HBaseRpcController
  location: class org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory
[ERROR] :[52,9] cannot find symbol
  symbol:   class HBaseRpcController
  location: class org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory
[ERROR] :[180,14] cannot find symbol
  symbol: class MetricRegistry
[ERROR] :[179,7] method does not override or implement a method from a supertype
[ERROR] :[454,78] cannot find symbol
  symbol: class HBaseRpcController
[ERROR] :[432,17] cannot find symbol
  symbol: class HBaseRpcController
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on project phoenix-core: Compilation failure: Compilation failure:
[ERROR] :[34,39] cannot find symbol
[ERROR]   symbol:   class MetricRegistry
[ERROR]   location: package org.apache.hadoop.hbase.metrics
[ERROR] :[144,16] cannot find symbol
[ERROR]   symbol:   class MetricRegistry
[ERROR]   location: class org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment
[ERROR] :[24,35] cannot find symbol
[ERROR]   symbol:   class DelegatingHBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] :[25,35] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] :[37,37] cannot find symbol
[ERROR]   symbol: class DelegatingHBaseRpcController
[ERROR] :[56,38] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class org.apache.hadoop.hbase.ipc.controller.MetadataRpcController
[ERROR] :[26,35] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] :[40,12] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR] :[46,12] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR]