Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #86

2016-06-23 Thread Apache Jenkins Server
See 

Changes:

[rajeshbabu] PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)

--
[...truncated 2081 lines...]
Caused by: java.lang.RuntimeException: Master not initialized after 20ms seconds
	at org.apache.phoenix.end2end.UserDefinedFunctionsIT.doSetup(UserDefinedFunctionsIT.java:249)

Running org.apache.phoenix.end2end.ViewIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.005 sec <<< FAILURE! - in org.apache.phoenix.end2end.ViewIT
org.apache.phoenix.end2end.ViewIT  Time elapsed: 0.005 sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
Caused by: java.io.IOException: Shutting down
Caused by: java.lang.RuntimeException: Master not initialized after 20ms seconds

Running org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT  Time elapsed: 0.004 sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)
Caused by: java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms seconds
	at org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)

Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
org.apache.phoenix.end2end.index.MutableIndexFailureIT  Time elapsed: 0.003 sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)
Caused by: java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms seconds
	at org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)

Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
org.apache.phoenix.end2end.index.MutableIndexReplicationIT  Time elapsed: 0.006 sec  <<< ERROR!
java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setupConfigsAndStartCluster(MutableIndexReplicationIT.java:170)
	at org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setUpBeforeClass(MutableIndexReplicationIT.java:108)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms seconds
	at org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setupConfigsAndStartCluster(MutableIndexReplicationIT.java:170)
	at org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setUpBeforeClass(MutableIndexReplicationIT.java:108)

Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.008 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT  Time elapsed: 0.006 sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)
Caused by: java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms seconds
	at org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)

Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< FAILURE! - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT  Time elapsed: 0.004 sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.txn.TxWriteFailureIT.doSetup(TxWriteFailureIT.java:86)
Caused by: java.io.IOException: Shutting down
	at org.apache.phoenix.end2end.index.txn.TxWriteFailureIT.doSetup(TxWriteFailureIT.java:86)
Caused 

Jenkins build is back to normal : Phoenix | Master #1292

2016-06-23 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1203

2016-06-23 Thread Apache Jenkins Server
See 

Changes:

[rajeshbabu] PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-6 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/4.x-HBase-0.98^{commit} # timeout=10
Checking out Revision 8cec07dca0c1804ee81a31d76131fec0388b911b (origin/4.x-HBase-0.98)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8cec07dca0c1804ee81a31d76131fec0388b911b
 > git rev-list 6e75b6aff59583517208031e2ed3117e705abdc5 # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-4.x-HBase-0.98] $ /bin/bash -xe /tmp/hudson264049854268803162.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
[Phoenix-4.x-HBase-0.98] $ /home/jenkins/tools/maven/apache-maven-3.0.4/bin/mvn -U clean install -Dcheckstyle.skip=true
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# 
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 1.23 GB of artifacts by 40.1% relative to #1119
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found but none of them are new. Did tests run?
For example, 

 is 5 days 23 hr old




phoenix git commit: PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)

2016-06-23 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 6e75b6aff -> 8cec07dca


PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8cec07dc
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8cec07dc
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8cec07dc

Branch: refs/heads/4.x-HBase-0.98
Commit: 8cec07dca0c1804ee81a31d76131fec0388b911b
Parents: 6e75b6a
Author: Rajeshbabu Chintaguntla 
Authored: Fri Jun 24 03:21:56 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Fri Jun 24 03:21:56 2016 +0530

--
 .../query/ConnectionQueryServicesImpl.java  | 199 +
 .../org/apache/phoenix/query/QueryServices.java |   1 +
 .../phoenix/query/QueryServicesOptions.java |   6 +-
 .../apache/phoenix/schema/MetaDataClient.java   |   2 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java |  11 +-
 .../org/apache/phoenix/util/UpgradeUtil.java| 219 +--
 6 files changed, 224 insertions(+), 214 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8cec07dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 7c767fa..c7a30a9 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -22,17 +22,7 @@ import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERSION;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_FAMILY;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DATA_TABLE_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.INDEX_TYPE;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_SCHEM;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TENANT_ID;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_ENABLED;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_THREAD_POOL_SIZE;
@@ -95,6 +85,7 @@ import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
 import org.apache.hadoop.hbase.ipc.ServerRpcController;
 import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto;
 import org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator;
+import org.apache.hadoop.hbase.regionserver.LocalIndexSplitter;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.ByteStringer;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -191,6 +182,7 @@ import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.Closeables;
 import org.apache.phoenix.util.ConfigUtil;
+import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.JDBCUtil;
 import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixContextExecutor;
@@ -2468,19 +2460,6 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 }

 if (currentServerSideTableTimeStamp < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0) {
-Properties props = PropertiesUtil.deepCopy(metaConnection.getClientInfo());
-props.remove(PhoenixRuntime.CURRENT_SCN_ATTRIB);
-props.remove(PhoenixRuntime.TENANT_ID_ATTRIB);
-PhoenixConnection conn =
-new 

phoenix git commit: PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)

2016-06-23 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.0 4de25dea4 -> 9c4ecad4c


PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9c4ecad4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9c4ecad4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9c4ecad4

Branch: refs/heads/4.x-HBase-1.0
Commit: 9c4ecad4c6d2e92be7c13ede18b6ab811065aae8
Parents: 4de25de
Author: Rajeshbabu Chintaguntla 
Authored: Fri Jun 24 03:09:48 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Fri Jun 24 03:09:48 2016 +0530

--
 .../phoenix/coprocessor/MetaDataProtocol.java   |   2 +-
 .../query/ConnectionQueryServicesImpl.java  | 212 ++
 .../org/apache/phoenix/query/QueryServices.java |   1 +
 .../phoenix/query/QueryServicesOptions.java |   6 +-
 .../apache/phoenix/schema/MetaDataClient.java   |   2 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java |  11 +-
 .../org/apache/phoenix/util/UpgradeUtil.java| 219 +--
 7 files changed, 237 insertions(+), 216 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9c4ecad4/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
index f847b97..df00f8f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
@@ -79,7 +79,7 @@ public abstract class MetaDataProtocol extends MetaDataService {
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_5_0 = MIN_TABLE_TIMESTAMP + 8;
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_6_0 = MIN_TABLE_TIMESTAMP + 9;
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_7_0 = MIN_TABLE_TIMESTAMP + 15;
-public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0 = MIN_TABLE_TIMESTAMP + 16;
+public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0 = MIN_TABLE_TIMESTAMP + 18;
 // MIN_SYSTEM_TABLE_TIMESTAMP needs to be set to the max of all the MIN_SYSTEM_TABLE_TIMESTAMP_* constants
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP = MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0;
 // TODO: pare this down to minimum, as we don't need duplicates for both table and column errors, nor should we need
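The one-line change in the hunk above bumps the 4.8.0 system-table watermark from MIN_TABLE_TIMESTAMP + 16 to + 18. The way such a watermark typically gates a one-time upgrade step can be sketched as follows. This is an illustration only, not Phoenix source: the base value of 0 is assumed, and `needsLocalIndexUpgrade` is a hypothetical helper name.

```java
// Minimal sketch (assumed values, hypothetical helper): monotonically
// increasing MIN_SYSTEM_TABLE_TIMESTAMP_* watermarks gate one-time upgrades.
public class UpgradeGateSketch {
    static final long MIN_TABLE_TIMESTAMP = 0L; // assumed base, for illustration
    static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_7_0 = MIN_TABLE_TIMESTAMP + 15;
    static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0 = MIN_TABLE_TIMESTAMP + 18;

    // Hypothetical helper: a server whose system-table timestamp is still
    // below the 4.8.0 watermark has not yet run the 4.8.0 upgrade path.
    static boolean needsLocalIndexUpgrade(long currentServerSideTableTimeStamp) {
        return currentServerSideTableTimeStamp < MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0;
    }

    public static void main(String[] args) {
        System.out.println(needsLocalIndexUpgrade(MIN_SYSTEM_TABLE_TIMESTAMP_4_7_0)); // prints true
        System.out.println(needsLocalIndexUpgrade(MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0)); // prints false
    }
}
```

Raising the watermark to + 18 rather than reusing + 16 presumably ensures that clusters already stamped with an earlier 4.8.0-SNAPSHOT value still fall below the new threshold and re-run the revised local-index upgrade logic.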

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9c4ecad4/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 24fa588..be73e5a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -21,17 +21,7 @@ import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERSION;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_FAMILY;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DATA_TABLE_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.INDEX_TYPE;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_SCHEM;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TENANT_ID;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_ENABLED;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_THREAD_POOL_SIZE;
@@ -94,6 +84,7 @@ import 

Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1202

2016-06-23 Thread Apache Jenkins Server
See 

Changes:

[ankitsinghal59] PHOENIX-2949 Fix estimated region size when checking for serial query

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-6 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/4.x-HBase-0.98^{commit} # timeout=10
Checking out Revision 6e75b6aff59583517208031e2ed3117e705abdc5 (origin/4.x-HBase-0.98)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6e75b6aff59583517208031e2ed3117e705abdc5
 > git rev-list bb8d7cd0de5d6c6ecadcf67fcc245bd1d07fa8c4 # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-4.x-HBase-0.98] $ /bin/bash -xe /tmp/hudson3782892876637018896.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
[Phoenix-4.x-HBase-0.98] $ /home/jenkins/tools/maven/apache-maven-3.0.4/bin/mvn -U clean install -Dcheckstyle.skip=true
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# 
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 1.23 GB of artifacts by 40.1% relative to #1119
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found but none of them are new. Did tests run?
For example, 

 is 5 days 22 hr old




phoenix git commit: PHOENIX-2949 Fix estimated region size when checking for serial query

2016-06-23 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 bb8d7cd0d -> 6e75b6aff


PHOENIX-2949 Fix estimated region size when checking for serial query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6e75b6af
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6e75b6af
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6e75b6af

Branch: refs/heads/4.x-HBase-0.98
Commit: 6e75b6aff59583517208031e2ed3117e705abdc5
Parents: bb8d7cd
Author: Ankit Singhal 
Authored: Thu Jun 23 13:57:52 2016 -0700
Committer: Ankit Singhal 
Committed: Thu Jun 23 13:57:52 2016 -0700

--
 .../org/apache/phoenix/execute/ScanPlan.java| 46 ++--
 .../org/apache/phoenix/query/QueryServices.java |  2 +-
 2 files changed, 25 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6e75b6af/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
index c55a1cc..0975b3f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
@@ -25,7 +25,7 @@ import java.sql.SQLException;
 import java.util.Collections;
 import java.util.List;

-import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.phoenix.compile.GroupByCompiler.GroupBy;
 import org.apache.phoenix.compile.OrderByCompiler.OrderBy;
@@ -62,7 +62,6 @@ import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.TableRef;
 import org.apache.phoenix.schema.stats.GuidePostsInfo;
 import org.apache.phoenix.schema.stats.PTableStats;
-import org.apache.phoenix.schema.stats.StatisticsUtil;
 import org.apache.phoenix.util.LogUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ScanUtil;
@@ -118,7 +117,7 @@ public class ScanPlan extends BaseQueryPlan {
 Scan scan = context.getScan();
 /*
  * If a limit is provided and we have no filter, run the scan serially when we estimate that
- * the limit's worth of data will fit into a single region.
+ * the limit's worth of data is less than the threshold bytes provided in QueryServices.QUERY_PARALLEL_LIMIT_THRESHOLD
  */
 Integer perScanLimit = !allowPageFilter ? null : limit;
 if (perScanLimit == null || scan.getFilter() != null) {
@@ -127,32 +126,35 @@ public class ScanPlan extends BaseQueryPlan {
 long scn = context.getConnection().getSCN() == null ? Long.MAX_VALUE : context.getConnection().getSCN();
 PTableStats tableStats = context.getConnection().getQueryServices().getTableStats(table.getName().getBytes(), scn);
 GuidePostsInfo gpsInfo = tableStats.getGuidePosts().get(SchemaUtil.getEmptyColumnFamily(table));
-long estRowSize = SchemaUtil.estimateRowSize(table);
-long estRegionSize;
+ConnectionQueryServices services = context.getConnection().getQueryServices();
+long estRowSize;
+long estimatedParallelThresholdBytes;
 if (gpsInfo == null) {
-// Use guidepost depth as minimum size
-ConnectionQueryServices services = context.getConnection().getQueryServices();
-HTableDescriptor desc = services.getTableDescriptor(table.getPhysicalName().getBytes());
-int guidepostPerRegion = services.getProps().getInt(QueryServices.STATS_GUIDEPOST_PER_REGION_ATTRIB, QueryServicesOptions.DEFAULT_STATS_GUIDEPOST_PER_REGION);
-long guidepostWidth = services.getProps().getLong(QueryServices.STATS_GUIDEPOST_WIDTH_BYTES_ATTRIB, QueryServicesOptions.DEFAULT_STATS_GUIDEPOST_WIDTH_BYTES);
-estRegionSize = StatisticsUtil.getGuidePostDepth(guidepostPerRegion, guidepostWidth, desc);
+estRowSize = SchemaUtil.estimateRowSize(table);
+estimatedParallelThresholdBytes = services.getProps().getLong(HConstants.HREGION_MAX_FILESIZE, HConstants.DEFAULT_MAX_FILE_SIZE);
 } else {
-// Region size estimated based on total number of bytes divided by number of regions
 long totByteSize = 0;
+long totRowCount = 0;
 for (long byteCount : gpsInfo.getByteCounts()) {
 totByteSize += byteCount;
 }
-estRegionSize = totByteSize / (gpsInfo.getGuidePostsCount()+1);
+for (long rowCount : 
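The hunk above (truncated in the archive) replaces the old per-region size estimate with a byte-threshold test. The decision it feeds can be sketched like this. Illustrative only, with assumed names and a hypothetical 20 MB threshold; it is not the actual Phoenix code.

```java
// Sketch of the PHOENIX-2949 idea, not Phoenix source: an unfiltered scan
// with a LIMIT runs serially only when the limit's worth of bytes stays
// under a configured threshold, instead of being compared to a region size.
public class SerialScanSketch {
    // Hypothetical helper mirroring the limit-rows * estimated-row-size check.
    static boolean runSerially(long limitRows, long estRowSizeBytes, long thresholdBytes) {
        return limitRows * estRowSizeBytes < thresholdBytes;
    }

    public static void main(String[] args) {
        long thresholdBytes = 20L * 1024 * 1024; // assumed 20 MB threshold
        System.out.println(runSerially(1_000, 100, thresholdBytes));     // small LIMIT: serial, prints true
        System.out.println(runSerially(1_000_000, 100, thresholdBytes)); // large LIMIT: parallel, prints false
    }
}
```

When guideposts exist, the diff sums their byte and row counts to derive the average row size; without guideposts it falls back to `SchemaUtil.estimateRowSize` and uses the HBase max-region-file size as the threshold.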

phoenix git commit: PHOENIX-2949 Fix estimated region size when checking for serial query

2016-06-23 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.0 88c380925 -> 4de25dea4


PHOENIX-2949 Fix estimated region size when checking for serial query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4de25dea
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4de25dea
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4de25dea

Branch: refs/heads/4.x-HBase-1.0
Commit: 4de25dea48c95c0abc810b2678dbdbea0d86e1ab
Parents: 88c3809
Author: Ankit Singhal 
Authored: Thu Jun 23 13:56:21 2016 -0700
Committer: Ankit Singhal 
Committed: Thu Jun 23 13:56:21 2016 -0700

--
 .../org/apache/phoenix/execute/ScanPlan.java| 46 ++--
 .../org/apache/phoenix/query/QueryServices.java |  2 +-
 2 files changed, 25 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4de25dea/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
index c55a1cc..0975b3f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
@@ -25,7 +25,7 @@ import java.sql.SQLException;
 import java.util.Collections;
 import java.util.List;

-import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.phoenix.compile.GroupByCompiler.GroupBy;
 import org.apache.phoenix.compile.OrderByCompiler.OrderBy;
@@ -62,7 +62,6 @@ import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.TableRef;
 import org.apache.phoenix.schema.stats.GuidePostsInfo;
 import org.apache.phoenix.schema.stats.PTableStats;
-import org.apache.phoenix.schema.stats.StatisticsUtil;
 import org.apache.phoenix.util.LogUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ScanUtil;
@@ -118,7 +117,7 @@ public class ScanPlan extends BaseQueryPlan {
 Scan scan = context.getScan();
 /*
  * If a limit is provided and we have no filter, run the scan serially when we estimate that
- * the limit's worth of data will fit into a single region.
+ * the limit's worth of data is less than the threshold bytes provided in QueryServices.QUERY_PARALLEL_LIMIT_THRESHOLD
  */
 Integer perScanLimit = !allowPageFilter ? null : limit;
 if (perScanLimit == null || scan.getFilter() != null) {
@@ -127,32 +126,35 @@ public class ScanPlan extends BaseQueryPlan {
 long scn = context.getConnection().getSCN() == null ? Long.MAX_VALUE : context.getConnection().getSCN();
 PTableStats tableStats = context.getConnection().getQueryServices().getTableStats(table.getName().getBytes(), scn);
 GuidePostsInfo gpsInfo = tableStats.getGuidePosts().get(SchemaUtil.getEmptyColumnFamily(table));
-long estRowSize = SchemaUtil.estimateRowSize(table);
-long estRegionSize;
+ConnectionQueryServices services = context.getConnection().getQueryServices();
+long estRowSize;
+long estimatedParallelThresholdBytes;
 if (gpsInfo == null) {
-// Use guidepost depth as minimum size
-ConnectionQueryServices services = context.getConnection().getQueryServices();
-HTableDescriptor desc = services.getTableDescriptor(table.getPhysicalName().getBytes());
-int guidepostPerRegion = services.getProps().getInt(QueryServices.STATS_GUIDEPOST_PER_REGION_ATTRIB, QueryServicesOptions.DEFAULT_STATS_GUIDEPOST_PER_REGION);
-long guidepostWidth = services.getProps().getLong(QueryServices.STATS_GUIDEPOST_WIDTH_BYTES_ATTRIB, QueryServicesOptions.DEFAULT_STATS_GUIDEPOST_WIDTH_BYTES);
-estRegionSize = StatisticsUtil.getGuidePostDepth(guidepostPerRegion, guidepostWidth, desc);
+estRowSize = SchemaUtil.estimateRowSize(table);
+estimatedParallelThresholdBytes = services.getProps().getLong(HConstants.HREGION_MAX_FILESIZE, HConstants.DEFAULT_MAX_FILE_SIZE);
 } else {
-// Region size estimated based on total number of bytes divided by number of regions
 long totByteSize = 0;
+long totRowCount = 0;
 for (long byteCount : gpsInfo.getByteCounts()) {
 totByteSize += byteCount;
 }
-estRegionSize = totByteSize / (gpsInfo.getGuidePostsCount()+1);
+for (long rowCount : 

Build failed in Jenkins: Phoenix | Master #1291

2016-06-23 Thread Apache Jenkins Server
See 

Changes:

[rajeshbabu] PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-6 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 8a72032e20ff6d3ab4457ad50be2bef2dfb124d9 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8a72032e20ff6d3ab4457ad50be2bef2dfb124d9
 > git rev-list 3e69b90d8666ffb8017cf5537d987522ef485d57 # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-master] $ /bin/bash -xe /tmp/hudson1039191050996375795.sh
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
+ ls /home/jenkins/.m2/repository/org/apache/hbase
hbase
hbase-annotations
hbase-archetype-builder
hbase-archetypes
hbase-assembly
hbase-checkstyle
hbase-client
hbase-client-project
hbase-client-project-archetype
hbase-common
hbase-examples
hbase-external-blockcache
hbase-hadoop1-compat
hbase-hadoop2-compat
hbase-hadoop-compat
hbase-it
hbase-prefix-tree
hbase-procedure
hbase-protocol
hbase-resource-bundle
hbase-rest
hbase-rsgroup
hbase-server
hbase-shaded
hbase-shaded-client
hbase-shaded-client-project
hbase-shaded-client-project-archetype
hbase-shaded-server
hbase-shell
hbase-spark
hbase-testing-util
hbase-thrift
[Phoenix-master] $ /home/jenkins/tools/maven/apache-maven-3.0.4/bin/mvn -U clean install -Dcheckstyle.skip=true
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# 
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 576.23 MB of artifacts by 70.6% relative to #1290
Updating PHOENIX-3002
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found but none of them are new. Did tests run?
For example, 

 is 2 days 18 hr old




phoenix git commit: PHOENIX-2949 Fix estimated region size when checking for serial query

2016-06-23 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 18a47b051 -> 2f10476a8


PHOENIX-2949 Fix estimated region size when checking for serial query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2f10476a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2f10476a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2f10476a

Branch: refs/heads/4.x-HBase-1.1
Commit: 2f10476a8e75fe1c831729ef1ce18f0dbbddeb2c
Parents: 18a47b0
Author: Ankit Singhal 
Authored: Thu Jun 23 13:54:55 2016 -0700
Committer: Ankit Singhal 
Committed: Thu Jun 23 13:54:55 2016 -0700

--
 .../org/apache/phoenix/execute/ScanPlan.java| 46 ++--
 .../org/apache/phoenix/query/QueryServices.java |  2 +-
 2 files changed, 25 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2f10476a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
index c55a1cc..0975b3f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
@@ -25,7 +25,7 @@ import java.sql.SQLException;
 import java.util.Collections;
 import java.util.List;

-import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.phoenix.compile.GroupByCompiler.GroupBy;
 import org.apache.phoenix.compile.OrderByCompiler.OrderBy;
@@ -62,7 +62,6 @@ import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.TableRef;
 import org.apache.phoenix.schema.stats.GuidePostsInfo;
 import org.apache.phoenix.schema.stats.PTableStats;
-import org.apache.phoenix.schema.stats.StatisticsUtil;
 import org.apache.phoenix.util.LogUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ScanUtil;
@@ -118,7 +117,7 @@ public class ScanPlan extends BaseQueryPlan {
 Scan scan = context.getScan();
 /*
  * If a limit is provided and we have no filter, run the scan serially when we estimate that
- * the limit's worth of data will fit into a single region.
+ * the limit's worth of data is less than the threshold bytes provided in QueryServices.QUERY_PARALLEL_LIMIT_THRESHOLD
  */
 Integer perScanLimit = !allowPageFilter ? null : limit;
 if (perScanLimit == null || scan.getFilter() != null) {
@@ -127,32 +126,35 @@ public class ScanPlan extends BaseQueryPlan {
 long scn = context.getConnection().getSCN() == null ? Long.MAX_VALUE : context.getConnection().getSCN();
 PTableStats tableStats = context.getConnection().getQueryServices().getTableStats(table.getName().getBytes(), scn);
 GuidePostsInfo gpsInfo = tableStats.getGuidePosts().get(SchemaUtil.getEmptyColumnFamily(table));
-long estRowSize = SchemaUtil.estimateRowSize(table);
-long estRegionSize;
+ConnectionQueryServices services = context.getConnection().getQueryServices();
+long estRowSize;
+long estimatedParallelThresholdBytes;
 if (gpsInfo == null) {
-// Use guidepost depth as minimum size
-ConnectionQueryServices services = 
context.getConnection().getQueryServices();
-HTableDescriptor desc = 
services.getTableDescriptor(table.getPhysicalName().getBytes());
-int guidepostPerRegion = 
services.getProps().getInt(QueryServices.STATS_GUIDEPOST_PER_REGION_ATTRIB,
-QueryServicesOptions.DEFAULT_STATS_GUIDEPOST_PER_REGION);
-long guidepostWidth = 
services.getProps().getLong(QueryServices.STATS_GUIDEPOST_WIDTH_BYTES_ATTRIB,
-QueryServicesOptions.DEFAULT_STATS_GUIDEPOST_WIDTH_BYTES);
-estRegionSize = 
StatisticsUtil.getGuidePostDepth(guidepostPerRegion, guidepostWidth, desc);
+estRowSize = SchemaUtil.estimateRowSize(table);
+estimatedParallelThresholdBytes = 
services.getProps().getLong(HConstants.HREGION_MAX_FILESIZE,
+HConstants.DEFAULT_MAX_FILE_SIZE);
 } else {
-// Region size estimated based on total number of bytes divided by 
number of regions
 long totByteSize = 0;
+long totRowCount = 0;
 for (long byteCount : gpsInfo.getByteCounts()) {
 totByteSize += byteCount;
 }
-estRegionSize = totByteSize / (gpsInfo.getGuidePostsCount()+1);
+for (long rowCount : 

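[Editor's note] The PHOENIX-2949 diff above switches ScanPlan from comparing the limit's worth of data against an estimated region size to comparing it against a configurable byte threshold (falling back to HBase's max region file size when no guideposts exist). A minimal sketch of that decision, with hypothetical names standing in for the Phoenix internals:

```java
// Hedged sketch of the serial-vs-parallel decision PHOENIX-2949 moves to a
// byte threshold. Class and method names here are illustrative, not Phoenix's API.
public class SerialScanSketch {
    // Stand-in for HConstants.DEFAULT_MAX_FILE_SIZE (10 GB in HBase 1.x),
    // the fallback threshold when no guidepost stats are available.
    static final long DEFAULT_THRESHOLD_BYTES = 10L * 1024 * 1024 * 1024;

    /** Run serially only when the limit's worth of rows fits under the threshold. */
    static boolean isSerial(Integer perScanLimit, boolean hasFilter,
                            long estRowSize, long thresholdBytes) {
        if (perScanLimit == null || hasFilter) {
            return false; // no limit, or a filter present: parallelize
        }
        return (long) perScanLimit * estRowSize < thresholdBytes;
    }

    public static void main(String[] args) {
        // A LIMIT 100 scan of ~1 KB rows is far below a 10 GB threshold.
        System.out.println(isSerial(100, false, 1024L, DEFAULT_THRESHOLD_BYTES)); // true
        // No limit: never serial.
        System.out.println(isSerial(null, false, 1024L, DEFAULT_THRESHOLD_BYTES)); // false
    }
}
```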
phoenix git commit: PHOENIX-2949 Fix estimated region size when checking for serial query

2016-06-23 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/master 8a72032e2 -> a44387358


PHOENIX-2949 Fix estimated region size when checking for serial query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a4438735
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a4438735
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a4438735

Branch: refs/heads/master
Commit: a44387358d6b58b77358a42f38c5baac9e2ab527
Parents: 8a72032
Author: Ankit Singhal 
Authored: Thu Jun 23 13:54:33 2016 -0700
Committer: Ankit Singhal 
Committed: Thu Jun 23 13:54:33 2016 -0700

--
 .../org/apache/phoenix/execute/ScanPlan.java| 46 ++--
 .../org/apache/phoenix/query/QueryServices.java |  2 +-
 2 files changed, 25 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a4438735/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java 
b/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
index c55a1cc..0975b3f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
@@ -25,7 +25,7 @@ import java.sql.SQLException;
 import java.util.Collections;
 import java.util.List;
 
-import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.phoenix.compile.GroupByCompiler.GroupBy;
 import org.apache.phoenix.compile.OrderByCompiler.OrderBy;
@@ -62,7 +62,6 @@ import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.TableRef;
 import org.apache.phoenix.schema.stats.GuidePostsInfo;
 import org.apache.phoenix.schema.stats.PTableStats;
-import org.apache.phoenix.schema.stats.StatisticsUtil;
 import org.apache.phoenix.util.LogUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ScanUtil;
@@ -118,7 +117,7 @@ public class ScanPlan extends BaseQueryPlan {
 Scan scan = context.getScan();
 /*
  * If a limit is provided and we have no filter, run the scan serially 
when we estimate that
- * the limit's worth of data will fit into a single region.
+ * the limit's worth of data is less than the threshold bytes provided 
in QueryServices.QUERY_PARALLEL_LIMIT_THRESHOLD
  */
 Integer perScanLimit = !allowPageFilter ? null : limit;
 if (perScanLimit == null || scan.getFilter() != null) {
@@ -127,32 +126,35 @@ public class ScanPlan extends BaseQueryPlan {
 long scn = context.getConnection().getSCN() == null ? Long.MAX_VALUE : 
context.getConnection().getSCN();
 PTableStats tableStats = 
context.getConnection().getQueryServices().getTableStats(table.getName().getBytes(),
 scn);
 GuidePostsInfo gpsInfo = 
tableStats.getGuidePosts().get(SchemaUtil.getEmptyColumnFamily(table));
-long estRowSize = SchemaUtil.estimateRowSize(table);
-long estRegionSize;
+ConnectionQueryServices services = 
context.getConnection().getQueryServices();
+long estRowSize;
+long estimatedParallelThresholdBytes;
 if (gpsInfo == null) {
-// Use guidepost depth as minimum size
-ConnectionQueryServices services = 
context.getConnection().getQueryServices();
-HTableDescriptor desc = 
services.getTableDescriptor(table.getPhysicalName().getBytes());
-int guidepostPerRegion = 
services.getProps().getInt(QueryServices.STATS_GUIDEPOST_PER_REGION_ATTRIB,
-QueryServicesOptions.DEFAULT_STATS_GUIDEPOST_PER_REGION);
-long guidepostWidth = 
services.getProps().getLong(QueryServices.STATS_GUIDEPOST_WIDTH_BYTES_ATTRIB,
-QueryServicesOptions.DEFAULT_STATS_GUIDEPOST_WIDTH_BYTES);
-estRegionSize = 
StatisticsUtil.getGuidePostDepth(guidepostPerRegion, guidepostWidth, desc);
+estRowSize = SchemaUtil.estimateRowSize(table);
+estimatedParallelThresholdBytes = 
services.getProps().getLong(HConstants.HREGION_MAX_FILESIZE,
+HConstants.DEFAULT_MAX_FILE_SIZE);
 } else {
-// Region size estimated based on total number of bytes divided by 
number of regions
 long totByteSize = 0;
+long totRowCount = 0;
 for (long byteCount : gpsInfo.getByteCounts()) {
 totByteSize += byteCount;
 }
-estRegionSize = totByteSize / (gpsInfo.getGuidePostsCount()+1);
+for (long rowCount : gpsInfo.getRowCounts()) {
+ 

phoenix git commit: PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)

2016-06-23 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 61ea48c16 -> 18a47b051


PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/18a47b05
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/18a47b05
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/18a47b05

Branch: refs/heads/4.x-HBase-1.1
Commit: 18a47b05146755a93b1e49ab9b22f49a4229439b
Parents: 61ea48c
Author: Rajeshbabu Chintaguntla 
Authored: Fri Jun 24 02:27:57 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Fri Jun 24 02:27:57 2016 +0530

--
 .../query/ConnectionQueryServicesImpl.java  | 199 +
 .../org/apache/phoenix/query/QueryServices.java |   2 +-
 .../phoenix/query/QueryServicesOptions.java |   6 +-
 .../apache/phoenix/schema/MetaDataClient.java   |   2 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java |  11 +-
 .../org/apache/phoenix/util/UpgradeUtil.java| 219 +--
 6 files changed, 224 insertions(+), 215 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/18a47b05/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 5c17eb0..0a5c4f2 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -21,17 +21,7 @@ import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERSION;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_FAMILY;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DATA_TABLE_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.INDEX_TYPE;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_SCHEM;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TENANT_ID;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_ENABLED;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_THREAD_POOL_SIZE;
@@ -94,6 +84,7 @@ import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
 import org.apache.hadoop.hbase.ipc.ServerRpcController;
 import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto;
 import org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator;
+import org.apache.hadoop.hbase.regionserver.LocalIndexSplitter;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.ByteStringer;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -190,6 +181,7 @@ import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.Closeables;
 import org.apache.phoenix.util.ConfigUtil;
+import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.JDBCUtil;
 import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixContextExecutor;
@@ -2472,19 +2464,6 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 }
 
 if (currentServerSideTableTimeStamp < 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0) {
-Properties props = 
PropertiesUtil.deepCopy(metaConnection.getClientInfo());
-
props.remove(PhoenixRuntime.CURRENT_SCN_ATTRIB);
-
props.remove(PhoenixRuntime.TENANT_ID_ATTRIB);
-PhoenixConnection conn =
-

phoenix git commit: PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)

2016-06-23 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/master 3e69b90d8 -> 8a72032e2


PHOENIX-3002 Upgrading to 4.8 doesn't recreate local indexes(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8a72032e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8a72032e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8a72032e

Branch: refs/heads/master
Commit: 8a72032e20ff6d3ab4457ad50be2bef2dfb124d9
Parents: 3e69b90
Author: Rajeshbabu Chintaguntla 
Authored: Fri Jun 24 02:25:49 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Fri Jun 24 02:25:49 2016 +0530

--
 .../query/ConnectionQueryServicesImpl.java  | 199 +
 .../org/apache/phoenix/query/QueryServices.java |   2 +-
 .../phoenix/query/QueryServicesOptions.java |   6 +-
 .../apache/phoenix/schema/MetaDataClient.java   |   2 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java |  11 +-
 .../org/apache/phoenix/util/UpgradeUtil.java| 219 +--
 6 files changed, 224 insertions(+), 215 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8a72032e/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 536e450..00d2088 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -21,17 +21,7 @@ import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERSION;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_FAMILY;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DATA_TABLE_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.INDEX_TYPE;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_SCHEM;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TENANT_ID;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_ENABLED;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_THREAD_POOL_SIZE;
@@ -94,6 +84,7 @@ import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
 import org.apache.hadoop.hbase.ipc.ServerRpcController;
 import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto;
 import org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator;
+import org.apache.hadoop.hbase.regionserver.LocalIndexSplitter;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.ByteStringer;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -190,6 +181,7 @@ import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.Closeables;
 import org.apache.phoenix.util.ConfigUtil;
+import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.JDBCUtil;
 import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixContextExecutor;
@@ -2472,19 +2464,6 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 }
 
 if (currentServerSideTableTimeStamp < 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0) {
-Properties props = 
PropertiesUtil.deepCopy(metaConnection.getClientInfo());
-
props.remove(PhoenixRuntime.CURRENT_SCN_ATTRIB);
-
props.remove(PhoenixRuntime.TENANT_ID_ATTRIB);
-PhoenixConnection conn =
-new 

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #85

2016-06-23 Thread Apache Jenkins Server
See 

Changes:

[ssa] PHOENIX-3020 Bulk load tool is not working with new jars

--
[...truncated 2081 lines...]
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.UserDefinedFunctionsIT.doSetup(UserDefinedFunctionsIT.java:249)

Running org.apache.phoenix.end2end.ViewIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.ViewIT
org.apache.phoenix.end2end.ViewIT  Time elapsed: 0.006 sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
Caused by: java.io.IOException: Shutting down
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds

Running org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.005 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT  Time elapsed: 0.005 
sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)
Caused by: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT.doSetup(ImmutableIndexWithStatsIT.java:52)

Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
org.apache.phoenix.end2end.index.MutableIndexFailureIT  Time elapsed: 0.006 sec 
 <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)
Caused by: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.doSetup(MutableIndexFailureIT.java:115)

Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
org.apache.phoenix.end2end.index.MutableIndexReplicationIT  Time elapsed: 0.003 
sec  <<< ERROR!
java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setupConfigsAndStartCluster(MutableIndexReplicationIT.java:170)
at 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setUpBeforeClass(MutableIndexReplicationIT.java:108)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setupConfigsAndStartCluster(MutableIndexReplicationIT.java:170)
at 
org.apache.phoenix.end2end.index.MutableIndexReplicationIT.setUpBeforeClass(MutableIndexReplicationIT.java:108)

Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT  Time elapsed: 0.003 
sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)
Caused by: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.doSetup(ReadOnlyIndexFailureIT.java:119)

Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT  Time elapsed: 0.004 sec  
<<< ERROR!
java.lang.RuntimeException: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT.doSetup(TxWriteFailureIT.java:86)
Caused by: java.io.IOException: Shutting down
at 
org.apache.phoenix.end2end.index.txn.TxWriteFailureIT.doSetup(TxWriteFailureIT.java:86)
Caused by: 

Jenkins build is back to normal : Phoenix | Master #1290

2016-06-23 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Phoenix-4.x-HBase-1.0 #533

2016-06-23 Thread Apache Jenkins Server
See 

Changes:

[ssa] PHOENIX-3020 Bulk load tool is not working with new jars

--
[...truncated 696 lines...]
Running org.apache.phoenix.tx.TransactionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 83.557 sec - in 
org.apache.phoenix.trace.PhoenixTracingEndToEndIT
Running org.apache.phoenix.tx.TxCheckpointIT
Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 411.145 sec 
<<< FAILURE! - in org.apache.phoenix.end2end.index.LocalIndexIT
testLocalIndexScan[isNamespaceMapped = 
false](org.apache.phoenix.end2end.index.LocalIndexIT)  Time elapsed: 12.665 sec 
 <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<2>
at 
org.apache.phoenix.end2end.index.LocalIndexIT.testLocalIndexScan(LocalIndexIT.java:361)

Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 159.993 sec - 
in org.apache.phoenix.tx.TransactionIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.34 sec - in 
org.apache.phoenix.tx.TxCheckpointIT
Tests run: 128, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 521.127 sec - 
in org.apache.phoenix.end2end.index.IndexIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 581.457 sec - 
in org.apache.phoenix.end2end.index.MutableIndexIT

Results :

Failed tests: 
  LocalIndexIT.testLocalIndexScan:361 expected:<1> but was:<2>

Tests run: 1189, Failures: 1, Errors: 0, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.177 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.067 sec - in 
org.apache.phoenix.end2end.AlterSessionIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.458 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.486 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.62 sec - in 
org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.185 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.091 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.018 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.525 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.512 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.152 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.819 sec - 
in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.419 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.491 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.76 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.145 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.DistinctPrefixFilterIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.338 sec - in 

Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1201

2016-06-23 Thread Apache Jenkins Server
See 

Changes:

[ssa] PHOENIX-3020 Bulk load tool is not working with new jars

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-6 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from 
https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/4.x-HBase-0.98^{commit} # timeout=10
Checking out Revision bb8d7cd0de5d6c6ecadcf67fcc245bd1d07fa8c4 
(origin/4.x-HBase-0.98)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f bb8d7cd0de5d6c6ecadcf67fcc245bd1d07fa8c4
 > git rev-list d2f2bade045cc8922f967a2594942cf077e9ef87 # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-4.x-HBase-0.98] $ /bin/bash -xe /tmp/hudson8752840134576169709.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See 
https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See 
https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
[Phoenix-4.x-HBase-0.98] $ /home/jenkins/tools/maven/apache-maven-3.0.4/bin/mvn 
-U clean install -Dcheckstyle.skip=true
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# 
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 1.23 GB of artifacts by 40.1% relative to #1119
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did tests run? 
For example, 

 is 5 days 8 hr old




[3/4] phoenix git commit: PHOENIX-3020 Bulk load tool is not working with new jars

2016-06-23 Thread ssa
PHOENIX-3020 Bulk load tool is not working with new jars


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/88c38092
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/88c38092
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/88c38092

Branch: refs/heads/4.x-HBase-1.0
Commit: 88c38092532aa3b1bdc5fc9f2854a47bdf717a16
Parents: a5caaeb
Author: Sergey Soldatov 
Authored: Wed Jun 22 23:45:54 2016 -0700
Committer: Sergey Soldatov 
Committed: Wed Jun 22 23:46:45 2016 -0700

--
 phoenix-client/pom.xml | 36 +---
 phoenix-queryserver-client/pom.xml | 32 
 phoenix-queryserver/pom.xml| 30 ++
 phoenix-server/pom.xml | 37 -
 4 files changed, 104 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/88c38092/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 90f94b7..702bc7f 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -98,18 +98,6 @@
   
true
   false
   
-
-
-
-  
-LICENSE.txt
-ASL2.0
-  
-
-
-  false
-
 
 
@@ -126,7 +114,12 @@
 
   LICENSE.txt
-  ${project.basedir}/../LICENSE.txt
+  ${project.basedir}/../LICENSE
+
+
+  NOTICE
+  ${project.basedir}/../NOTICE
 
   
   
@@ -137,6 +130,19 @@
   org.apache.phoenix:phoenix-client
 
   
+  
+
+  *:*
+  
+META-INF/*.SF
+META-INF/*.DSA
+META-INF/*.RSA
+META-INF/license/*
+LICENSE.*
+NOTICE.*
+  
+
+  
 
   
 
@@ -172,10 +178,6 @@
   
${shaded.package}.com.thoughtworks
 
 
-  com.sun.jersey
-  
${shaded.package}.com.sun.jersey
-
-
   com.yammer
   ${shaded.package}.com.yammer
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/88c38092/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml 
b/phoenix-queryserver-client/pom.xml
index 8f64746..cb8bc11 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -72,6 +72,38 @@
 
 
   phoenix-${project.version}-thin-client
+
+  
+
+  README.md
+  ${project.basedir}/../README.md
+
+
+  LICENSE.txt
+  ${project.basedir}/../LICENSE
+
+
+  NOTICE
+  ${project.basedir}/../NOTICE
+
+  
+  
+
+  *:*
+  
+META-INF/*.SF
+META-INF/*.DSA
+META-INF/*.RSA
+META-INF/license/*
+LICENSE.*
+NOTICE.*
+  
+
+  
+
   
 
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/88c38092/phoenix-queryserver/pom.xml
--
diff --git a/phoenix-queryserver/pom.xml b/phoenix-queryserver/pom.xml
index 0be245b..8921133 100644
--- a/phoenix-queryserver/pom.xml
+++ b/phoenix-queryserver/pom.xml
@@ -63,6 +63,23 @@
   false
   
true
   false
+  
+
+  README.md
+  ${project.basedir}/../README.md
+
+
+  LICENSE.txt
+  ${project.basedir}/../LICENSE
+
+
+  NOTICE
+  ${project.basedir}/../NOTICE
+
+  
   
 
   org.apache.calcite.avatica:*
@@ -70,6 +87,19 @@
   

[4/4] phoenix git commit: PHOENIX-3020 Bulk load tool is not working with new jars

2016-06-23 Thread ssa
PHOENIX-3020 Bulk load tool is not working with new jars


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/61ea48c1
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/61ea48c1
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/61ea48c1

Branch: refs/heads/4.x-HBase-1.1
Commit: 61ea48c16f78b41949cef436d56d0714834d0a76
Parents: 7b9c349
Author: Sergey Soldatov 
Authored: Wed Jun 22 23:45:54 2016 -0700
Committer: Sergey Soldatov 
Committed: Wed Jun 22 23:46:57 2016 -0700

--
 phoenix-client/pom.xml | 36 +---
 phoenix-queryserver-client/pom.xml | 32 
 phoenix-queryserver/pom.xml| 30 ++
 phoenix-server/pom.xml | 37 -
 4 files changed, 104 insertions(+), 31 deletions(-)
--
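The hunks in this message are hard to read because the archive has stripped the XML tags out of the diff bodies. Assembled in one place, the change each of these four branch commits makes is a standard maven-shade-plugin resource-bundling and filtering block. The sketch below is a reconstruction, not the verbatim committed text: element names are taken from the maven-shade-plugin documentation, and the exact placement and indentation inside the POM are assumed.

```xml
<!-- Reconstruction (assumed placement): bundle the top-level LICENSE and
     NOTICE into the shaded jar under their conventional entry names, and
     strip stale signature files and per-artifact license files that were
     copied in from signed dependencies. -->
<transformers>
  <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
    <resource>LICENSE.txt</resource>
    <file>${project.basedir}/../LICENSE</file>
  </transformer>
  <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
    <resource>NOTICE</resource>
    <file>${project.basedir}/../NOTICE</file>
  </transformer>
</transformers>
<filters>
  <filter>
    <artifact>*:*</artifact>
    <excludes>
      <exclude>META-INF/*.SF</exclude>
      <exclude>META-INF/*.DSA</exclude>
      <exclude>META-INF/*.RSA</exclude>
      <exclude>META-INF/license/*</exclude>
      <exclude>LICENSE.*</exclude>
      <exclude>NOTICE.*</exclude>
    </excludes>
  </filter>
</filters>
```

Excluding `META-INF/*.SF`, `*.DSA`, and `*.RSA` matters because signature entries carried over from signed dependencies no longer match the repackaged class files, so the JVM rejects the uber-jar with a signature-verification error; that is a plausible reason the bulk load tool failed when run from the new jars.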


http://git-wip-us.apache.org/repos/asf/phoenix/blob/61ea48c1/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 593e505..e4b1b73 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -98,18 +98,6 @@
   
true
   false
   
-
-
-
-  
-LICENSE.txt
-ASL2.0
-  
-
-
-  false
-
 
 
@@ -126,7 +114,12 @@
 
               <resource>LICENSE.txt</resource>
-              <file>${project.basedir}/../LICENSE.txt</file>
+              <file>${project.basedir}/../LICENSE</file>
+            </transformer>
+            <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+              <resource>NOTICE</resource>
+              <file>${project.basedir}/../NOTICE</file>
 
   
   
@@ -137,6 +130,19 @@
                 <include>org.apache.phoenix:phoenix-client</include>
               </includes>
             </artifactSet>
+            <filters>
+              <filter>
+                <artifact>*:*</artifact>
+                <excludes>
+                  <exclude>META-INF/*.SF</exclude>
+                  <exclude>META-INF/*.DSA</exclude>
+                  <exclude>META-INF/*.RSA</exclude>
+                  <exclude>META-INF/license/*</exclude>
+                  <exclude>LICENSE.*</exclude>
+                  <exclude>NOTICE.*</exclude>
+                </excludes>
+              </filter>
+            </filters>
 
   
 
@@ -172,10 +178,6 @@
   
               <shadedPattern>${shaded.package}.com.thoughtworks</shadedPattern>
             </relocation>
             <relocation>
-              <pattern>com.sun.jersey</pattern>
-              <shadedPattern>${shaded.package}.com.sun.jersey</shadedPattern>
-            </relocation>
-            <relocation>
               <pattern>com.yammer</pattern>
               <shadedPattern>${shaded.package}.com.yammer</shadedPattern>
             </relocation>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/61ea48c1/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml 
b/phoenix-queryserver-client/pom.xml
index 380f269..2680574 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -72,6 +72,38 @@
 
 
               <finalName>phoenix-${project.version}-thin-client</finalName>
+              <transformers>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>README.md</resource>
+                  <file>${project.basedir}/../README.md</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>LICENSE.txt</resource>
+                  <file>${project.basedir}/../LICENSE</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>NOTICE</resource>
+                  <file>${project.basedir}/../NOTICE</file>
+                </transformer>
+              </transformers>
+              <filters>
+                <filter>
+                  <artifact>*:*</artifact>
+                  <excludes>
+                    <exclude>META-INF/*.SF</exclude>
+                    <exclude>META-INF/*.DSA</exclude>
+                    <exclude>META-INF/*.RSA</exclude>
+                    <exclude>META-INF/license/*</exclude>
+                    <exclude>LICENSE.*</exclude>
+                    <exclude>NOTICE.*</exclude>
+                  </excludes>
+                </filter>
+              </filters>
   
 
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/61ea48c1/phoenix-queryserver/pom.xml
--
diff --git a/phoenix-queryserver/pom.xml b/phoenix-queryserver/pom.xml
index 56f1a8a..d504503 100644
--- a/phoenix-queryserver/pom.xml
+++ b/phoenix-queryserver/pom.xml
@@ -63,6 +63,23 @@
   false
   
true
   false
+              <transformers>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>README.md</resource>
+                  <file>${project.basedir}/../README.md</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>LICENSE.txt</resource>
+                  <file>${project.basedir}/../LICENSE</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>NOTICE</resource>
+                  <file>${project.basedir}/../NOTICE</file>
+                </transformer>
+              </transformers>
   
 
   org.apache.calcite.avatica:*
@@ -70,6 +87,19 @@
   

[2/4] phoenix git commit: PHOENIX-3020 Bulk load tool is not working with new jars

2016-06-23 Thread ssa
PHOENIX-3020 Bulk load tool is not working with new jars


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bb8d7cd0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bb8d7cd0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bb8d7cd0

Branch: refs/heads/4.x-HBase-0.98
Commit: bb8d7cd0de5d6c6ecadcf67fcc245bd1d07fa8c4
Parents: d2f2bad
Author: Sergey Soldatov 
Authored: Wed Jun 22 23:45:54 2016 -0700
Committer: Sergey Soldatov 
Committed: Wed Jun 22 23:46:27 2016 -0700

--
 phoenix-client/pom.xml | 36 +---
 phoenix-queryserver-client/pom.xml | 32 
 phoenix-queryserver/pom.xml| 30 ++
 phoenix-server/pom.xml | 37 -
 4 files changed, 104 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb8d7cd0/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 32dc749..e6c6dbf 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -98,18 +98,6 @@
   
true
   false
   
-
-
-
-  
-LICENSE.txt
-ASL2.0
-  
-
-
-  false
-
 
 
@@ -126,7 +114,12 @@
 
               <resource>LICENSE.txt</resource>
-              <file>${project.basedir}/../LICENSE.txt</file>
+              <file>${project.basedir}/../LICENSE</file>
+            </transformer>
+            <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+              <resource>NOTICE</resource>
+              <file>${project.basedir}/../NOTICE</file>
 
   
   
@@ -137,6 +130,19 @@
                 <include>org.apache.phoenix:phoenix-client</include>
               </includes>
             </artifactSet>
+            <filters>
+              <filter>
+                <artifact>*:*</artifact>
+                <excludes>
+                  <exclude>META-INF/*.SF</exclude>
+                  <exclude>META-INF/*.DSA</exclude>
+                  <exclude>META-INF/*.RSA</exclude>
+                  <exclude>META-INF/license/*</exclude>
+                  <exclude>LICENSE.*</exclude>
+                  <exclude>NOTICE.*</exclude>
+                </excludes>
+              </filter>
+            </filters>
 
   
 
@@ -172,10 +178,6 @@
   
               <shadedPattern>${shaded.package}.com.thoughtworks</shadedPattern>
             </relocation>
             <relocation>
-              <pattern>com.sun.jersey</pattern>
-              <shadedPattern>${shaded.package}.com.sun.jersey</shadedPattern>
-            </relocation>
-            <relocation>
               <pattern>com.yammer</pattern>
               <shadedPattern>${shaded.package}.com.yammer</shadedPattern>
             </relocation>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb8d7cd0/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml 
b/phoenix-queryserver-client/pom.xml
index 1b9653f..a2522f8 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -72,6 +72,38 @@
 
 
               <finalName>phoenix-${project.version}-thin-client</finalName>
+              <transformers>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>README.md</resource>
+                  <file>${project.basedir}/../README.md</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>LICENSE.txt</resource>
+                  <file>${project.basedir}/../LICENSE</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>NOTICE</resource>
+                  <file>${project.basedir}/../NOTICE</file>
+                </transformer>
+              </transformers>
+              <filters>
+                <filter>
+                  <artifact>*:*</artifact>
+                  <excludes>
+                    <exclude>META-INF/*.SF</exclude>
+                    <exclude>META-INF/*.DSA</exclude>
+                    <exclude>META-INF/*.RSA</exclude>
+                    <exclude>META-INF/license/*</exclude>
+                    <exclude>LICENSE.*</exclude>
+                    <exclude>NOTICE.*</exclude>
+                  </excludes>
+                </filter>
+              </filters>
   
 
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb8d7cd0/phoenix-queryserver/pom.xml
--
diff --git a/phoenix-queryserver/pom.xml b/phoenix-queryserver/pom.xml
index 4f2a59b..12d091f 100644
--- a/phoenix-queryserver/pom.xml
+++ b/phoenix-queryserver/pom.xml
@@ -63,6 +63,23 @@
   false
   
true
   false
+              <transformers>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>README.md</resource>
+                  <file>${project.basedir}/../README.md</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>LICENSE.txt</resource>
+                  <file>${project.basedir}/../LICENSE</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>NOTICE</resource>
+                  <file>${project.basedir}/../NOTICE</file>
+                </transformer>
+              </transformers>
   
 
   org.apache.calcite.avatica:*
@@ -70,6 +87,19 @@
   

[1/4] phoenix git commit: PHOENIX-3020 Bulk load tool is not working with new jars

2016-06-23 Thread ssa
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 d2f2bade0 -> bb8d7cd0d
  refs/heads/4.x-HBase-1.0 a5caaeb89 -> 88c380925
  refs/heads/4.x-HBase-1.1 7b9c34980 -> 61ea48c16
  refs/heads/master 557197ed6 -> 3e69b90d8


PHOENIX-3020 Bulk load tool is not working with new jars


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3e69b90d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3e69b90d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3e69b90d

Branch: refs/heads/master
Commit: 3e69b90d8666ffb8017cf5537d987522ef485d57
Parents: 557197e
Author: Sergey Soldatov 
Authored: Wed Jun 22 23:45:54 2016 -0700
Committer: Sergey Soldatov 
Committed: Wed Jun 22 23:45:54 2016 -0700

--
 phoenix-client/pom.xml | 36 +---
 phoenix-queryserver-client/pom.xml | 32 
 phoenix-queryserver/pom.xml| 30 ++
 phoenix-server/pom.xml | 37 -
 4 files changed, 104 insertions(+), 31 deletions(-)
--
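Besides adding the LICENSE/NOTICE transformers and signature filters, each phoenix-client POM in this commit also drops one package relocation. Because the archive stripped the XML tags, here is the removed block in reconstructed form (tag names and indentation assumed from standard maven-shade-plugin relocation syntax):

```xml
<!-- Removed by this commit (reconstruction): com.sun.jersey was previously
     relocated under ${shaded.package}, renaming Jersey classes inside the
     shaded jar. -->
<relocation>
  <pattern>com.sun.jersey</pattern>
  <shadedPattern>${shaded.package}.com.sun.jersey</shadedPattern>
</relocation>
```

With the relocation gone, Jersey classes keep their original `com.sun.jersey` names inside the shaded jar, which code that references them by those names, presumably including the bulk load tool's MapReduce path, would require.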


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3e69b90d/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 655b0fd..2448d1f 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -98,18 +98,6 @@
   
true
   false
   
-
-
-
-  
-LICENSE.txt
-ASL2.0
-  
-
-
-  false
-
 
 
@@ -126,7 +114,12 @@
 
               <resource>LICENSE.txt</resource>
-              <file>${project.basedir}/../LICENSE.txt</file>
+              <file>${project.basedir}/../LICENSE</file>
+            </transformer>
+            <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+              <resource>NOTICE</resource>
+              <file>${project.basedir}/../NOTICE</file>
 
   
   
@@ -137,6 +130,19 @@
                 <include>org.apache.phoenix:phoenix-client</include>
               </includes>
             </artifactSet>
+            <filters>
+              <filter>
+                <artifact>*:*</artifact>
+                <excludes>
+                  <exclude>META-INF/*.SF</exclude>
+                  <exclude>META-INF/*.DSA</exclude>
+                  <exclude>META-INF/*.RSA</exclude>
+                  <exclude>META-INF/license/*</exclude>
+                  <exclude>LICENSE.*</exclude>
+                  <exclude>NOTICE.*</exclude>
+                </excludes>
+              </filter>
+            </filters>
 
   
 
@@ -172,10 +178,6 @@
   
               <shadedPattern>${shaded.package}.com.thoughtworks</shadedPattern>
             </relocation>
             <relocation>
-              <pattern>com.sun.jersey</pattern>
-              <shadedPattern>${shaded.package}.com.sun.jersey</shadedPattern>
-            </relocation>
-            <relocation>
               <pattern>com.yammer</pattern>
               <shadedPattern>${shaded.package}.com.yammer</shadedPattern>
             </relocation>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3e69b90d/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml 
b/phoenix-queryserver-client/pom.xml
index 3fba5aa..c761251 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -72,6 +72,38 @@
 
 
               <finalName>phoenix-${project.version}-thin-client</finalName>
+              <transformers>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>README.md</resource>
+                  <file>${project.basedir}/../README.md</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>LICENSE.txt</resource>
+                  <file>${project.basedir}/../LICENSE</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>NOTICE</resource>
+                  <file>${project.basedir}/../NOTICE</file>
+                </transformer>
+              </transformers>
+              <filters>
+                <filter>
+                  <artifact>*:*</artifact>
+                  <excludes>
+                    <exclude>META-INF/*.SF</exclude>
+                    <exclude>META-INF/*.DSA</exclude>
+                    <exclude>META-INF/*.RSA</exclude>
+                    <exclude>META-INF/license/*</exclude>
+                    <exclude>LICENSE.*</exclude>
+                    <exclude>NOTICE.*</exclude>
+                  </excludes>
+                </filter>
+              </filters>
   
 
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3e69b90d/phoenix-queryserver/pom.xml
--
diff --git a/phoenix-queryserver/pom.xml b/phoenix-queryserver/pom.xml
index 365f950..d4162ce 100644
--- a/phoenix-queryserver/pom.xml
+++ b/phoenix-queryserver/pom.xml
@@ -63,6 +63,23 @@
   false
   
true
   false
+              <transformers>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>README.md</resource>
+                  <file>${project.basedir}/../README.md</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+                  <resource>LICENSE.txt</resource>
+                  <file>${project.basedir}/../LICENSE</file>
+                </transformer>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">