hbase git commit: HBASE-18846 Accommodate the hbase-indexer/lily/SEP consumer deploy-type

2017-10-23 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/branch-2 a6f89f029 -> 94748a3c9


HBASE-18846 Accommodate the hbase-indexer/lily/SEP consumer deploy-type

Patch to start a standalone RegionServer that registers itself and
optionally stands up Services. Can work w/o a Master in the mix.
Useful for testing. Also can be used by hbase-indexer to put up a
Replication sink that extends public-facing APIs w/o need to extend
internals. See JIRA release note for detail.

This patch adds booleans for whether to start the Admin and Client
Services. Other refactoring moves all thread and service startup into
the one fat location so we can bypass 'services' we don't need.
See JIRA for an example hbase-server.xml that has config to shut down
the WAL, cache, etc.

Adds checks that a service/thread has been set up before we go to use it.

Renames the ExecutorService in HRegionServer from service to
executorService.

See JIRA too for an example Connection implementation that makes use of
the Connection plugin point to receive a replication stream. The default
replication sink catches the incoming replication stream, undoes the
WALEdits, and then creates a Table to call batch with the edits; up on
JIRA, an example Connection plugin (legit, supported) returns a Table
with an overridden batch method wherein we do index inserts, returning
appropriate results to keep the replication engine ticking over (see
the sketch after this message).

Upsides: an unadulterated RegionServer that keeps replication metrics
and even hosts a web UI if wanted. No hacks. Just ordained configs
shutting down unused services. Injection of the indexing function at a
blessed point with no pollution by hbase internals; only public imports.
No use of Private nor LimitedPrivate classes.
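
A sketch of the plugin point described above, illustrative only and not
the attachment up on JIRA. IndexingTable and the index* hooks are
hypothetical names; we assume the custom Connection that hands this
Table to the replication sink is registered via the
hbase.client.connection.impl key read by ConnectionFactory. The class is
abstract so the many other Table methods, which would simply delegate,
can stay elided.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Row;
    import org.apache.hadoop.hbase.client.Table;

    // Hypothetical sketch; only public client classes, per the message above.
    public abstract class IndexingTable implements Table {
      @Override
      public void batch(List<? extends Row> actions, Object[] results)
          throws IOException, InterruptedException {
        for (int i = 0; i < actions.size(); i++) {
          Row action = actions.get(i);
          if (action instanceof Put) {
            indexPut((Put) action);          // divert the edit to the indexer
          } else if (action instanceof Delete) {
            indexDelete((Delete) action);
          }
          // Non-null per-action results keep the replication engine ticking over.
          results[i] = Result.EMPTY_RESULT;
        }
      }

      // Hypothetical indexer hooks; hbase-indexer would do its work here.
      protected abstract void indexPut(Put put) throws IOException;
      protected abstract void indexDelete(Delete delete) throws IOException;
    }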


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/94748a3c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/94748a3c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/94748a3c

Branch: refs/heads/branch-2
Commit: 94748a3c9385adc06f9217a639ccf840d63207d4
Parents: a6f89f0
Author: Michael Stack 
Authored: Tue Sep 26 22:27:58 2017 -0700
Committer: Michael Stack 
Committed: Mon Oct 23 21:16:49 2017 -0700

--
 .../org/apache/hadoop/hbase/master/HMaster.java | 114 +--
 .../hbase/regionserver/HRegionServer.java   | 705 ++-
 .../hbase/regionserver/RSRpcServices.java   |  70 +-
 .../regionserver/TestRegionServerNoMaster.java  |   2 +-
 4 files changed, 489 insertions(+), 402 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/94748a3c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index cbb1537..8f2ae6b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -1,4 +1,4 @@
-/**
+/*
  *
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
@@ -99,7 +99,6 @@ import 
org.apache.hadoop.hbase.master.assignment.AssignmentManager;
 import org.apache.hadoop.hbase.master.assignment.MergeTableRegionsProcedure;
 import org.apache.hadoop.hbase.master.assignment.RegionStates;
 import org.apache.hadoop.hbase.master.assignment.RegionStates.RegionStateNode;
-import org.apache.hadoop.hbase.master.assignment.RegionStates.ServerStateNode;
 import org.apache.hadoop.hbase.master.balancer.BalancerChore;
 import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 import org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
@@ -472,66 +471,73 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   public HMaster(final Configuration conf, CoordinatedStateManager csm)
   throws IOException, KeeperException {
 super(conf, csm);
-this.rsFatals = new MemoryBoundedLogMessageBuffer(
-  conf.getLong("hbase.master.buffer.for.rs.fatals", 1*1024*1024));
+try {
+  this.rsFatals = new MemoryBoundedLogMessageBuffer(
+  conf.getLong("hbase.master.buffer.for.rs.fatals", 1 * 1024 * 1024));
 
-LOG.info("hbase.rootdir=" + getRootDir() +
-  ", hbase.cluster.distributed=" + 
this.conf.getBoolean(HConstants.CLUSTER_DISTRIBUTED, false));
+  LOG.info("hbase.rootdir=" + getRootDir() +
+  ", hbase.cluster.distributed=" + 
this.conf.getBoolean(HConstants.CLUSTER_DISTRIBUTED, false));
 
-// Disable usage of meta replicas in the master
-this.conf.setBoolean(HConstants.USE_META_REPLICAS, false);
+  // Disable usage of meta replicas in the master
+  

hbase git commit: HBASE-18846 Accommodate the hbase-indexer/lily/SEP consumer deploy-type

2017-10-23 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master 37b29e909 -> 456057ef9


HBASE-18846 Accommodate the hbase-indexer/lily/SEP consumer deploy-type

Patch to start a standalone RegionServer that registers itself and
optionally stands up Services. Can work w/o a Master in the mix.
Useful for testing. Also can be used by hbase-indexer to put up a
Replication sink that extends public-facing APIs w/o need to extend
internals. See JIRA release note for detail.

This patch adds booleans for whether to start the Admin and Client
Services. Other refactoring moves all thread and service startup into
the one fat location so we can bypass 'services' we don't need.
See JIRA for an example hbase-server.xml that has config to shut down
the WAL, cache, etc. (a minimal config sketch follows this message).

Adds checks that a service/thread has been set up before we go to use it.

Renames the ExecutorService in HRegionServer from service to
executorService.

See JIRA too for an example Connection implementation that makes use of
the Connection plugin point to receive a replication stream. The default
replication sink catches the incoming replication stream, undoes the
WALEdits, and then creates a Table to call batch with the edits; up on
JIRA, an example Connection plugin (legit, supported) returns a Table
with an overridden batch method wherein we do index inserts, returning
appropriate results to keep the replication engine ticking over.

Upsides: an unadulterated RegionServer that keeps replication metrics
and even hosts a web UI if wanted. No hacks. Just ordained configs
shutting down unused services. Injection of the indexing function at a
blessed point with no pollution by hbase internals; only public imports.
No use of Private nor LimitedPrivate classes.
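
A minimal config sketch in the spirit of the hbase-server.xml example up
on JIRA, not the attachment itself. The two boolean keys are assumptions
to verify against the release note; the same entries can live in
hbase-site.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MasterlessSinkConfig {
      public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        // Keep the Admin service up (the replication sink's RPCs ride on it,
        // if memory serves) and drop the Client service we don't need.
        conf.setBoolean("hbase.regionserver.admin.service", true);
        conf.setBoolean("hbase.regionserver.client.service", false);
        return conf;
      }
    }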


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/456057ef
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/456057ef
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/456057ef

Branch: refs/heads/master
Commit: 456057ef90f152315a7f244141f3fca4ff748336
Parents: 37b29e9
Author: Michael Stack 
Authored: Tue Sep 26 22:27:58 2017 -0700
Committer: Michael Stack 
Committed: Mon Oct 23 21:16:13 2017 -0700

--
 .../org/apache/hadoop/hbase/master/HMaster.java | 114 +--
 .../hbase/regionserver/HRegionServer.java   | 705 ++-
 .../hbase/regionserver/RSRpcServices.java   |  70 +-
 .../regionserver/TestRegionServerNoMaster.java  |   2 +-
 4 files changed, 489 insertions(+), 402 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/456057ef/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index cbb1537..8f2ae6b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -1,4 +1,4 @@
-/**
+/*
  *
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
@@ -99,7 +99,6 @@ import 
org.apache.hadoop.hbase.master.assignment.AssignmentManager;
 import org.apache.hadoop.hbase.master.assignment.MergeTableRegionsProcedure;
 import org.apache.hadoop.hbase.master.assignment.RegionStates;
 import org.apache.hadoop.hbase.master.assignment.RegionStates.RegionStateNode;
-import org.apache.hadoop.hbase.master.assignment.RegionStates.ServerStateNode;
 import org.apache.hadoop.hbase.master.balancer.BalancerChore;
 import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 import org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
@@ -472,66 +471,73 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   public HMaster(final Configuration conf, CoordinatedStateManager csm)
   throws IOException, KeeperException {
 super(conf, csm);
-this.rsFatals = new MemoryBoundedLogMessageBuffer(
-  conf.getLong("hbase.master.buffer.for.rs.fatals", 1*1024*1024));
+try {
+  this.rsFatals = new MemoryBoundedLogMessageBuffer(
+  conf.getLong("hbase.master.buffer.for.rs.fatals", 1 * 1024 * 1024));
 
-LOG.info("hbase.rootdir=" + getRootDir() +
-  ", hbase.cluster.distributed=" + 
this.conf.getBoolean(HConstants.CLUSTER_DISTRIBUTED, false));
+  LOG.info("hbase.rootdir=" + getRootDir() +
+  ", hbase.cluster.distributed=" + 
this.conf.getBoolean(HConstants.CLUSTER_DISTRIBUTED, false));
 
-// Disable usage of meta replicas in the master
-this.conf.setBoolean(HConstants.USE_META_REPLICAS, false);
+  // Disable usage of meta replicas in the master
+  

[42/50] [abbrv] hbase git commit: HBASE-18410 disable the HBASE-18957 test until we can fix it on the feature branch.

2017-10-23 Thread zhangduo
HBASE-18410 disable the HBASE-18957 test until we can fix it on the feature 
branch.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2ebb7da6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2ebb7da6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2ebb7da6

Branch: refs/heads/HBASE-18410
Commit: 2ebb7da688e214becbdcc56c311fdb84225aeef1
Parents: 37b29e9
Author: Sean Busbey 
Authored: Mon Oct 9 15:24:00 2017 -0500
Committer: zhangduo 
Committed: Tue Oct 24 11:16:32 2017 +0800

--
 .../java/org/apache/hadoop/hbase/filter/TestFilterListOnMini.java   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2ebb7da6/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterListOnMini.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterListOnMini.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterListOnMini.java
index dd2399f..590b26e 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterListOnMini.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterListOnMini.java
@@ -58,6 +58,7 @@ public class TestFilterListOnMini {
 TEST_UTIL.shutdownMiniCluster();
   }
 
+  @Ignore("HBASE-18410 Should not merge without this test running.")
   @Test
   public void testFiltersWithOR() throws Exception {
 TableName tn = TableName.valueOf(name.getMethodName());



[43/50] [abbrv] hbase git commit: HBASE-17678 FilterList with MUST_PASS_ONE lead to redundancy cells returned - addendum

2017-10-23 Thread zhangduo
HBASE-17678 FilterList with MUST_PASS_ONE lead to redundancy cells returned - 
addendum

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3f5f2a54
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3f5f2a54
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3f5f2a54

Branch: refs/heads/HBASE-18410
Commit: 3f5f2a544691ba156705c87e3fd71c02ca1e7f5c
Parents: 49a877d
Author: huzheng 
Authored: Wed Jun 7 14:49:29 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 11:30:34 2017 +0800

--
 .../java/org/apache/hadoop/hbase/filter/FilterList.java | 12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3f5f2a54/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
index 3493082..83db1f2 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparatorImpl;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.hadoop.hbase.KeyValueUtil;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.FilterProtos;
@@ -145,7 +146,7 @@ final public class FilterList extends FilterBase {
 
   public void initPrevListForMustPassOne(int size) {
 if (operator == Operator.MUST_PASS_ONE) {
-  if (this.prevCellList == null) {
+  if (this.prevFilterRCList == null) {
 prevFilterRCList = new ArrayList<>(Collections.nCopies(size, null));
   }
   if (this.prevCellList == null) {
@@ -407,7 +408,14 @@ final public class FilterList extends FilterBase {
 ReturnCode localRC = filter.filterKeyValue(c);
 // Update previous cell and return code we encountered.
 prevFilterRCList.set(i, localRC);
-prevCellList.set(i, c);
+if (c == null || localRC == ReturnCode.INCLUDE || localRC == 
ReturnCode.SKIP) {
+  // If previous return code is INCLUDE or SKIP, we should always pass 
the next cell to the
+  // corresponding sub-filter(need not test 
shouldPassCurrentCellToFilter() method), So we
+  // need not save current cell to prevCellList for saving heap memory.
+  prevCellList.set(i, null);
+} else {
+  prevCellList.set(i, KeyValueUtil.toNewKeyCell(c));
+}
 
 if (localRC != ReturnCode.SEEK_NEXT_USING_HINT) {
   seenNonHintReturnCode = true;



[13/50] [abbrv] hbase git commit: HBASE-18418 Remove apache_hbase_topology from dev-support

2017-10-23 Thread zhangduo
HBASE-18418 Remove apache_hbase_topology from dev-support


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3acb0817
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3acb0817
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3acb0817

Branch: refs/heads/HBASE-18410
Commit: 3acb081787a4289d86d977db26bedaf6a42172ce
Parents: c16eb78
Author: Dima Spivak 
Authored: Thu Jul 20 10:08:11 2017 -0700
Committer: Dima Spivak 
Committed: Wed Oct 18 14:08:26 2017 -0700

--
 dev-support/apache_hbase_topology/Dockerfile|  24 --
 dev-support/apache_hbase_topology/README.md |  49 ---
 dev-support/apache_hbase_topology/__init__.py   |  15 -
 dev-support/apache_hbase_topology/actions.py| 421 ---
 .../apache_hbase_topology/configurations.cfg|  80 
 dev-support/apache_hbase_topology/profile.cfg   |  82 
 dev-support/apache_hbase_topology/ssh/id_rsa|  44 --
 .../apache_hbase_topology/ssh/id_rsa.pub|  18 -
 8 files changed, 733 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3acb0817/dev-support/apache_hbase_topology/Dockerfile
--
diff --git a/dev-support/apache_hbase_topology/Dockerfile 
b/dev-support/apache_hbase_topology/Dockerfile
deleted file mode 100644
index 714a55c..0000000
--- a/dev-support/apache_hbase_topology/Dockerfile
+++ /dev/null
@@ -1,24 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-FROM debian:wheezy
-
-ENV TOPOLOGY_NAME=apache_hbase
-ADD . /root/clusterdock/clusterdock/topologies/${TOPOLOGY_NAME}
-
-RUN find /root -type f -name id_rsa -exec chmod 600 {} \;
-
-VOLUME /root/clusterdock/clusterdock/topologies/${TOPOLOGY_NAME}
-CMD ["/true"]

http://git-wip-us.apache.org/repos/asf/hbase/blob/3acb0817/dev-support/apache_hbase_topology/README.md
--
diff --git a/dev-support/apache_hbase_topology/README.md 
b/dev-support/apache_hbase_topology/README.md
deleted file mode 100644
index 018ee99..0000000
--- a/dev-support/apache_hbase_topology/README.md
+++ /dev/null
@@ -1,49 +0,0 @@
-
-# apache_hbase clusterdock topology
-
-## Overview
-*clusterdock* is a framework for creating Docker-based container clusters. 
Unlike regular Docker
-containers, which tend to run single processes and then exit once the process 
terminates, these
-container clusters are characterized by the execution of an init process in 
daemon mode. As such,
-the containers act more like "fat containers" or "light VMs;" entities with 
accessible IP addresses
-which emulate standalone hosts.
-
-*clusterdock* relies upon the notion of a topology to define how clusters 
should be built into
-images and then what to do with those images to start Docker container 
clusters.
-
-## Usage
-The *clusterdock* framework is designed to be run out of its own container 
while affecting
-operations on the host. To avoid problems that might result from incorrectly
-formatting this framework invocation, a Bash helper script (`clusterdock.sh`) 
can be sourced on a
-host that has Docker installed. Afterwards, running any of the binaries 
intended to carry
-out *clusterdock* actions can be done using the `clusterdock_run` command.
-```
-wget 
https://raw.githubusercontent.com/cloudera/clusterdock/master/clusterdock.sh
-# ALWAYS INSPECT SCRIPTS FROM THE INTERNET BEFORE SOURCING THEM.
-source clusterdock.sh
-```
-
-Since the *clusterdock* framework itself lives outside of Apache HBase, an 
environmental variable
-is used to let the helper script know where to find an image of the 
*apache_hbase* topology. To
-start a four-node Apache HBase cluster with default versions, you would simply 
run
-```
-CLUSTERDOCK_TOPOLOGY_IMAGE=apache_hbase_topology_location clusterdock_run \
-./bin/start_cluster apache_hbase --secondary-nodes='node-{2..4}'
-```

http://git-wip-us.apache.org/repos/asf/hbase/blob/3acb0817/dev-support/apache_hbase_topology/__init__.py

[14/50] [abbrv] hbase git commit: HBASE-19042 Oracle Java 8u144 downloader broken in precommit check

2017-10-23 Thread zhangduo
HBASE-19042 Oracle Java 8u144 downloader broken in precommit check


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9e688117
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9e688117
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9e688117

Branch: refs/heads/HBASE-18410
Commit: 9e688117bad3cb4826c7201bb359672676389620
Parents: 3acb081
Author: zhangduo 
Authored: Thu Oct 19 14:49:09 2017 +0800
Committer: zhangduo 
Committed: Thu Oct 19 15:32:48 2017 +0800

--
 dev-support/docker/Dockerfile | 29 +++--
 1 file changed, 11 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9e688117/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 62c6030..c23c70d 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -65,18 +65,18 @@ RUN apt-get -q update && apt-get -q install 
--no-install-recommends -y \
 zlib1g-dev
 
 ###
-# Oracle Java
+# OpenJDK 8
 ###
 
 RUN echo "dot_style = mega" > "/root/.wgetrc"
 RUN echo "quiet = on" >> "/root/.wgetrc"
 
 RUN apt-get -q update && apt-get -q install --no-install-recommends -y 
software-properties-common
-RUN add-apt-repository -y ppa:webupd8team/java
-
-# Auto-accept the Oracle JDK license
-RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select 
true | sudo /usr/bin/debconf-set-selections
-RUN apt-get -q update && apt-get -q install --no-install-recommends -y 
oracle-java8-installer
+RUN add-apt-repository -y ppa:openjdk-r/ppa
+RUN apt-get -q update
+RUN apt-get -q install --no-install-recommends -y openjdk-8-jdk
+RUN update-alternatives --config java
+RUN update-alternatives --config javac
 
 
 # Apps that require Java
@@ -131,23 +131,16 @@ RUN pip install python-dateutil
 # Install Ruby 2, based on Yetus 0.4.0 dockerfile
 ###
 RUN echo 'gem: --no-rdoc --no-ri' >> /root/.gemrc
-RUN apt-get -q install -y ruby2.0
-#
-# on trusty, the above installs ruby2.0 and ruby (1.9.3) exes
-# but update-alternatives is broken, so we need to do some work
-# to make 2.0 actually the default without the system flipping out
-#
-# See https://bugs.launchpad.net/ubuntu/+source/ruby2.0/+bug/1310292
-#
-RUN dpkg-divert --add --rename --divert /usr/bin/ruby.divert /usr/bin/ruby
-RUN dpkg-divert --add --rename --divert /usr/bin/gem.divert /usr/bin/gemrc
-RUN update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby2.0 1
-RUN update-alternatives --install /usr/bin/gem gem /usr/bin/gem2.0 1
+RUN apt-add-repository ppa:brightbox/ruby-ng
+RUN apt-get -q update
 
+RUN apt-get -q install --no-install-recommends -y ruby2.2 ruby-switch
+RUN ruby-switch --set ruby2.2
 
 
 # Install rubocop
 ###
+RUN gem install rake
 RUN gem install rubocop
 
 



[12/50] [abbrv] hbase git commit: HBASE-19038 precommit mvn install should run from root on patch

2017-10-23 Thread zhangduo
HBASE-19038 precommit mvn install should run from root on patch


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c16eb788
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c16eb788
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c16eb788

Branch: refs/heads/HBASE-18410
Commit: c16eb7881fa530a2dd626c1e06e294c2d198af22
Parents: e320df5
Author: Mike Drob 
Authored: Wed Oct 18 10:20:03 2017 -0500
Committer: Mike Drob 
Committed: Wed Oct 18 10:41:17 2017 -0500

--
 dev-support/hbase-personality.sh | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c16eb788/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 43371f8..9b23e11 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -84,9 +84,7 @@ function personality_modules
 
   extra="-DHBasePatchProcess"
 
-  if [[ ${repostatus} == branch
- && ${testtype} == mvninstall ]] ||
- [[ "${BUILDMODE}" == full ]];then
+  if [[ ${testtype} == mvninstall ]] || [[ "${BUILDMODE}" == full ]]; then
 personality_enqueue_module . ${extra}
 return
   fi



[38/50] [abbrv] hbase git commit: HBASE-18893 remove add/delete/modify column

2017-10-23 Thread zhangduo
HBASE-18893 remove add/delete/modify column


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a1bc20ab
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a1bc20ab
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a1bc20ab

Branch: refs/heads/HBASE-18410
Commit: a1bc20ab5886acd65cc2b693eccf8e736d373b6b
Parents: 880b26d
Author: Mike Drob 
Authored: Tue Oct 17 16:47:41 2017 -0500
Committer: Mike Drob 
Committed: Mon Oct 23 20:02:25 2017 -0500

--
 .../src/main/protobuf/MasterProcedure.proto |  46 ---
 .../hbase/coprocessor/MasterObserver.java   | 142 ---
 .../org/apache/hadoop/hbase/master/HMaster.java | 122 +++---
 .../hbase/master/MasterCoprocessorHost.java | 133 ---
 .../procedure/AddColumnFamilyProcedure.java | 358 --
 .../procedure/DeleteColumnFamilyProcedure.java  | 371 ---
 .../procedure/ModifyColumnFamilyProcedure.java  | 323 
 .../hbase/security/access/AccessController.java |  40 +-
 .../visibility/VisibilityController.java|  34 --
 .../hbase/coprocessor/TestMasterObserver.java   | 194 --
 .../procedure/TestAddColumnFamilyProcedure.java | 190 --
 .../TestDeleteColumnFamilyProcedure.java| 211 ---
 .../TestModifyColumnFamilyProcedure.java| 183 -
 .../security/access/TestAccessController.java   |  51 ---
 .../access/TestWithDisabledAuthorization.java   |  32 --
 15 files changed, 44 insertions(+), 2386 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a1bc20ab/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
--
diff --git a/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto 
b/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
index 626530f..af9caef 100644
--- a/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
@@ -148,52 +148,6 @@ message DeleteNamespaceStateData {
   optional NamespaceDescriptor namespace_descriptor = 2;
 }
 
-enum AddColumnFamilyState {
-  ADD_COLUMN_FAMILY_PREPARE = 1;
-  ADD_COLUMN_FAMILY_PRE_OPERATION = 2;
-  ADD_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  ADD_COLUMN_FAMILY_POST_OPERATION = 4;
-  ADD_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 5;
-}
-
-message AddColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required ColumnFamilySchema columnfamily_schema = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
-enum ModifyColumnFamilyState {
-  MODIFY_COLUMN_FAMILY_PREPARE = 1;
-  MODIFY_COLUMN_FAMILY_PRE_OPERATION = 2;
-  MODIFY_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  MODIFY_COLUMN_FAMILY_POST_OPERATION = 4;
-  MODIFY_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 5;
-}
-
-message ModifyColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required ColumnFamilySchema columnfamily_schema = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
-enum DeleteColumnFamilyState {
-  DELETE_COLUMN_FAMILY_PREPARE = 1;
-  DELETE_COLUMN_FAMILY_PRE_OPERATION = 2;
-  DELETE_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  DELETE_COLUMN_FAMILY_DELETE_FS_LAYOUT = 4;
-  DELETE_COLUMN_FAMILY_POST_OPERATION = 5;
-  DELETE_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 6;
-}
-
-message DeleteColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required bytes columnfamily_name = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
 enum EnableTableState {
   ENABLE_TABLE_PREPARE = 1;
   ENABLE_TABLE_PRE_OPERATION = 2;

http://git-wip-us.apache.org/repos/asf/hbase/blob/a1bc20ab/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
index 29f0f9f..397ec8a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
@@ -271,148 +271,6 @@ public interface MasterObserver {
   final TableDescriptor htd) throws IOException {}
 
   /**
-   * Called prior to adding a new column family to the table.  Called as part 
of
-   * add column RPC call.
-   *
-   * @param ctx the environment to interact with the framework and master
-   * @param tableName the name of the table
-   * @param columnFamily the ColumnFamilyDescriptor
-   */
-  default void preAddColumnFamily(final 

[24/50] [abbrv] hbase git commit: HBASE-19045 Deprecate RegionObserver#postInstantiateDeleteTracker.

2017-10-23 Thread zhangduo
HBASE-19045 Deprecate RegionObserver#postInstantiateDeleteTracker.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/64d164b8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/64d164b8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/64d164b8

Branch: refs/heads/HBASE-18410
Commit: 64d164b86d32f6d6e987722bf223a809743f9f47
Parents: d798541
Author: anoopsamjohn 
Authored: Fri Oct 20 23:57:40 2017 +0530
Committer: anoopsamjohn 
Committed: Fri Oct 20 23:57:40 2017 +0530

--
 .../org/apache/hadoop/hbase/coprocessor/RegionObserver.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/64d164b8/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
index d03a9be..076503f 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
@@ -1016,11 +1016,14 @@ public interface RegionObserver {
* Called after the ScanQueryMatcher creates ScanDeleteTracker. Implementing
* this hook would help in creating customised DeleteTracker and returning
* the newly created DeleteTracker
-   *
+   * 
+   * Warn: This is used by internal coprocessors. Should not be implemented by 
user coprocessors
* @param ctx the environment provided by the region server
* @param delTracker the deleteTracker that is created by the QueryMatcher
* @return the Delete Tracker
+   * @deprecated Since 2.0 with out any replacement and will be removed in 3.0
*/
+  @Deprecated
   default DeleteTracker postInstantiateDeleteTracker(
+  ObserverContext<RegionCoprocessorEnvironment> ctx, DeleteTracker 
delTracker)
   throws IOException {



[03/50] [abbrv] hbase git commit: HBASE-18945 Make a IA.LimitedPrivate interface for CellComparator (Ram)

2017-10-23 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/70f4c5da/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
index 039f499..7068fe1 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.CellComparatorImpl;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.KeyValue;
@@ -104,7 +105,7 @@ public class HFileReaderImpl implements HFile.Reader, 
Configurable {
   private int avgValueLen = -1;
 
   /** Key comparator */
-  private CellComparator comparator = CellComparator.COMPARATOR;
+  private CellComparator comparator = CellComparatorImpl.COMPARATOR;
 
   /** Size of this file. */
   private final long fileSize;
@@ -727,7 +728,7 @@ public class HFileReaderImpl implements HFile.Reader, 
Configurable {
 offsetFromPos += Bytes.SIZEOF_LONG;
 blockBuffer.asSubByteBuffer(blockBuffer.position() + offsetFromPos, 
klen, pair);
 bufBackedKeyOnlyKv.setKey(pair.getFirst(), pair.getSecond(), klen);
-int comp = reader.getComparator().compareKeyIgnoresMvcc(key, 
bufBackedKeyOnlyKv);
+int comp = CellUtil.compareKeyIgnoresMvcc(reader.getComparator(), key, 
bufBackedKeyOnlyKv);
 offsetFromPos += klen + vlen;
 if (this.reader.getFileContext().isIncludesTags()) {
   // Read short as unsigned, high byte first
@@ -810,8 +811,8 @@ public class HFileReaderImpl implements HFile.Reader, 
Configurable {
 } else {
   // The comparison with no_next_index_key has to be checked
   if (this.nextIndexedKey != null &&
-  (this.nextIndexedKey == KeyValueScanner.NO_NEXT_INDEXED_KEY || 
reader
-  .getComparator().compareKeyIgnoresMvcc(key, nextIndexedKey) < 
0)) {
+  (this.nextIndexedKey == KeyValueScanner.NO_NEXT_INDEXED_KEY || 
CellUtil
+  .compareKeyIgnoresMvcc(reader.getComparator(), key, 
nextIndexedKey) < 0)) {
 // The reader shall continue to scan the current data block instead
 // of querying the
 // block index as long as it knows the target key is strictly
@@ -864,8 +865,7 @@ public class HFileReaderImpl implements HFile.Reader, 
Configurable {
 return false;
   }
   Cell firstKey = getFirstKeyCellInBlock(seekToBlock);
-  if (reader.getComparator()
-   .compareKeyIgnoresMvcc(firstKey, key) >= 0) {
+  if (CellUtil.compareKeyIgnoresMvcc(reader.getComparator(), firstKey, 
key) >= 0) {
 long previousBlockOffset = seekToBlock.getPrevBlockOffset();
 // The key we are interested in
 if (previousBlockOffset == -1) {
@@ -1229,7 +1229,7 @@ public class HFileReaderImpl implements HFile.Reader, 
Configurable {
 public int compareKey(CellComparator comparator, Cell key) {
   blockBuffer.asSubByteBuffer(blockBuffer.position() + KEY_VALUE_LEN_SIZE, 
currKeyLen, pair);
   this.bufBackedKeyOnlyKv.setKey(pair.getFirst(), pair.getSecond(), 
currKeyLen);
-  return comparator.compareKeyIgnoresMvcc(key, this.bufBackedKeyOnlyKv);
+  return CellUtil.compareKeyIgnoresMvcc(comparator, key, 
this.bufBackedKeyOnlyKv);
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/hbase/blob/70f4c5da/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
index 5b25bed..33cfa1d 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
@@ -36,10 +36,11 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hbase.ByteBufferCell;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.CellComparatorImpl;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.KeyValueUtil;
-import org.apache.hadoop.hbase.CellComparator.MetaCellComparator;
+import org.apache.hadoop.hbase.CellComparatorImpl.MetaCellComparator;
 import org.apache.yetus.audience.InterfaceAudience;
 import 

[37/50] [abbrv] hbase git commit: HBASE-18893 remove add/delete/modify column

2017-10-23 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/a1bc20ab/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
deleted file mode 100644
index 01de512..0000000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
+++ /dev/null
@@ -1,190 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hbase.master.procedure;
-
-import static org.junit.Assert.assertTrue;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.CategoryBasedTimeout;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.InvalidFamilyOperationException;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.procedure2.Procedure;
-import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
-import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
-import org.apache.hadoop.hbase.testclassification.MasterTests;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-import org.junit.rules.TestName;
-import org.junit.rules.TestRule;
-
-@Category({MasterTests.class, MediumTests.class})
-public class TestAddColumnFamilyProcedure extends TestTableDDLProcedureBase {
-  private static final Log LOG = 
LogFactory.getLog(TestAddColumnFamilyProcedure.class);
-  @Rule public final TestRule timeout = 
CategoryBasedTimeout.builder().withTimeout(this.getClass()).
-  withLookingForStuckThread(true).build();
-
-  @Rule public TestName name = new TestName();
-
-  @Test(timeout = 60000)
-  public void testAddColumnFamily() throws Exception {
-final TableName tableName = TableName.valueOf(name.getMethodName());
-final String cf1 = "cf1";
-final String cf2 = "cf2";
-final HColumnDescriptor columnDescriptor1 = new HColumnDescriptor(cf1);
-final HColumnDescriptor columnDescriptor2 = new HColumnDescriptor(cf2);
-final ProcedureExecutor<MasterProcedureEnv> procExec = 
getMasterProcedureExecutor();
-
-MasterProcedureTestingUtility.createTable(procExec, tableName, null, "f3");
-
-// Test 1: Add a column family online
-long procId1 = procExec.submitProcedure(
-  new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, 
columnDescriptor1));
-// Wait the completion
-ProcedureTestingUtility.waitProcedure(procExec, procId1);
-ProcedureTestingUtility.assertProcNotFailed(procExec, procId1);
-
-MasterProcedureTestingUtility.validateColumnFamilyAddition(getMaster(), 
tableName, cf1);
-
-// Test 2: Add a column family offline
-UTIL.getAdmin().disableTable(tableName);
-long procId2 = procExec.submitProcedure(
-  new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, 
columnDescriptor2));
-// Wait the completion
-ProcedureTestingUtility.waitProcedure(procExec, procId2);
-ProcedureTestingUtility.assertProcNotFailed(procExec, procId2);
-MasterProcedureTestingUtility.validateColumnFamilyAddition(getMaster(), 
tableName, cf2);
-  }
-
-  @Test(timeout=60000)
-  public void testAddSameColumnFamilyTwice() throws Exception {
-final TableName tableName = TableName.valueOf(name.getMethodName());
-final String cf2 = "cf2";
-final HColumnDescriptor columnDescriptor = new HColumnDescriptor(cf2);
-
-final ProcedureExecutor<MasterProcedureEnv> procExec = 
getMasterProcedureExecutor();
-
-MasterProcedureTestingUtility.createTable(procExec, tableName, null, "f1");
-
-// add the column family
-long procId1 = procExec.submitProcedure(
-  new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, 
columnDescriptor));
-// Wait the completion
-ProcedureTestingUtility.waitProcedure(procExec, procId1);
-ProcedureTestingUtility.assertProcNotFailed(procExec, procId1);

[04/50] [abbrv] hbase git commit: HBASE-18945 Make a IA.LimitedPrivate interface for CellComparator (Ram)

2017-10-23 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/70f4c5da/hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparatorImpl.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparatorImpl.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparatorImpl.java
new file mode 100644
index 0000000..264984a
--- /dev/null
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparatorImpl.java
@@ -0,0 +1,381 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.KeyValue.Type;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+import org.apache.hadoop.hbase.util.ByteBufferUtils;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import org.apache.hadoop.hbase.shaded.com.google.common.primitives.Longs;
+
+/**
+ * Compare two HBase cells.  Do not use this method comparing 
-ROOT- or
+ * hbase:meta cells.  Cells from these tables need a specialized 
comparator, one that
+ * takes account of the special formatting of the row where we have commas to 
delimit table from
+ * regionname, from row.  See KeyValue for how it has a special comparator to 
do hbase:meta cells
+ * and yet another for -ROOT-.
+ * While using this comparator for {{@link #compareRows(Cell, Cell)} et al, 
the hbase:meta cells
+ * format should be taken into consideration, for which the instance of this 
comparator
+ * should be used.  In all other cases the static APIs in this comparator 
would be enough
+ */
+@edu.umd.cs.findbugs.annotations.SuppressWarnings(
+value="UNKNOWN",
+justification="Findbugs doesn't like the way we are negating the result of 
a compare in below")
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class CellComparatorImpl implements CellComparator {
+  static final Log LOG = LogFactory.getLog(CellComparatorImpl.class);
+  /**
+   * Comparator for plain key/values; i.e. non-catalog table key/values. Works 
on Key portion
+   * of KeyValue only.
+   */
+  public static final CellComparatorImpl COMPARATOR = new CellComparatorImpl();
+  /**
+   * A {@link CellComparatorImpl} for hbase:meta catalog table
+   * {@link KeyValue}s.
+   */
+  public static final CellComparatorImpl META_COMPARATOR = new 
MetaCellComparator();
+
+  @Override
+  public int compare(Cell a, Cell b) {
+return compare(a, b, false);
+  }
+
+  /**
+   * Compare cells.
+   * @param a
+   * @param b
+   * @param ignoreSequenceid True if we are to compare the key portion only 
and ignore
+   * the sequenceid. Set to false to compare key and consider sequenceid.
+   * @return 0 if equal, -1 if a < b, and +1 if a > b.
+   */
+  public final int compare(final Cell a, final Cell b, boolean 
ignoreSequenceid) {
+// row
+int c = compareRows(a, b);
+if (c != 0) return c;
+
+c = compareWithoutRow(a, b);
+if(c != 0) return c;
+
+if (!ignoreSequenceid) {
+  // Negate following comparisons so later edits show up first
+  // mvccVersion: later sorts first
+  return Longs.compare(b.getSequenceId(), a.getSequenceId());
+} else {
+  return c;
+}
+  }
+
+  /**
+   * Compares the family and qualifier part of the cell
+   * @param left the left cell
+   * @param right the right cell
+   * @return 0 if both cells are equal, 1 if left cell is bigger than right, 
-1 otherwise
+   */
+  public final int compareColumns(final Cell left, final Cell right) {
+int diff = compareFamilies(left, right);
+if (diff != 0) {
+  return diff;
+}
+return compareQualifiers(left, right);
+  }
+
+  /**
+   * Compare the families of left and right cell
+   * @param left
+   * @param right
+   * @return 0 if both cells are equal, 1 if left cell is bigger than right, 
-1 otherwise
+   */
+  @Override
+  public final int compareFamilies(Cell left, Cell right) {
+if (left instanceof ByteBufferCell && right instanceof ByteBufferCell) {
+  return ByteBufferUtils.compareTo(((ByteBufferCell) 
left).getFamilyByteBuffer(),
+ 

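
A usage sketch of the new split, not code from this patch: callers
compare through the IA.LimitedPrivate CellComparator interface while the
IA.Private CellComparatorImpl supplies the instances (COMPARATOR, or
META_COMPARATOR for hbase:meta cells).

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellComparator;
    import org.apache.hadoop.hbase.CellComparatorImpl;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CompareSketch {
      public static void main(String[] args) {
        Cell a = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("f"),
            Bytes.toBytes("q"), Bytes.toBytes("v"));
        Cell b = new KeyValue(Bytes.toBytes("row2"), Bytes.toBytes("f"),
            Bytes.toBytes("q"), Bytes.toBytes("v"));
        CellComparator comparator = CellComparatorImpl.COMPARATOR;
        // Negative result: row1 sorts before row2.
        System.out.println(comparator.compare(a, b));
      }
    }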
[08/50] [abbrv] hbase git commit: HBASE-19032 Set mimetype for patches uploaded by submit-patch.py

2017-10-23 Thread zhangduo
HBASE-19032 Set mimetype for patches uploaded by submit-patch.py

Change-Id: I38e64174e2525cd6a929922b2612c91d660d


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5368fd5b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5368fd5b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5368fd5b

Branch: refs/heads/HBASE-18410
Commit: 5368fd5bf0a281e67c4dde25816a1362d1f0a3f0
Parents: 41cc9a1
Author: Apekshit Sharma 
Authored: Tue Oct 17 15:32:39 2017 -0700
Committer: Apekshit Sharma 
Committed: Tue Oct 17 15:43:07 2017 -0700

--
 dev-support/submit-patch.py | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5368fd5b/dev-support/submit-patch.py
--
diff --git a/dev-support/submit-patch.py b/dev-support/submit-patch.py
index 577be52..ad39495 100755
--- a/dev-support/submit-patch.py
+++ b/dev-support/submit-patch.py
@@ -205,12 +205,12 @@ def get_credentials():
 return creds
 
 
-def attach_patch_to_jira(issue_url, patch_filepath, creds):
+def attach_patch_to_jira(issue_url, patch_filepath, patch_filename, creds):
 # Upload patch to jira using REST API.
 headers = {'X-Atlassian-Token': 'no-check'}
-files = {'file': open(patch_filepath, 'rb')}
+files = {'file': (patch_filename, open(patch_filepath, 'rb'), 
'text/plain')}
 jira_auth = requests.auth.HTTPBasicAuth(creds['jira_username'], 
creds['jira_password'])
-attachment_url = issue_url +  "/attachments"
+attachment_url = issue_url + "/attachments"
 r = requests.post(attachment_url, headers = headers, files = files, auth = 
jira_auth)
 assert_status_code(r, 200, "uploading patch to jira")
 
@@ -256,7 +256,7 @@ if args.jira_id is not None:
 creds = get_credentials()
 issue_url = "https://issues.apache.org/jira/rest/api/2/issue/; + 
args.jira_id
 
-attach_patch_to_jira(issue_url, patch_filepath, creds)
+attach_patch_to_jira(issue_url, patch_filepath, patch_filename, creds)
 
 if not args.skip_review_board:
 rb_auth = requests.auth.HTTPBasicAuth(creds['rb_username'], 
creds['rb_password'])



[02/50] [abbrv] hbase git commit: HBASE-18945 Make a IA.LimitedPrivate interface for CellComparator (Ram)

2017-10-23 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/70f4c5da/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
index 13589fb..39419ca 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
@@ -41,7 +41,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.ArrayBackedTag;
 import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.CellComparatorImpl;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HBaseCommonTestingUtility;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
@@ -117,7 +117,7 @@ public class TestHFile  {
 HFileContext meta = new HFileContextBuilder().withBlockSize(64 * 
1024).build();
 StoreFileWriter sfw =
 new StoreFileWriter.Builder(conf, fs).withOutputDir(storeFileParentDir)
-
.withComparator(CellComparator.COMPARATOR).withFileContext(meta).build();
+
.withComparator(CellComparatorImpl.COMPARATOR).withFileContext(meta).build();
 
 final int rowLen = 32;
 Random RNG = new Random();
@@ -319,7 +319,7 @@ public class TestHFile  {
 Writer writer = HFile.getWriterFactory(conf, cacheConf)
 .withOutputStream(fout)
 .withFileContext(meta)
-.withComparator(CellComparator.COMPARATOR)
+.withComparator(CellComparatorImpl.COMPARATOR)
 .create();
 LOG.info(writer);
 writeRecords(writer, useTags);
@@ -486,72 +486,72 @@ public class TestHFile  {
 9,
 KeyValue.Type.Maximum.getCode(),
 HConstants.EMPTY_BYTE_ARRAY);
-Cell mid = HFileWriterImpl.getMidpoint(CellComparator.COMPARATOR, left, 
right);
-assertTrue(CellComparator.COMPARATOR.compareKeyIgnoresMvcc(left, mid) <= 
0);
-assertTrue(CellComparator.COMPARATOR.compareKeyIgnoresMvcc(mid, right) == 
0);
+Cell mid = HFileWriterImpl.getMidpoint(CellComparatorImpl.COMPARATOR, 
left, right);
+assertTrue(CellUtil.compareKeyIgnoresMvcc(CellComparatorImpl.COMPARATOR, 
left, mid) <= 0);
+assertTrue(CellUtil.compareKeyIgnoresMvcc(CellComparatorImpl.COMPARATOR, 
mid, right) == 0);
   }
 
   @Test
   public void testGetShortMidpoint() {
 Cell left = CellUtil.createCell(Bytes.toBytes("a"), Bytes.toBytes("a"), 
Bytes.toBytes("a"));
 Cell right = CellUtil.createCell(Bytes.toBytes("a"), Bytes.toBytes("a"), 
Bytes.toBytes("a"));
-Cell mid = HFileWriterImpl.getMidpoint(CellComparator.COMPARATOR, left, 
right);
-assertTrue(CellComparator.COMPARATOR.compareKeyIgnoresMvcc(left, mid) <= 
0);
-assertTrue(CellComparator.COMPARATOR.compareKeyIgnoresMvcc(mid, right) <= 
0);
+Cell mid = HFileWriterImpl.getMidpoint(CellComparatorImpl.COMPARATOR, 
left, right);
+assertTrue(CellUtil.compareKeyIgnoresMvcc(CellComparatorImpl.COMPARATOR, 
left, mid) <= 0);
+assertTrue(CellUtil.compareKeyIgnoresMvcc(CellComparatorImpl.COMPARATOR, 
mid, right) <= 0);
 
 left = CellUtil.createCell(Bytes.toBytes("a"), Bytes.toBytes("a"), 
Bytes.toBytes("a"));
 right = CellUtil.createCell(Bytes.toBytes("b"), Bytes.toBytes("a"), 
Bytes.toBytes("a"));
-mid = HFileWriterImpl.getMidpoint(CellComparator.COMPARATOR, left, right);
-assertTrue(CellComparator.COMPARATOR.compareKeyIgnoresMvcc(left, mid) < 0);
-assertTrue(CellComparator.COMPARATOR.compareKeyIgnoresMvcc(mid, right) <= 
0);
+mid = HFileWriterImpl.getMidpoint(CellComparatorImpl.COMPARATOR, left, 
right);
+assertTrue(CellUtil.compareKeyIgnoresMvcc(CellComparatorImpl.COMPARATOR, 
left, mid) < 0);
+assertTrue(CellUtil.compareKeyIgnoresMvcc(CellComparatorImpl.COMPARATOR, 
mid, right) <= 0);
 
 left = CellUtil.createCell(Bytes.toBytes("g"), Bytes.toBytes("a"), 
Bytes.toBytes("a"));
 right = CellUtil.createCell(Bytes.toBytes("i"), Bytes.toBytes("a"), 
Bytes.toBytes("a"));
-mid = HFileWriterImpl.getMidpoint(CellComparator.COMPARATOR, left, right);
-assertTrue(CellComparator.COMPARATOR.compareKeyIgnoresMvcc(left, mid) < 0);
-assertTrue(CellComparator.COMPARATOR.compareKeyIgnoresMvcc(mid, right) <= 
0);
+mid = HFileWriterImpl.getMidpoint(CellComparatorImpl.COMPARATOR, left, 
right);
+assertTrue(CellUtil.compareKeyIgnoresMvcc(CellComparatorImpl.COMPARATOR, 
left, mid) < 0);
+assertTrue(CellUtil.compareKeyIgnoresMvcc(CellComparatorImpl.COMPARATOR, 
mid, right) <= 0);
 
 left = CellUtil.createCell(Bytes.toBytes("a"), Bytes.toBytes("a"), 
Bytes.toBytes("a"));
 right = CellUtil.createCell(Bytes.toBytes("bbb"), Bytes.toBytes("a"), 
Bytes.toBytes("a"));
-mid = 

[10/50] [abbrv] hbase git commit: HBASE-19001 Remove the hooks in RegionObserver which are designed to construct a StoreScanner which is marked as IA.Private

2017-10-23 Thread zhangduo
HBASE-19001 Remove the hooks in RegionObserver which are designed to construct 
a StoreScanner which is marked as IA.Private


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e804dd0b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e804dd0b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e804dd0b

Branch: refs/heads/HBASE-18410
Commit: e804dd0b600f898f7519dee7134b68ad04c20a9a
Parents: 5368fd5
Author: zhangduo 
Authored: Tue Oct 17 21:27:05 2017 +0800
Committer: zhangduo 
Committed: Wed Oct 18 11:06:39 2017 +0800

--
 .../hbase/coprocessor/RegionObserver.java   |  77 -
 .../hadoop/hbase/regionserver/HMobStore.java|  24 +-
 .../hadoop/hbase/regionserver/HRegion.java  |   4 +-
 .../hadoop/hbase/regionserver/HStore.java   |  18 +-
 .../hadoop/hbase/regionserver/Region.java   |   3 -
 .../regionserver/RegionCoprocessorHost.java |  64 +---
 .../regionserver/ReversedStoreScanner.java  |   6 +-
 .../hadoop/hbase/regionserver/StoreFlusher.java |  12 +-
 .../regionserver/compactions/Compactor.java |  44 +--
 ...estAvoidCellReferencesIntoShippedBlocks.java | 197 ++---
 .../hadoop/hbase/client/TestFromClientSide.java | 156 --
 .../client/TestFromClientSideScanExcpetion.java | 238 +++
 ...mClientSideScanExcpetionWithCoprocessor.java |  43 +++
 .../hbase/coprocessor/SimpleRegionObserver.java |  36 ---
 .../TestRegionObserverScannerOpenHook.java  |  31 +-
 .../regionserver/DelegatingInternalScanner.java |  45 +++
 .../regionserver/NoOpScanPolicyObserver.java|  60 +---
 .../hbase/util/TestCoprocessorScanPolicy.java   | 290 +++
 18 files changed, 647 insertions(+), 701 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e804dd0b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
index a1e4f0e..d03a9be 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
@@ -124,27 +124,6 @@ public interface RegionObserver {
  default void postLogReplay(ObserverContext<RegionCoprocessorEnvironment> c) 
{}
 
   /**
-   * Called before a memstore is flushed to disk and prior to creating the 
scanner to read from
-   * the memstore.  To override or modify how a memstore is flushed,
-   * implementing classes can return a new scanner to provide the KeyValues to 
be
-   * stored into the new {@code StoreFile} or null to perform the default 
processing.
-   * Calling {@link 
org.apache.hadoop.hbase.coprocessor.ObserverContext#bypass()} has no
-   * effect in this hook.
-   * @param c the environment provided by the region server
-   * @param store the store being flushed
-   * @param scanners the scanners for the memstore that is flushed
-   * @param s the base scanner, if not {@code null}, from previous 
RegionObserver in the chain
-   * @param readPoint the readpoint to create scanner
-   * @return the scanner to use during the flush.  {@code null} if the default 
implementation
-   * is to be used.
-   */
-  default InternalScanner 
preFlushScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
-  Store store, List<KeyValueScanner> scanners, InternalScanner s, long 
readPoint)
-  throws IOException {
-return s;
-  }
-
-  /**
* Called before the memstore is flushed to disk.
* @param c the environment provided by the region server
*/
@@ -236,33 +215,6 @@ public interface RegionObserver {
   }
 
   /**
-   * Called prior to writing the {@link StoreFile}s selected for compaction 
into a new
-   * {@code StoreFile} and prior to creating the scanner used to read the 
input files. To override
-   * or modify the compaction process, implementing classes can return a new 
scanner to provide the
-   * KeyValues to be stored into the new {@code StoreFile} or null to perform 
the default
-   * processing. Calling {@link 
org.apache.hadoop.hbase.coprocessor.ObserverContext#bypass()} has no
-   * effect in this hook.
-   * @param c the environment provided by the region server
-   * @param store the store being compacted
-   * @param scanners the list of store file scanners to be read from
-   * @param scanType the {@link ScanType} indicating whether this is a major 
or minor compaction
-   * @param earliestPutTs timestamp of the earliest put that was found in any 
of the involved store
-   *  files
-   * @param s the base scanner, if not {@code null}, from previous 
RegionObserver in the chain
-   * @param tracker 

[11/50] [abbrv] hbase git commit: HBASE-19020 HBase Rest test for xml parsing external entities should not rely on implementation of java XML APIs.

2017-10-23 Thread zhangduo
HBASE-19020 HBase Rest test for xml parsing external entities should not rely 
on implementation of java XML APIs.

Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e320df5a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e320df5a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e320df5a

Branch: refs/heads/HBASE-18410
Commit: e320df5a0c267258c03909da8d0eee4c0e287532
Parents: e804dd0
Author: Sean Busbey 
Authored: Mon Oct 16 16:11:39 2017 -0500
Committer: Sean Busbey 
Committed: Wed Oct 18 09:39:55 2017 -0500

--
 .../apache/hadoop/hbase/rest/client/TestXmlParsing.java   | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e320df5a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestXmlParsing.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestXmlParsing.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestXmlParsing.java
index 5e259f2..586e33c 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestXmlParsing.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestXmlParsing.java
@@ -23,7 +23,10 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 import java.io.IOException;
+import javax.xml.bind.UnmarshalException;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.rest.Constants;
 import org.apache.hadoop.hbase.rest.model.StorageClusterVersionModel;
@@ -37,6 +40,7 @@ import org.junit.experimental.categories.Category;
  */
 @Category(SmallTests.class)
 public class TestXmlParsing {
+  private static final Log LOG = LogFactory.getLog(TestXmlParsing.class);
 
   @Test
   public void testParsingClusterVersion() throws Exception {
@@ -68,8 +72,12 @@ public class TestXmlParsing {
   admin.getClusterVersion();
   fail("Expected getClusterVersion() to throw an exception");
 } catch (IOException e) {
+  assertEquals("Cause of exception ought to be a failure to parse the 
stream due to our " +
+  "invalid external entity. Make sure this isn't just a false positive 
due to " +
+  "implementation. see HBASE-19020.", UnmarshalException.class, 
e.getCause().getClass());
   final String exceptionText = StringUtils.stringifyException(e);
-  final String expectedText = "The entity \"xee\" was referenced, but not 
declared.";
+  final String expectedText = "\"xee\"";
+  LOG.debug("exception text: '" + exceptionText + "'", e);
   assertTrue("Exception does not contain expected text", 
exceptionText.contains(expectedText));
 }
   }
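
The kernel of the change, as a minimal standalone sketch (the unmarshal helper is
hypothetical): pin the failure to the JAXB cause type and a bare entity name,
because the full message wording varies across JAXP implementations.

  try {
    unmarshal(xmlWithExternalEntity); // hypothetical parse call
    fail("Expected the invalid external entity to break parsing");
  } catch (IOException e) {
    // Stable across JDK vendors: the cause type, not the message text.
    assertEquals(UnmarshalException.class, e.getCause().getClass());
    // Loose message check: just the entity name.
    assertTrue(StringUtils.stringifyException(e).contains("\"xee\""));
  }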



[50/50] [abbrv] hbase git commit: HBASE-18368 FilterList with multiple FamilyFilters concatenated by OR does not work

2017-10-23 Thread zhangduo
HBASE-18368 FilterList with multiple FamilyFilters concatenated by OR does not 
work

Signed-off-by: Guanghao Zhang 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b5896b7a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b5896b7a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b5896b7a

Branch: refs/heads/HBASE-18410
Commit: b5896b7a45b5e2324dc5d3e5fbd775c3e784caf5
Parents: a17094f
Author: huzheng 
Authored: Tue Oct 17 19:25:23 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 11:39:31 2017 +0800

--
 .../org/apache/hadoop/hbase/filter/Filter.java  | 10 +---
 .../hadoop/hbase/filter/FilterListWithOR.java   | 10 ++--
 .../hadoop/hbase/filter/TestFilterList.java | 26 
 .../hbase/filter/TestFilterListOnMini.java  |  7 +++---
 4 files changed, 44 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b5896b7a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
index 70c68b6..a92ea0b 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
@@ -172,8 +172,12 @@ public abstract class Filter {
  */
 NEXT_COL,
 /**
- * Done with columns, skip to next row. Note that filterRow() will
- * still be called.
+ * Seek to next row in current family. It may still pass a cell whose 
family is different but
+ * row is the same as previous cell to {@link #filterKeyValue(Cell)} , 
even if we get a NEXT_ROW
+ * returned for previous cell. For more details see HBASE-18368. 
+ * Once reset() method was invoked, then we switch to the next row for all 
family, and you can
+ * catch the event by invoking CellUtils.matchingRows(previousCell, 
currentCell). 
+ * Note that filterRow() will still be called. 
  */
 NEXT_ROW,
 /**
@@ -181,7 +185,7 @@ public abstract class Filter {
  */
 SEEK_NEXT_USING_HINT,
 /**
- * Include KeyValue and done with row, seek to next.
+ * Include KeyValue and done with row, seek to next. See NEXT_ROW.
  */
 INCLUDE_AND_SEEK_NEXT_ROW,
 }
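
A minimal sketch of the scenario this fixes (family names hypothetical): an OR
list of two FamilyFilters, where one branch's NEXT_ROW must not starve the other
family's cells in the same row.

  FilterList orList = new FilterList(FilterList.Operator.MUST_PASS_ONE,
      new FamilyFilter(CompareOperator.EQUAL, new BinaryComparator(Bytes.toBytes("cf1"))),
      new FamilyFilter(CompareOperator.EQUAL, new BinaryComparator(Bytes.toBytes("cf2"))));
  Scan scan = new Scan();
  scan.setFilter(orList);
  // Before this change the cf1 branch's NEXT_ROW could suppress cf2 cells of the same row.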

http://git-wip-us.apache.org/repos/asf/hbase/blob/b5896b7a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
index bac9023..31e2a55 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
@@ -74,7 +74,12 @@ public class FilterListWithOR extends FilterListBase {
* as the previous cell even if filter-A has NEXT_COL returned for the 
previous cell. So we should
* save the previous cell and the return code list when checking previous 
cell for every filter in
* filter list, and verify if currentCell fit the previous return code, if 
fit then pass the
-   * currentCell to the corresponding filter. (HBASE-17678)
+   * currentCell to the corresponding filter. (HBASE-17678) 
+   * Note that: In StoreScanner level, NEXT_ROW will skip to the next row in 
current family, and in
+   * RegionScanner level, NEXT_ROW will skip to the next row in current family 
and switch to the
+   * next family for RegionScanner, INCLUDE_AND_NEXT_ROW is the same. so we 
should pass current cell
+   * to the filter, if row mismatch or row match but column family mismatch. 
(HBASE-18368)
+   * @see org.apache.hadoop.hbase.filter.Filter.ReturnCode
*/
   private boolean shouldPassCurrentCellToFilter(Cell prevCell, Cell 
currentCell, int filterIdx)
   throws IOException {
@@ -94,7 +99,8 @@ public class FilterListWithOR extends FilterListBase {
   return !CellUtil.matchingRowColumn(prevCell, currentCell);
 case NEXT_ROW:
 case INCLUDE_AND_SEEK_NEXT_ROW:
-  return !CellUtil.matchingRows(prevCell, currentCell);
+  return !CellUtil.matchingRows(prevCell, currentCell)
+  || !CellUtil.matchingFamily(prevCell, currentCell);
 default:
   throw new IllegalStateException("Received code is not valid.");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/b5896b7a/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java

[29/50] [abbrv] hbase git commit: HBASE-19010 Reimplement getMasterInfoPort for Admin

2017-10-23 Thread zhangduo
HBASE-19010 Reimplement getMasterInfoPort for Admin


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/592d541f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/592d541f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/592d541f

Branch: refs/heads/HBASE-18410
Commit: 592d541f5d5e5fea5668915e0400f048fa3f65e3
Parents: cb5c477
Author: Guanghao Zhang 
Authored: Tue Oct 17 19:12:54 2017 +0800
Committer: Guanghao Zhang 
Committed: Sat Oct 21 18:19:22 2017 +0800

--
 .../org/apache/hadoop/hbase/ClusterStatus.java  | 28 +++-
 .../org/apache/hadoop/hbase/client/Admin.java   |  4 ++-
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  9 +++
 .../apache/hadoop/hbase/client/HBaseAdmin.java  | 12 -
 .../hbase/shaded/protobuf/ProtobufUtil.java |  7 +
 .../src/main/protobuf/ClusterStatus.proto   |  2 ++
 .../org/apache/hadoop/hbase/master/HMaster.java |  6 +
 .../apache/hadoop/hbase/TestInfoServers.java| 10 +++
 .../hbase/client/TestAsyncClusterAdminApi.java  | 19 +
 .../hbase/client/TestClientClusterStatus.java   |  2 ++
 10 files changed, 80 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/592d541f/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
index 0655b18..13a1358 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
@@ -81,6 +81,7 @@ public class ClusterStatus {
   private String clusterId;
   private String[] masterCoprocessors;
   private Boolean balancerOn;
+  private int masterInfoPort;
 
   /**
* Use {@link ClusterStatus.Builder} to construct a ClusterStatus instead.
@@ -95,7 +96,8 @@ public class ClusterStatus {
   final Collection backupMasters,
   final List rit,
   final String[] masterCoprocessors,
-  final Boolean balancerOn) {
+  final Boolean balancerOn,
+  final int masterInfoPort) {
 // TODO: make this constructor private
 this.hbaseVersion = hbaseVersion;
 this.liveServers = servers;
@@ -106,6 +108,7 @@ public class ClusterStatus {
 this.clusterId = clusterid;
 this.masterCoprocessors = masterCoprocessors;
 this.balancerOn = balancerOn;
+this.masterInfoPort = masterInfoPort;
   }
 
   /**
@@ -202,15 +205,17 @@ public class ClusterStatus {
   getDeadServerNames().containsAll(other.getDeadServerNames()) &&
   Arrays.equals(getMasterCoprocessors(), other.getMasterCoprocessors()) &&
   Objects.equal(getMaster(), other.getMaster()) &&
-  getBackupMasters().containsAll(other.getBackupMasters());
+  getBackupMasters().containsAll(other.getBackupMasters()) &&
+  Objects.equal(getClusterId(), other.getClusterId()) &&
+  getMasterInfoPort() == other.getMasterInfoPort();
   }
 
   /**
* @see java.lang.Object#hashCode()
*/
   public int hashCode() {
-return Objects.hashCode(hbaseVersion, liveServers, deadServers,
-  master, backupMasters);
+return Objects.hashCode(hbaseVersion, liveServers, deadServers, master, 
backupMasters,
+  clusterId, masterInfoPort);
   }
 
   /**
@@ -312,6 +317,10 @@ public class ClusterStatus {
 return balancerOn;
   }
 
+  public int getMasterInfoPort() {
+return masterInfoPort;
+  }
+
   public String toString() {
 StringBuilder sb = new StringBuilder(1024);
 sb.append("Master: " + master);
@@ -372,6 +381,7 @@ public class ClusterStatus {
 private String clusterId = null;
 private String[] masterCoprocessors = null;
 private Boolean balancerOn = null;
+private int masterInfoPort = -1;
 
 private Builder() {}
 
@@ -420,10 +430,15 @@ public class ClusterStatus {
   return this;
 }
 
+public Builder setMasterInfoPort(int masterInfoPort) {
+  this.masterInfoPort = masterInfoPort;
+  return this;
+}
+
 public ClusterStatus build() {
   return new ClusterStatus(hbaseVersion, clusterId, liveServers,
   deadServers, master, backupMasters, intransition, masterCoprocessors,
-  balancerOn);
+  balancerOn, masterInfoPort);
 }
   }
 
@@ -439,6 +454,7 @@ public class ClusterStatus {
 MASTER, /** status about master */
 BACKUP_MASTERS, /** status about backup masters */
 MASTER_COPROCESSORS, /** status about master coprocessors */
-REGIONS_IN_TRANSITION; /** status about regions in transition */
+REGIONS_IN_TRANSITION, /** status about regions in transition */
+
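
Client-side, the new field surfaces roughly like this (a sketch; configuration
and connection setup assumed):

  try (Connection conn = ConnectionFactory.createConnection(conf);
       Admin admin = conn.getAdmin()) {
    ClusterStatus status = admin.getClusterStatus();
    int port = status.getMasterInfoPort(); // -1 when no master info server is running
  }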

[21/50] [abbrv] hbase git commit: HBASE-19051 Add new split algorithm for num string

2017-10-23 Thread zhangduo
HBASE-19051 Add new split algorithm for num string

Signed-off-by: tedyu 
Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8c6ddc1a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8c6ddc1a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8c6ddc1a

Branch: refs/heads/HBASE-18410
Commit: 8c6ddc1aa5497a38018fdcf100bd33b385ca2c84
Parents: 5facade
Author: xiaowen147 
Authored: Fri Oct 20 02:18:17 2017 +0800
Committer: tedyu 
Committed: Fri Oct 20 09:49:57 2017 -0700

--
 .../hadoop/hbase/util/RegionSplitter.java   | 80 +++-
 .../hadoop/hbase/util/TestRegionSplitter.java   | 74 --
 2 files changed, 131 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8c6ddc1a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java
index 3ee593a..06bccd1 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java
@@ -274,6 +274,12 @@ public class RegionSplitter {
* bin/hbase org.apache.hadoop.hbase.util.RegionSplitter -c 60 -f test:rs
* myTable HexStringSplit
* 
+   * create a table named 'myTable' with 50 pre-split regions,
+   * assuming the keys are decimal-encoded ASCII:
+   * 
+   * bin/hbase org.apache.hadoop.hbase.util.RegionSplitter -c 50
+   * myTable DecimalStringSplit
+   * 
* perform a rolling split of 'myTable' (i.e. 60 = 120 regions), # 2
* outstanding splits at a time, assuming keys are uniformly distributed
* bytes:
@@ -283,9 +289,9 @@ public class RegionSplitter {
* 
* 
*
-   * There are two SplitAlgorithms built into RegionSplitter, HexStringSplit
-   * and UniformSplit. These are different strategies for choosing region
-   * boundaries. See their source code for details.
+   * There are three SplitAlgorithms built into RegionSplitter, HexStringSplit,
+   * DecimalStringSplit, and UniformSplit. These are different strategies for
+   * choosing region boundaries. See their source code for details.
*
* @param args
*  Usage: RegionSplitter TABLE SPLITALGORITHM
@@ -353,9 +359,10 @@ public class RegionSplitter {
 if (2 != cmd.getArgList().size() || !oneOperOnly || cmd.hasOption("h")) {
  new HelpFormatter().printHelp("RegionSplitter <TABLE> <SPLITALGORITHM>\n"+
   "SPLITALGORITHM is a java class name of a class implementing " +
-  "SplitAlgorithm, or one of the special strings HexStringSplit " +
-  "or UniformSplit, which are built-in split algorithms. " +
+  "SplitAlgorithm, or one of the special strings HexStringSplit or " +
+  "DecimalStringSplit or UniformSplit, which are built-in split 
algorithms. " +
   "HexStringSplit treats keys as hexadecimal ASCII, and " +
+  "DecimalStringSplit treats keys as decimal ASCII, and " +
   "UniformSplit treats keys as arbitrary bytes.", opt);
   return;
 }
@@ -660,6 +667,8 @@ public class RegionSplitter {
 // their simple class name instead of a fully qualified class name.
 if(splitClassName.equals(HexStringSplit.class.getSimpleName())) {
   splitClass = HexStringSplit.class;
+} else if 
(splitClassName.equals(DecimalStringSplit.class.getSimpleName())) {
+  splitClass = DecimalStringSplit.class;
 } else if (splitClassName.equals(UniformSplit.class.getSimpleName())) {
   splitClass = UniformSplit.class;
 } else {
@@ -893,15 +902,52 @@ public class RegionSplitter {
* Since this split algorithm uses hex strings as keys, it is easy to read and
* write in the shell but takes up more space and may be non-intuitive.
*/
-  public static class HexStringSplit implements SplitAlgorithm {
+  public static class HexStringSplit extends NumberStringSplit {
 final static String DEFAULT_MIN_HEX = "";
 final static String DEFAULT_MAX_HEX = "";
+final static int RADIX_HEX = 16;
+
+public HexStringSplit() {
+  super(DEFAULT_MIN_HEX, DEFAULT_MAX_HEX, RADIX_HEX);
+}
 
-String firstRow = DEFAULT_MIN_HEX;
-BigInteger firstRowInt = BigInteger.ZERO;
-String lastRow = DEFAULT_MAX_HEX;
-BigInteger lastRowInt = new BigInteger(lastRow, 16);
-int rowComparisonLength = lastRow.length();
+  }
+
+  /**
+   * The format of a DecimalStringSplit region boundary is the ASCII 
representation of
+   * reversed sequential number, or any 
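
Programmatic use follows the same shape as the shell examples above (a sketch;
Admin and table descriptor setup assumed):

  RegionSplitter.SplitAlgorithm algo =
      RegionSplitter.newSplitAlgoInstance(conf, "DecimalStringSplit");
  byte[][] splits = algo.split(50);           // 49 boundaries for 50 regions
  admin.createTable(tableDescriptor, splits); // pre-split at decimal-string keys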

[17/50] [abbrv] hbase git commit: HBASE-19042 Oracle Java 8u144 downloader broken in precommit check

2017-10-23 Thread zhangduo
HBASE-19042 Oracle Java 8u144 downloader broken in precommit check

Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/af479c58
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/af479c58
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/af479c58

Branch: refs/heads/HBASE-18410
Commit: af479c580c24a78b34052dc4ad16dacd3dd988cd
Parents: 909e5f2
Author: zhangduo 
Authored: Thu Oct 19 14:49:09 2017 +0800
Committer: Mike Drob 
Committed: Thu Oct 19 15:53:52 2017 -0500

--
 dev-support/docker/Dockerfile | 29 +++--
 1 file changed, 11 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/af479c58/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 62c6030..c23c70d 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -65,18 +65,18 @@ RUN apt-get -q update && apt-get -q install 
--no-install-recommends -y \
 zlib1g-dev
 
 ###
-# Oracle Java
+# OpenJDK 8
 ###
 
 RUN echo "dot_style = mega" > "/root/.wgetrc"
 RUN echo "quiet = on" >> "/root/.wgetrc"
 
 RUN apt-get -q update && apt-get -q install --no-install-recommends -y 
software-properties-common
-RUN add-apt-repository -y ppa:webupd8team/java
-
-# Auto-accept the Oracle JDK license
-RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select 
true | sudo /usr/bin/debconf-set-selections
-RUN apt-get -q update && apt-get -q install --no-install-recommends -y 
oracle-java8-installer
+RUN add-apt-repository -y ppa:openjdk-r/ppa
+RUN apt-get -q update
+RUN apt-get -q install --no-install-recommends -y openjdk-8-jdk
+RUN update-alternatives --config java
+RUN update-alternatives --config javac
 
 
 # Apps that require Java
@@ -131,23 +131,16 @@ RUN pip install python-dateutil
 # Install Ruby 2, based on Yetus 0.4.0 dockerfile
 ###
 RUN echo 'gem: --no-rdoc --no-ri' >> /root/.gemrc
-RUN apt-get -q install -y ruby2.0
-#
-# on trusty, the above installs ruby2.0 and ruby (1.9.3) exes
-# but update-alternatives is broken, so we need to do some work
-# to make 2.0 actually the default without the system flipping out
-#
-# See https://bugs.launchpad.net/ubuntu/+source/ruby2.0/+bug/1310292
-#
-RUN dpkg-divert --add --rename --divert /usr/bin/ruby.divert /usr/bin/ruby
-RUN dpkg-divert --add --rename --divert /usr/bin/gem.divert /usr/bin/gemrc
-RUN update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby2.0 1
-RUN update-alternatives --install /usr/bin/gem gem /usr/bin/gem2.0 1
+RUN apt-add-repository ppa:brightbox/ruby-ng
+RUN apt-get -q update
 
+RUN apt-get -q install --no-install-recommends -y ruby2.2 ruby-switch
+RUN ruby-switch --set ruby2.2
 
 
 # Install rubocop
 ###
+RUN gem install rake
 RUN gem install rubocop
 
 



[35/50] [abbrv] hbase git commit: HBASE-18989 Polish the compaction related CP hooks

2017-10-23 Thread zhangduo
HBASE-18989 Polish the compaction related CP hooks


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c9fdbec7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c9fdbec7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c9fdbec7

Branch: refs/heads/HBASE-18410
Commit: c9fdbec772fe7dea06644d86e2854b98047ac9da
Parents: 4add40c
Author: zhangduo 
Authored: Mon Oct 23 16:44:54 2017 +0800
Committer: zhangduo 
Committed: Mon Oct 23 16:44:54 2017 +0800

--
 .../hbase/coprocessor/RegionObserver.java   |  23 +-
 .../hadoop/hbase/regionserver/CompactSplit.java | 101 +--
 .../hadoop/hbase/regionserver/HRegion.java  |  34 ++-
 .../hbase/regionserver/HRegionServer.java   |  10 +-
 .../hadoop/hbase/regionserver/HStore.java   |   5 +-
 .../hbase/regionserver/RSRpcServices.java   |  47 +---
 .../hadoop/hbase/regionserver/Region.java   |  19 +-
 .../regionserver/RegionServerServices.java  |  10 +-
 .../apache/hadoop/hbase/regionserver/Store.java |   7 -
 .../compactions/CompactionLifeCycleTracker.java |  19 +-
 .../compactions/CompactionRequester.java|  46 
 .../hadoop/hbase/MockRegionServerServices.java  |   6 +
 .../hadoop/hbase/master/MockRegionServer.java   |  36 +--
 .../hbase/regionserver/TestCompaction.java  |   2 +-
 .../TestCompactionLifeCycleTracker.java | 267 +++
 15 files changed, 487 insertions(+), 145 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c9fdbec7/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
index 94550df..ba96a5b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
@@ -158,7 +158,7 @@ public interface RegionObserver {
   /**
* Called prior to selecting the {@link StoreFile StoreFiles} to compact 
from the list of
* available candidates. To alter the files used for compaction, you may 
mutate the passed in list
-   * of candidates.
+   * of candidates. If you remove all the candidates then the compaction will 
be canceled.
* @param c the environment provided by the region server
* @param store the store where compaction is being requested
* @param candidates the store files currently available for compaction
@@ -183,18 +183,12 @@ public interface RegionObserver {
 
   /**
* Called prior to writing the {@link StoreFile}s selected for compaction 
into a new
-   * {@code StoreFile}. To override or modify the compaction process, 
implementing classes have two
-   * options:
-   * 
-   * Wrap the provided {@link InternalScanner} with a custom 
implementation that is returned
-   * from this method. The custom scanner can then inspect
-   *  {@link org.apache.hadoop.hbase.KeyValue}s from the wrapped scanner, 
applying its own
-   *   policy to what gets written.
-   * Call {@link 
org.apache.hadoop.hbase.coprocessor.ObserverContext#bypass()} and provide a
-   * custom implementation for writing of new {@link StoreFile}s. 
Note: any implementations
-   * bypassing core compaction using this approach must write out new store 
files themselves or the
-   * existing data will no longer be available after compaction.
-   * 
+   * {@code StoreFile}.
+   * 
+   * To override or modify the compaction process, implementing classes can 
wrap the provided
+   * {@link InternalScanner} with a custom implementation that is returned 
from this method. The
+   * custom scanner can then inspect {@link org.apache.hadoop.hbase.Cell}s 
from the wrapped scanner,
+   * applying its own policy to what gets written.
* @param c the environment provided by the region server
* @param store the store being compacted
* @param scanner the scanner over existing data used in the store file 
rewriting
@@ -206,8 +200,7 @@ public interface RegionObserver {
*/
   default InternalScanner 
preCompact(ObserverContext c, Store store,
   InternalScanner scanner, ScanType scanType, CompactionLifeCycleTracker 
tracker,
-  CompactionRequest request)
-  throws IOException {
+  CompactionRequest request) throws IOException {
 return scanner;
   }
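
The wrapping approach the reworked javadoc recommends looks roughly like this
(a sketch; the cell policy is left as a comment):

  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, InternalScanner scanner, ScanType scanType,
      CompactionLifeCycleTracker tracker, CompactionRequest request) throws IOException {
    return new InternalScanner() {
      @Override
      public boolean next(List<Cell> result, ScannerContext ctx) throws IOException {
        boolean more = scanner.next(result, ctx);
        // apply a custom policy to 'result' before it is written to the new StoreFile
        return more;
      }
      @Override
      public void close() throws IOException {
        scanner.close();
      }
    };
  }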
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/c9fdbec7/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
--
diff --git 

[30/50] [abbrv] hbase git commit: HBASE-19007 Align Services Interfaces in Master and RegionServer

2017-10-23 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/38879fb3/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenProvider.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenProvider.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenProvider.java
index 0588138..e355752 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenProvider.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenProvider.java
@@ -27,6 +27,8 @@ import java.util.Collections;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.CoreCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.HasRegionServerServices;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
@@ -46,6 +48,7 @@ import org.apache.yetus.audience.InterfaceAudience;
  * Provides a service for obtaining authentication tokens via the
  * {@link AuthenticationProtos} AuthenticationService coprocessor service.
  */
+@CoreCoprocessor
 @InterfaceAudience.Private
 public class TokenProvider implements 
AuthenticationProtos.AuthenticationService.Interface,
 RegionCoprocessor {
@@ -59,11 +62,13 @@ public class TokenProvider implements 
AuthenticationProtos.AuthenticationService
   public void start(CoprocessorEnvironment env) {
 // if running at region
 if (env instanceof RegionCoprocessorEnvironment) {
-  RegionCoprocessorEnvironment regionEnv =
-  (RegionCoprocessorEnvironment)env;
-  assert regionEnv.getCoprocessorRegionServerServices() instanceof 
RegionServerServices;
-  RpcServerInterface server = ((RegionServerServices) regionEnv
-  .getCoprocessorRegionServerServices()).getRpcServer();
+  RegionCoprocessorEnvironment regionEnv = 
(RegionCoprocessorEnvironment)env;
+  /* Getting the RpcServer from a RegionCE is wrong. There cannot be an 
expectation that Region
+   is hosted inside a RegionServer. If you need RpcServer, then pass in a 
RegionServerCE.
+   TODO: FIX.
+   */
+  RegionServerServices rss = 
((HasRegionServerServices)regionEnv).getRegionServerServices();
+  RpcServerInterface server = rss.getRpcServer();
   SecretManager mgr = ((RpcServer)server).getSecretManager();
   if (mgr instanceof AuthenticationTokenSecretManager) {
 secretManager = (AuthenticationTokenSecretManager)mgr;
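
The access pattern this moves to, roughly (a sketch; the surrounding coprocessor
body is assumed):

  if (env instanceof HasRegionServerServices) {
    // Present only when the hosting environment really is a RegionServer and
    // the coprocessor is annotated @CoreCoprocessor.
    RegionServerServices rss = ((HasRegionServerServices) env).getRegionServerServices();
  }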

http://git-wip-us.apache.org/repos/asf/hbase/blob/38879fb3/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultVisibilityLabelServiceImpl.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultVisibilityLabelServiceImpl.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultVisibilityLabelServiceImpl.java
index 8a5265d..5bd7c3f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultVisibilityLabelServiceImpl.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultVisibilityLabelServiceImpl.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -46,11 +46,11 @@ import org.apache.hadoop.hbase.ArrayBackedTag;
 import org.apache.hadoop.hbase.AuthUtil;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HConstants.OperationStatusCode;
 import org.apache.hadoop.hbase.Tag;
 import org.apache.hadoop.hbase.TagType;
 import org.apache.hadoop.hbase.TagUtil;
+import org.apache.hadoop.hbase.coprocessor.HasRegionServerServices;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Mutation;
@@ -62,7 +62,6 @@ import org.apache.hadoop.hbase.io.util.StreamUtils;
 import org.apache.hadoop.hbase.regionserver.OperationStatus;
 import org.apache.hadoop.hbase.regionserver.Region;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
-import org.apache.hadoop.hbase.regionserver.RegionServerServices;
 import org.apache.hadoop.hbase.security.Superusers;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -112,9 +111,15 @@ public class DefaultVisibilityLabelServiceImpl implements 
VisibilityLabelService
 
   @Override
   public void init(RegionCoprocessorEnvironment e) throws IOException {
-assert 

[32/50] [abbrv] hbase git commit: HBASE-18824 Add meaningful comment to HConstants.LATEST_TIMESTAMP to explain why it is MAX_VALUE

2017-10-23 Thread zhangduo
HBASE-18824 Add meaningful comment to HConstants.LATEST_TIMESTAMP to explain 
why it is MAX_VALUE

Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2ee8690b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2ee8690b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2ee8690b

Branch: refs/heads/HBASE-18410
Commit: 2ee8690b47763fd0ed97d47713b1c516633f597b
Parents: 38879fb
Author: Xiang Li 
Authored: Tue Sep 19 23:10:31 2017 +0800
Committer: Chia-Ping Tsai 
Committed: Sun Oct 22 04:47:00 2017 +0800

--
 .../org/apache/hadoop/hbase/HConstants.java | 21 ++--
 1 file changed, 19 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2ee8690b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
index 7577644..a272fc8 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -549,8 +549,25 @@ public final class HConstants {
 
   /**
* Timestamp to use when we want to refer to the latest cell.
-   * This is the timestamp sent by clients when no timestamp is specified on
-   * commit.
+   *
+   * On client side, this is the timestamp set by default when no timestamp is 
specified, to refer to the latest.
+   * On server side, this acts as a notation.
+   * (1) For a cell of Put, which has this notation,
+   * its timestamp will be replaced with server's current time.
+   * (2) For a cell of Delete, which has this notation,
+   * A. If the cell is of {@link KeyValue.Type#Delete}, HBase issues a Get 
operation firstly.
+   *a. When the count of cell it gets is less than the count of cell 
to delete,
+   *   the timestamp of Delete cell will be replaced with server's 
current time.
+   *b. When the count of cell it gets is equal to the count of cell to 
delete,
+   *   the timestamp of Delete cell will be replaced with the latest 
timestamp of cell it gets.
+   *   (c. It is invalid and an exception will be thrown,
+   *   if the count of cell it gets is greater than the count of cell 
to delete,
+   *   as the max version of Get is set to the count of cell to 
delete.)
+   * B. If the cell is of other Delete types, like {@link 
KeyValue.Type#DeleteFamilyVersion},
+   *{@link KeyValue.Type#DeleteColumn}, or {@link 
KeyValue.Type#DeleteFamily},
+   *the timestamp of Delete cell will be replaced with server's 
current time.
+   *
+   * So that is why it is named as "latest" but assigned as the max value of 
Long.
*/
   public static final long LATEST_TIMESTAMP = Long.MAX_VALUE;
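
In client terms (a sketch; row, family, and qualifier names hypothetical):

  Put p = new Put(Bytes.toBytes("row1"));
  p.addColumn(CF, QUAL, Bytes.toBytes("v1")); // no timestamp given: the cell carries LATEST_TIMESTAMP
  table.put(p); // the server rewrites LATEST_TIMESTAMP to its current time on write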
 



[40/50] [abbrv] hbase git commit: HBASE-18873 Move protobufs to private implementation on GlobalQuotaSettings

2017-10-23 Thread zhangduo
HBASE-18873 Move protobufs to private implementation on GlobalQuotaSettings

A hack to "hide" the protobufs, but it's not going to be a trivial
change to remove use of protobufs entirely as they're serialized
into the hbase:quota table.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/81133f89
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/81133f89
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/81133f89

Branch: refs/heads/HBASE-18410
Commit: 81133f89fc9a80fbd03aff5a3b51184eeb90f130
Parents: b7db62c
Author: Josh Elser 
Authored: Wed Oct 11 18:37:42 2017 -0400
Committer: Josh Elser 
Committed: Mon Oct 23 22:37:10 2017 -0400

--
 .../hbase/quotas/QuotaSettingsFactory.java  |   2 +-
 .../hbase/quotas/GlobalQuotaSettings.java   | 290 +---
 .../hbase/quotas/GlobalQuotaSettingsImpl.java   | 332 +++
 .../hadoop/hbase/quotas/MasterQuotaManager.java |  72 ++--
 .../hbase/quotas/TestGlobalQuotaSettings.java   | 122 ---
 .../quotas/TestGlobalQuotaSettingsImpl.java | 122 +++
 6 files changed, 501 insertions(+), 439 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/81133f89/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
index 185365b..2a20c51 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
@@ -116,7 +116,7 @@ public class QuotaSettingsFactory {
 return settings;
   }
 
-  private static List fromThrottle(final String userName, final 
TableName tableName,
+  protected static List fromThrottle(final String userName, 
final TableName tableName,
   final String namespace, final QuotaProtos.Throttle throttle) {
 List settings = new ArrayList<>();
 if (throttle.hasReqNum()) {
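
Callers stay on the public factory; only the protobuf plumbing moves behind the
new Impl class. For example (a sketch; user and limits hypothetical):

  QuotaSettings q = QuotaSettingsFactory.throttleUser(
      "bob", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS);
  admin.setQuota(q); // still serialized into hbase:quota as protobuf under the hood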

http://git-wip-us.apache.org/repos/asf/hbase/blob/81133f89/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
index 079edf0..107523b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
@@ -16,23 +16,12 @@
  */
 package org.apache.hadoop.hbase.quotas;
 
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Objects;
+import java.util.List;
 
-import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HBaseInterfaceAudience;
 import org.apache.hadoop.hbase.TableName;
-import 
org.apache.hadoop.hbase.quotas.QuotaSettingsFactory.QuotaGlobalsSettingsBypass;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest.Builder;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.TimedQuota;
-import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 
@@ -43,28 +32,19 @@ import org.apache.yetus.audience.InterfaceStability;
  */
 @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC})
 @InterfaceStability.Evolving
-public class GlobalQuotaSettings extends QuotaSettings {
-  private final QuotaProtos.Throttle throttleProto;
-  private final Boolean bypassGlobals;
-  private final QuotaProtos.SpaceQuota spaceProto;
+public abstract class GlobalQuotaSettings extends QuotaSettings {
 
-  protected GlobalQuotaSettings(
-  String username, TableName tableName, String namespace, 
QuotaProtos.Quotas quotas) {
-this(username, tableName, namespace,
-(quotas != null && quotas.hasThrottle() ? quotas.getThrottle() : null),
-(quotas != null && quotas.hasBypassGlobals() ? 
quotas.getBypassGlobals() : null),
-(quotas != null && quotas.hasSpace() ? quotas.getSpace() : null));
-  }
-
-  protected 

[36/50] [abbrv] hbase git commit: HBASE-19067 Do not expose getHDFSBlockDistribution in StoreFile.

2017-10-23 Thread zhangduo
HBASE-19067 Do not expose getHDFSBlockDistribution in StoreFile.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/880b26d7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/880b26d7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/880b26d7

Branch: refs/heads/HBASE-18410
Commit: 880b26d7d8678c688d741d991f55bd2245bee345
Parents: c9fdbec
Author: anoopsamjohn 
Authored: Mon Oct 23 17:04:05 2017 +0530
Committer: anoopsamjohn 
Committed: Mon Oct 23 17:04:05 2017 +0530

--
 .../org/apache/hadoop/hbase/coprocessor/RegionObserver.java   | 4 
 .../java/org/apache/hadoop/hbase/regionserver/HStoreFile.java | 5 -
 .../java/org/apache/hadoop/hbase/regionserver/StoreFile.java  | 7 ---
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/880b26d7/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
index ba96a5b..815daf1 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
@@ -958,6 +958,8 @@ public interface RegionObserver {
* @deprecated For Phoenix only, StoreFileReader is not a stable interface.
*/
   @Deprecated
+  // Passing InterfaceAudience.Private args FSDataInputStreamWrapper, 
CacheConfig and Reference.
+  // This is fine as the hook is deprecated any way.
   default StoreFileReader 
preStoreFileReaderOpen(ObserverContext ctx,
   FileSystem fs, Path p, FSDataInputStreamWrapper in, long size, 
CacheConfig cacheConf,
   Reference r, StoreFileReader reader) throws IOException {
@@ -979,6 +981,8 @@ public interface RegionObserver {
* @deprecated For Phoenix only, StoreFileReader is not a stable interface.
*/
   @Deprecated
+  // Passing InterfaceAudience.Private args FSDataInputStreamWrapper, 
CacheConfig and Reference.
+  // This is fine as the hook is deprecated any way.
   default StoreFileReader 
postStoreFileReaderOpen(ObserverContext ctx,
   FileSystem fs, Path p, FSDataInputStreamWrapper in, long size, 
CacheConfig cacheConf,
   Reference r, StoreFileReader reader) throws IOException {

http://git-wip-us.apache.org/repos/asf/hbase/blob/880b26d7/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
index 5301922..0ca01a5 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
@@ -331,7 +331,10 @@ public class HStoreFile implements StoreFile {
 : OptionalLong.of(Bytes.toLong(bulkLoadTimestamp));
   }
 
-  @Override
+  /**
+   * @return the cached value of HDFS blocks distribution. The cached value is 
calculated when store
+   * file is opened.
+   */
   public HDFSBlocksDistribution getHDFSBlockDistribution() {
 return this.fileInfo.getHDFSBlockDistribution();
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/880b26d7/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
index 9e318cd..4f4cfcc 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparator;
 import org.apache.hadoop.hbase.HBaseInterfaceAudience;
-import org.apache.hadoop.hbase.HDFSBlocksDistribution;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 
@@ -118,12 +117,6 @@ public interface StoreFile {
   OptionalLong getBulkLoadTimestamp();
 
   /**
-   * @return the cached value of HDFS blocks distribution. The cached value is 
calculated when store
-   * file is opened.
-   */
-  HDFSBlocksDistribution getHDFSBlockDistribution();
-
-  /**
* @return a length description of 

[33/50] [abbrv] hbase git commit: Add Zheng Hu to pom.xml

2017-10-23 Thread zhangduo
Add Zheng Hu to pom.xml


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/24931044
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/24931044
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/24931044

Branch: refs/heads/HBASE-18410
Commit: 24931044d6ec1a6eda4513102a99c453fe128bd9
Parents: 2ee8690
Author: huzheng 
Authored: Mon Oct 23 13:41:45 2017 +0800
Committer: huzheng 
Committed: Mon Oct 23 13:41:45 2017 +0800

--
 pom.xml | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/24931044/pom.xml
--
diff --git a/pom.xml b/pom.xml
index d7cbca2..0a55b64 100755
--- a/pom.xml
+++ b/pom.xml
@@ -458,6 +458,12 @@
       <timezone>0</timezone>
     </developer>
     <developer>
+      <id>openinx</id>
+      <name>Zheng Hu</name>
+      <email>open...@apache.org</email>
+      <timezone>+8</timezone>
+    </developer>
+    <developer>
       <id>rajeshbabu</id>
       <name>Rajeshbabu Chintaguntla</name>
       <email>rajeshb...@apache.org</email>



[01/50] [abbrv] hbase git commit: HBASE-18945 Make an IA.LimitedPrivate interface for CellComparator (Ram) [Forced Update!]

2017-10-23 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/HBASE-18410 a157c62de -> b5896b7a4 (forced update)


http://git-wip-us.apache.org/repos/asf/hbase/blob/70f4c5da/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
--
diff --git a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
index d7e3f4f..0c51b28 100644
--- a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
+++ b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
@@ -917,7 +917,7 @@ class HBaseContext(@transient sc: SparkContext,
 new WriterLength(0,
   new StoreFileWriter.Builder(conf, new CacheConfig(tempConf), new 
HFileSystem(fs))
 .withBloomType(BloomType.valueOf(familyOptions.bloomType))
-
.withComparator(CellComparator.COMPARATOR).withFileContext(hFileContext)
+
.withComparator(CellComparatorImpl.COMPARATOR).withFileContext(hFileContext)
 .withFilePath(new Path(familydir, "_" + 
UUID.randomUUID.toString.replaceAll("-", "")))
 .withFavoredNodes(favoredNodes).build())
 



[22/50] [abbrv] hbase git commit: HBASE-19014 surefire fails; When writing xml report stdout/stderr ... No such file or directory

2017-10-23 Thread zhangduo
HBASE-19014 surefire fails; When writing xml report stdout/stderr ... No such 
file or directory


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d59ed234
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d59ed234
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d59ed234

Branch: refs/heads/HBASE-18410
Commit: d59ed234ef0ba4f9c61961de306965ff39bec05f
Parents: 8c6ddc1
Author: Chia-Ping Tsai 
Authored: Sat Oct 21 01:22:19 2017 +0800
Committer: Chia-Ping Tsai 
Committed: Sat Oct 21 01:29:38 2017 +0800

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d59ed234/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 8e1d7c7..d7cbca2 100755
--- a/pom.xml
+++ b/pom.xml
@@ -1488,7 +1488,7 @@
 
hbase-rsgroup-${project.version}-tests.jar
 
hbase-mapreduce-${project.version}-tests.jar
 bash
-2.19.1
+2.20.1
 surefire-junit47
 
 false



[47/50] [abbrv] hbase git commit: HBASE-18160 Fix incorrect logic in FilterList.filterKeyValue

2017-10-23 Thread zhangduo
HBASE-18160 Fix incorrect logic in FilterList.filterKeyValue

Signed-off-by: zhangduo 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/24a7ce84
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/24a7ce84
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/24a7ce84

Branch: refs/heads/HBASE-18410
Commit: 24a7ce849f0e951ba0a84337681033c541d46276
Parents: 5c9523b
Author: huzheng 
Authored: Thu Jun 8 15:58:42 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 11:37:45 2017 +0800

--
 .../apache/hadoop/hbase/filter/FilterList.java  | 541 ---
 .../hadoop/hbase/filter/TestFilterList.java | 148 -
 2 files changed, 471 insertions(+), 218 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/24a7ce84/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
index 3ff978d..3147ab0 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
@@ -90,62 +90,53 @@ final public class FilterList extends FilterBase {
   private Cell transformedCell = null;
 
   /**
-   * Constructor that takes a set of {@link Filter}s. The default operator
-   * MUST_PASS_ALL is assumed.
+   * Constructor that takes a set of {@link Filter}s and an operator.
+   * @param operator Operator to process filter set with.
+   * @param rowFilters Set of row filters.
+   */
+  public FilterList(final Operator operator, final List rowFilters) {
+reversed = checkAndGetReversed(rowFilters, reversed);
+this.filters = new ArrayList<>(rowFilters);
+this.operator = operator;
+initPrevListForMustPassOne(rowFilters.size());
+  }
+
+  /**
+   * Constructor that takes a set of {@link Filter}s. The default operator 
MUST_PASS_ALL is assumed.
* All filters are cloned to internal list.
* @param rowFilters list of filters
*/
   public FilterList(final List rowFilters) {
-reversed = getReversed(rowFilters, reversed);
-this.filters = new ArrayList<>(rowFilters);
-initPrevListForMustPassOne(rowFilters.size());
+this(Operator.MUST_PASS_ALL, rowFilters);
   }
 
   /**
-   * Constructor that takes a var arg number of {@link Filter}s. The fefault 
operator
-   * MUST_PASS_ALL is assumed.
+   * Constructor that takes a var arg number of {@link Filter}s. The default 
operator MUST_PASS_ALL
+   * is assumed.
* @param rowFilters
*/
   public FilterList(final Filter... rowFilters) {
-this(Arrays.asList(rowFilters));
+this(Operator.MUST_PASS_ALL, Arrays.asList(rowFilters));
   }
 
   /**
* Constructor that takes an operator.
-   *
* @param operator Operator to process filter set with.
*/
   public FilterList(final Operator operator) {
-this.operator = operator;
-this.filters = new ArrayList<>();
-initPrevListForMustPassOne(filters.size());
-  }
-
-  /**
-   * Constructor that takes a set of {@link Filter}s and an operator.
-   *
-   * @param operator Operator to process filter set with.
-   * @param rowFilters Set of row filters.
-   */
-  public FilterList(final Operator operator, final List rowFilters) {
-this(rowFilters);
-this.operator = operator;
-initPrevListForMustPassOne(rowFilters.size());
+this(operator, new ArrayList<>());
   }
 
   /**
* Constructor that takes a var arg number of {@link Filter}s and an 
operator.
-   *
* @param operator Operator to process filter set with.
* @param rowFilters Filters to use
*/
   public FilterList(final Operator operator, final Filter... rowFilters) {
-this(rowFilters);
-this.operator = operator;
-initPrevListForMustPassOne(rowFilters.length);
+this(operator, Arrays.asList(rowFilters));
   }
 
-  public void initPrevListForMustPassOne(int size) {
+  private void initPrevListForMustPassOne(int size) {
 if (operator == Operator.MUST_PASS_ONE) {
   if (this.prevFilterRCList == null) {
 prevFilterRCList = new ArrayList<>(Collections.nCopies(size, null));
@@ -156,10 +147,8 @@ final public class FilterList extends FilterBase {
 }
   }
 
-
   /**
* Get the operator.
-   *
* @return operator
*/
   public Operator getOperator() {
@@ -168,7 +157,6 @@ final public class FilterList extends FilterBase {
 
   /**
* Get the filters.
-   *
* @return filters
*/
   public List getFilters() {
@@ -183,33 +171,22 @@ final public class FilterList extends FilterBase {
 return filters.isEmpty();
   }
 
-  private 

[44/50] [abbrv] hbase git commit: HBASE-17678 FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-10-23 Thread zhangduo
HBASE-17678 FilterList with MUST_PASS_ONE may lead to redundant cells returned

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/49a877db
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/49a877db
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/49a877db

Branch: refs/heads/HBASE-18410
Commit: 49a877db30b8f5ea60af9340d83dec9c11a607d5
Parents: 2ebb7da
Author: huzheng 
Authored: Sat May 27 16:58:00 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 11:30:34 2017 +0800

--
 .../apache/hadoop/hbase/filter/FilterList.java  |  74 +-
 .../hadoop/hbase/filter/TestFilterList.java | 136 +--
 2 files changed, 200 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/49a877db/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
index 2f11472..3493082 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
@@ -67,6 +67,14 @@ final public class FilterList extends FilterBase {
   private final List filters;
   private Collection seekHintFilters = new ArrayList();
 
+  /**
+   * Save previous return code and previous cell for every filter in filter 
list. For MUST_PASS_ONE,
+   * we use the previous return code to decide whether we should pass current 
cell encountered to
+   * the filter. For MUST_PASS_ALL, the two list are meaningless.
+   */
+  private List prevFilterRCList = null;
+  private List prevCellList = null;
+
   /** Reference Cell used by {@link #transformCell(Cell)} for validation 
purpose. */
   private Cell referenceCell = null;
 
@@ -88,6 +96,7 @@ final public class FilterList extends FilterBase {
   public FilterList(final List rowFilters) {
 reversed = getReversed(rowFilters, reversed);
 this.filters = new ArrayList<>(rowFilters);
+initPrevListForMustPassOne(rowFilters.size());
   }
 
   /**
@@ -107,6 +116,7 @@ final public class FilterList extends FilterBase {
   public FilterList(final Operator operator) {
 this.operator = operator;
 this.filters = new ArrayList<>();
+initPrevListForMustPassOne(filters.size());
   }
 
   /**
@@ -118,6 +128,7 @@ final public class FilterList extends FilterBase {
   public FilterList(final Operator operator, final List rowFilters) {
 this(rowFilters);
 this.operator = operator;
+initPrevListForMustPassOne(rowFilters.size());
   }
 
   /**
@@ -129,8 +140,21 @@ final public class FilterList extends FilterBase {
   public FilterList(final Operator operator, final Filter... rowFilters) {
 this(rowFilters);
 this.operator = operator;
+initPrevListForMustPassOne(rowFilters.length);
+  }
+
+  public void initPrevListForMustPassOne(int size) {
+if (operator == Operator.MUST_PASS_ONE) {
+  if (this.prevFilterRCList == null) {
+prevFilterRCList = new ArrayList<>(Collections.nCopies(size, null));
+  }
+  if (this.prevCellList == null) {
+prevCellList = new ArrayList<>(Collections.nCopies(size, null));
+  }
+}
   }
 
+
   /**
* Get the operator.
*
@@ -185,6 +209,10 @@ final public class FilterList extends FilterBase {
   public void addFilter(List filters) {
 checkReversed(filters, isReversed());
 this.filters.addAll(filters);
+if (operator == Operator.MUST_PASS_ONE) {
+  this.prevFilterRCList.addAll(Collections.nCopies(filters.size(), null));
+  this.prevCellList.addAll(Collections.nCopies(filters.size(), null));
+}
   }
 
   /**
@@ -201,6 +229,10 @@ final public class FilterList extends FilterBase {
 int listize = filters.size();
 for (int i = 0; i < listize; i++) {
   filters.get(i).reset();
+  if (operator == Operator.MUST_PASS_ONE) {
+prevFilterRCList.set(i, null);
+prevCellList.set(i, null);
+  }
 }
 seekHintFilters.clear();
   }
@@ -283,6 +315,41 @@ final public class FilterList extends FilterBase {
 return this.transformedCell;
   }
 
+  /**
+   * For MUST_PASS_ONE, we cannot make sure that when filter-A in filter list 
return NEXT_COL then
+   * the next cell passing to filterList will be the first cell in next 
column, because if filter-B
+   * in filter list return SKIP, then the filter list will return SKIP. In 
this case, we should pass
+   * the cell following the previous cell, and it's possible that the next 
cell has the same column
+   * as the previous cell even if filter-A has 
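
The gating that comment describes reduces to a per-filter check against the
saved state, roughly (simplified from the patch):

  // Should currentCell reach filter i, given what that filter returned last time?
  switch (prevFilterRCList.get(i)) {
    case NEXT_COL:
      return !CellUtil.matchingRowColumn(prevCell, currentCell); // wait for a new column
    case NEXT_ROW:
      return !CellUtil.matchingRows(prevCell, currentCell);      // wait for a new row
    default:
      return true; // INCLUDE, SKIP, etc.: always pass the cell
  }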

[27/50] [abbrv] hbase git commit: HBASE-19039 refactor shadedjars test to only run on java changes.

2017-10-23 Thread zhangduo
HBASE-19039 refactor shadedjars test to only run on java changes.

Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b10ad9e9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b10ad9e9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b10ad9e9

Branch: refs/heads/HBASE-18410
Commit: b10ad9e97f67b462a4ab58ee1d449c9c319c4176
Parents: dd4dbae
Author: Sean Busbey 
Authored: Fri Oct 20 14:39:03 2017 -0500
Committer: Sean Busbey 
Committed: Fri Oct 20 19:35:20 2017 -0500

--
 dev-support/hbase-personality.sh | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b10ad9e9/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 88e773e..dcf4f7a 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -171,12 +171,19 @@ function shadedjars_initialize
 {
   yetus_debug "initializing shaded client checks."
   maven_add_install shadedjars
-  add_test shadedjars
 }
 
-function shadedjars_clean
+## @description  only run the test if java changes.
+## @audience private
+## @stabilityevolving
+## @paramfilename
+function shadedjars_filefilter
 {
-  "${MAVEN}" "${MAVEN_ARGS[@]}" clean -fae -pl 
hbase_shaded/hbase-shaded-check-invariants -am -Prelease
+  local filename=$1
+
+  if [[ ${filename} =~ \.java$ ]] || [[ ${filename} =~ pom.xml$ ]]; then
+add_test shadedjars
+  fi
 }
 
 ## @description test the shaded client artifacts
@@ -188,6 +195,10 @@ function shadedjars_rebuild
   local repostatus=$1
   local logfile="${PATCH_DIR}/${repostatus}-shadedjars.txt"
 
+  if ! verify_needed_test shadedjars; then
+return 0
+  fi
+
   big_console_header "Checking shaded client builds on ${repostatus}"
 
   echo_and_redirect "${logfile}" \



[09/50] [abbrv] hbase git commit: HBASE-19001 Remove the hooks in RegionObserver which are designed to construct a StoreScanner which is marked as IA.Private

2017-10-23 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/e804dd0b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
index ab9bfc59..c67d7bf 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
@@ -17,31 +17,25 @@
  */
 package org.apache.hadoop.hbase.util;
 // this is deliberately not in the o.a.h.h.regionserver package
+
 // in order to make sure all required classes/method are available
 
 import static org.junit.Assert.assertEquals;
 
 import java.io.IOException;
 import java.util.Collection;
-import java.util.HashMap;
 import java.util.List;
-import java.util.Map;
-import java.util.NavigableSet;
 import java.util.Optional;
-import java.util.OptionalInt;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.function.Predicate;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HBaseCommonTestingUtility;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.HTableDescriptor;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.KeyValueUtil;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.Put;
@@ -53,11 +47,12 @@ import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.RegionObserver;
-import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.DelegatingInternalScanner;
 import org.apache.hadoop.hbase.regionserver.InternalScanner;
-import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
-import org.apache.hadoop.hbase.regionserver.ScanInfo;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
 import org.apache.hadoop.hbase.regionserver.ScanType;
+import org.apache.hadoop.hbase.regionserver.ScannerContext;
 import org.apache.hadoop.hbase.regionserver.Store;
 import org.apache.hadoop.hbase.regionserver.StoreScanner;
 import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
@@ -73,7 +68,7 @@ import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
 
-@Category({MiscTests.class, MediumTests.class})
+@Category({ MiscTests.class, MediumTests.class })
 @RunWith(Parameterized.class)
 public class TestCoprocessorScanPolicy {
   protected final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
@@ -84,8 +79,7 @@ public class TestCoprocessorScanPolicy {
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
 Configuration conf = TEST_UTIL.getConfiguration();
-    conf.setStrings(CoprocessorHost.REGION_COPROCESSOR_CONF_KEY,
-        ScanObserver.class.getName());
+    conf.setStrings(CoprocessorHost.REGION_COPROCESSOR_CONF_KEY, ScanObserver.class.getName());
     TEST_UTIL.startMiniCluster();
   }
 
@@ -106,49 +100,58 @@ public class TestCoprocessorScanPolicy {
 
   @Test
   public void testBaseCases() throws Exception {
-    TableName tableName =
-        TableName.valueOf("baseCases");
+    TableName tableName = TableName.valueOf("baseCases");
     if (TEST_UTIL.getAdmin().tableExists(tableName)) {
       TEST_UTIL.deleteTable(tableName);
     }
-    Table t = TEST_UTIL.createTable(tableName, F, 1);
-    // set the version override to 2
-    Put p = new Put(R);
-    p.setAttribute("versions", new byte[]{});
-    p.addColumn(F, tableName.getName(), Bytes.toBytes(2));
-    t.put(p);
-
+    Table t = TEST_UTIL.createTable(tableName, F, 10);
+    // insert 3 versions
     long now = EnvironmentEdgeManager.currentTime();
-
-    // insert 2 versions
-    p = new Put(R);
+    Put p = new Put(R);
     p.addColumn(F, Q, now, Q);
     t.put(p);
     p = new Put(R);
     p.addColumn(F, Q, now + 1, Q);
     t.put(p);
+    p = new Put(R);
+    p.addColumn(F, Q, now + 2, Q);
+    t.put(p);
+
     Get g = new Get(R);
-    g.setMaxVersions(10);
+    g.readVersions(10);
     Result r = t.get(g);
+    assertEquals(3, r.size());
+
+    TEST_UTIL.flush(tableName);
+

[20/50] [abbrv] hbase git commit: HBASE-16338 Remove Jackson1 deps

2017-10-23 Thread zhangduo
HBASE-16338 Remove Jackson1 deps

* Change imports from org.codehaus to com.fasterxml
* Exclude transitive jackson1 from hadoop and others
* Minor test cleanup to add assert messages, fix some parameter order
* Add anti-pattern check for using jackson 1 imports
* Add explicit non-null serialization directive to ScannerModel

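For reference, the org.codehaus-to-com.fasterxml move is largely a package swap plus annotation updates; below is a minimal Jackson 2 sketch (hypothetical Status bean, not part of this patch) of the same explicit non-null serialization directive this patch adds to ScannerModel:

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;

// Jackson 2 equivalent of the old org.codehaus.jackson.map.ObjectMapper usage.
@JsonInclude(JsonInclude.Include.NON_NULL) // emit only non-null fields
public class Status {
  public String state;  // e.g. "RUNNING"
  public String detail; // left null, so omitted from the serialized JSON

  public static void main(String[] args) throws Exception {
    Status s = new Status();
    s.state = "RUNNING";
    // prints {"state":"RUNNING"}; "detail" is suppressed by NON_NULL
    System.out.println(new ObjectMapper().writeValueAsString(s));
  }
}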

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5facaded
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5facaded
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5facaded

Branch: refs/heads/HBASE-18410
Commit: 5facaded902a13556952b1f9d26b768cb86e6599
Parents: a43a00e
Author: Mike Drob 
Authored: Mon Oct 2 16:31:48 2017 -0500
Committer: Mike Drob 
Committed: Fri Oct 20 09:20:12 2017 -0500

--
 dev-support/hbase-personality.sh|   6 ++
 hbase-client/pom.xml|   8 +-
 .../apache/hadoop/hbase/util/JsonMapper.java|   2 +-
 .../hadoop/hbase/client/TestOperation.java  |   2 +-
 hbase-it/pom.xml|   4 +
 .../hadoop/hbase/RESTApiClusterManager.java |  18 ++--
 hbase-mapreduce/pom.xml |  12 +--
 .../hadoop/hbase/PerformanceEvaluation.java |  10 +-
 .../hadoop/hbase/TestPerformanceEvaluation.java |   6 +-
 .../src/main/resources/supplemental-models.xml  |  13 ---
 hbase-rest/pom.xml  |  21 ++--
 .../hbase/rest/ProtobufStreamingOutput.java | 105 ++
 .../hbase/rest/ProtobufStreamingUtil.java   | 106 ---
 .../apache/hadoop/hbase/rest/RESTServer.java|   4 +-
 .../hadoop/hbase/rest/TableScanResource.java|  26 ++---
 .../hadoop/hbase/rest/model/CellModel.java  |   2 +-
 .../hbase/rest/model/ColumnSchemaModel.java |   5 +-
 .../hbase/rest/model/NamespacesModel.java   |   3 +-
 .../hadoop/hbase/rest/model/RowModel.java   |   2 +-
 .../hadoop/hbase/rest/model/ScannerModel.java   |   6 +-
 .../rest/model/StorageClusterStatusModel.java   |   6 ++
 .../rest/model/StorageClusterVersionModel.java  |   3 -
 .../hbase/rest/model/TableSchemaModel.java  |   7 +-
 .../hbase/rest/HBaseRESTTestingUtility.java |   5 +-
 .../hadoop/hbase/rest/RowResourceBase.java  |   4 +-
 .../apache/hadoop/hbase/rest/TestDeleteRow.java |   2 +-
 .../hadoop/hbase/rest/TestMultiRowResource.java |   9 +-
 .../rest/TestNamespacesInstanceResource.java|   9 +-
 .../hadoop/hbase/rest/TestSchemaResource.java   |  52 ++---
 .../apache/hadoop/hbase/rest/TestTableScan.java |  60 +++
 .../hadoop/hbase/rest/TestVersionResource.java  |  21 ++--
 .../hbase/rest/model/TestColumnSchemaModel.java |  16 +--
 .../hadoop/hbase/rest/model/TestModelBase.java  |   6 +-
 .../hbase/rest/model/TestTableSchemaModel.java  |   3 +
 hbase-server/pom.xml|  16 +--
 .../hadoop/hbase/io/hfile/AgeSnapshot.java  |   2 +-
 .../hadoop/hbase/io/hfile/BlockCacheUtil.java   |  17 ++-
 .../hadoop/hbase/io/hfile/LruBlockCache.java|   5 +-
 .../hbase/io/hfile/bucket/BucketAllocator.java  |   2 +-
 .../org/apache/hadoop/hbase/ipc/RpcServer.java  |   2 +-
 .../hbase/monitoring/MonitoredTaskImpl.java |   2 +-
 .../org/apache/hadoop/hbase/util/JSONBean.java  |   6 +-
 .../hadoop/hbase/util/JSONMetricUtil.java   |  10 +-
 .../hadoop/hbase/wal/WALPrettyPrinter.java  |   2 +-
 .../hbase-webapps/master/processMaster.jsp  |   2 +-
 .../hbase-webapps/master/processRS.jsp  |   2 +-
 .../hbase-webapps/regionserver/processRS.jsp|   2 +-
 .../hbase/io/hfile/TestBlockCacheReporting.java |   4 +-
 .../hadoop/hbase/util/TestJSONMetricUtil.java   |  33 +++---
 hbase-shaded/hbase-shaded-mapreduce/pom.xml |   4 -
 hbase-shaded/pom.xml|   4 +
 hbase-shell/src/main/ruby/hbase/taskmonitor.rb  |   2 +-
 hbase-spark/pom.xml |  20 
 pom.xml |  97 -
 54 files changed, 417 insertions(+), 381 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5facaded/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 9b23e11..27c2169 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -428,6 +428,12 @@ function hbaseanti_patchfile
 ((result=result+1))
   fi
 
+  warnings=$(${GREP} 'import org.codehaus.jackson' "${patchfile}")
+  if [[ ${warnings} -gt 0 ]]; then
+    add_vote_table -1 hbaseanti "" "The patch appears use Jackson 1 classes/annotations: ${warnings}."
+    ((result=result+1))
+  fi
+
   if [[ ${result} -gt 0 ]]; then
 return 1
   fi


[23/50] [abbrv] hbase git commit: HBASE-19043 Purge TableWrapper and CoprocessorHConnnection Also purge Coprocessor#getTable... Let Coprocessors manage their Table Connections in hbase2.0.0.

2017-10-23 Thread zhangduo
HBASE-19043 Purge TableWrapper and CoprocessorHConnnection
Also purge Coprocessor#getTable... Let Coprocessors manage their
Table Connections in hbase2.0.0.

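With the wrappers gone, a coprocessor owns its Table plumbing outright. A rough sketch of the replacement pattern (hypothetical helper, not from this commit), using only public client API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Instead of CoprocessorEnvironment#getTable(...), the coprocessor creates
// (and later closes) its own Connection via the public ConnectionFactory.
public class SideTableLookup {
  public byte[] lookup(Configuration conf, byte[] row) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("side_table"))) {
      return table.get(new Get(row)).getValue(Bytes.toBytes("f"), Bytes.toBytes("q"));
    }
  }
}

In a real coprocessor the Connection would be created once in start() and closed in stop(); opening one per call is shown only to keep the sketch self-contained.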

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d7985412
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d7985412
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d7985412

Branch: refs/heads/HBASE-18410
Commit: d7985412610b612c09cf377ab87963e897c72afa
Parents: d59ed23
Author: Michael Stack 
Authored: Wed Oct 18 21:45:39 2017 -0700
Committer: Michael Stack 
Committed: Fri Oct 20 11:06:10 2017 -0700

--
 .../hadoop/hbase/CoprocessorEnvironment.java|  15 -
 .../hbase/client/CoprocessorHConnection.java| 105 --
 .../hadoop/hbase/client/HTableWrapper.java  | 346 --
 .../hbase/coprocessor/BaseEnvironment.java  |  44 ---
 .../hbase/security/access/AccessController.java |  82 +++--
 .../hbase/coprocessor/TestCoprocessorHost.java  |  13 -
 .../hbase/coprocessor/TestHTableWrapper.java| 362 ---
 .../coprocessor/TestOpenTableInCoprocessor.java |  28 +-
 .../security/token/TestTokenAuthentication.java |  15 -
 9 files changed, 70 insertions(+), 940 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d7985412/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
index aabf3b5..4022b4b 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
@@ -20,11 +20,9 @@
 package org.apache.hadoop.hbase;
 
 import java.io.IOException;
-import java.util.concurrent.ExecutorService;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.hadoop.hbase.client.Table;
 
 /**
  * Coprocessor environment state.
@@ -51,19 +49,6 @@ public interface CoprocessorEnvironment {
   Configuration getConfiguration();
 
   /**
-   * @return an interface for accessing the given table
-   * @throws IOException
-   */
-  Table getTable(TableName tableName) throws IOException;
-
-  /**
-   * @return an interface for accessing the given table using the passed executor to run batch
-   * operations
-   * @throws IOException
-   */
-  Table getTable(TableName tableName, ExecutorService service) throws IOException;
-
-  /**
    * @return the classloader for the loaded coprocessor instance
    */
   ClassLoader getClassLoader();

http://git-wip-us.apache.org/repos/asf/hbase/blob/d7985412/hbase-server/src/main/java/org/apache/hadoop/hbase/client/CoprocessorHConnection.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/client/CoprocessorHConnection.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/client/CoprocessorHConnection.java
deleted file mode 100644
index c87c56e..0000000
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/client/CoprocessorHConnection.java
+++ /dev/null
@@ -1,105 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.client;
-
-import java.io.IOException;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.CoprocessorEnvironment;
-import org.apache.hadoop.hbase.ServerName;
-import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.yetus.audience.InterfaceStability;
-import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
-import org.apache.hadoop.hbase.regionserver.CoprocessorRegionServerServices;
-import org.apache.hadoop.hbase.regionserver.HRegionServer;
-import org.apache.hadoop.hbase.security.UserProvider;
-
-/**
- * Connection to an 

[07/50] [abbrv] hbase git commit: HBASE-18350 RSGroups are broken under AMv2

2017-10-23 Thread zhangduo
HBASE-18350 RSGroups are broken under AMv2

- Moving a table to an RSG was buggy, because it left the table unassigned.
  Now it is fixed: we immediately assign it to an appropriate RS
  (MoveRegionProcedure).
- The table was locked while moving, but the unassign operation hung, because
  locked table queues are not scheduled while locked. Fixed.
- ProcedureSyncWait was buggy, because it searched for the procId in the
  executor, but the executor does not store the return values of internal
  operations (they are stored, but immediately removed by the cleaner).
- list_rsgroups in the shell now also shows the assigned tables and servers.

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/41cc9a12
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/41cc9a12
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/41cc9a12

Branch: refs/heads/HBASE-18410
Commit: 41cc9a125f0074bdb9633d873f5bc2219ca1fb73
Parents: e1941aa
Author: Balazs Meszaros 
Authored: Tue Oct 10 09:24:51 2017 +0200
Committer: Michael Stack 
Committed: Tue Oct 17 13:58:36 2017 -0700

--
 .../hbase/rsgroup/RSGroupAdminServer.java   | 156 +--
 .../hbase/rsgroup/RSGroupBasedLoadBalancer.java |  16 +-
 .../hadoop/hbase/rsgroup/TestRSGroups.java  |  16 +-
 .../hbase/rsgroup/TestRSGroupsOfflineMode.java  |   3 +-
 .../master/assignment/AssignmentManager.java|  25 +--
 .../master/procedure/ProcedureSyncWait.java |  50 +++---
 .../resources/hbase-webapps/master/table.jsp|   6 +-
 .../src/main/ruby/hbase/rsgroup_admin.rb|  29 +---
 .../src/main/ruby/shell/commands/get_rsgroup.rb |  14 +-
 .../main/ruby/shell/commands/list_rsgroups.rb   |  37 -
 .../ruby/shell/commands/move_servers_rsgroup.rb |   3 +-
 11 files changed, 183 insertions(+), 172 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/41cc9a12/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminServer.java
--
diff --git 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminServer.java
 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminServer.java
index b13dafd..3c82d76 100644
--- 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminServer.java
+++ 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminServer.java
@@ -44,13 +44,10 @@ import org.apache.hadoop.hbase.master.RegionState;
 import org.apache.hadoop.hbase.master.ServerManager;
 import org.apache.hadoop.hbase.master.assignment.AssignmentManager;
 import org.apache.hadoop.hbase.master.assignment.RegionStates.RegionStateNode;
-import org.apache.hadoop.hbase.master.locking.LockManager;
 import org.apache.hadoop.hbase.net.Address;
-import org.apache.hadoop.hbase.procedure2.LockType;
-import org.apache.yetus.audience.InterfaceAudience;
-
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Maps;
+import org.apache.yetus.audience.InterfaceAudience;
 
 /**
  * Service to support Region Server Grouping (HBase-6721).
@@ -88,10 +85,10 @@ public class RSGroupAdminServer implements RSGroupAdmin {
     for(ServerName server: master.getServerManager().getOnlineServers().keySet()) {
       onlineServers.add(server.getAddress());
     }
-    for (Address el: servers) {
-      if (!onlineServers.contains(el)) {
+    for (Address address: servers) {
+      if (!onlineServers.contains(address)) {
         throw new ConstraintException(
-            "Server " + el + " is not an online server in 'default' RSGroup.");
+            "Server " + address + " is not an online server in 'default' RSGroup.");
       }
     }
   }
@@ -192,18 +189,20 @@ public class RSGroupAdminServer implements RSGroupAdmin {
   }
 
   /**
+   * Moves every region from servers which are currently located on these servers,
+   * but should not be located there.
    * @param servers the servers that will move to new group
+   * @param tables these tables will be kept on the servers, others will be moved
    * @param targetGroupName the target group name
-   * @param tables The regions of tables assigned to these servers will not unassign
    * @throws IOException
    */
-  private void unassignRegionFromServers(Set<Address> servers, String targetGroupName,
-      Set<TableName> tables) throws IOException {
-    boolean foundRegionsToUnassign;
+  private void moveRegionsFromServers(Set<Address> servers, Set<TableName> tables,
+      String targetGroupName) throws IOException {
+    boolean foundRegionsToMove;
     RSGroupInfo targetGrp = getRSGroupInfo(targetGroupName);
     Set<Address> allSevers = new 

[28/50] [abbrv] hbase git commit: HBASE-19058. The wget isn't installed in building docker image

2017-10-23 Thread zhangduo
HBASE-19058. The wget isn't installed in building docker image

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/cb5c4776
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/cb5c4776
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/cb5c4776

Branch: refs/heads/HBASE-18410
Commit: cb5c4776deee270ea21afc52d4ba70d9474d8a8a
Parents: b10ad9e
Author: Chia-Ping Tsai 
Authored: Fri Oct 20 20:32:27 2017 -0500
Committer: Sean Busbey 
Committed: Fri Oct 20 20:35:33 2017 -0500

--
 dev-support/docker/Dockerfile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/cb5c4776/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index c23c70d..da5f32e 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -62,7 +62,8 @@ RUN apt-get -q update && apt-get -q install --no-install-recommends -y \
     python-pip \
     rsync \
     snappy \
-    zlib1g-dev
+    zlib1g-dev \
+    wget
 
 ###
 # OpenJDK 8



[26/50] [abbrv] hbase git commit: HBASE-19060 precommit plugin test 'hadoopcheck' should only run when java or maven files change.

2017-10-23 Thread zhangduo
HBASE-19060 precommit plugin test 'hadoopcheck' should only run when java or maven files change.

Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/dd4dbae7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/dd4dbae7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/dd4dbae7

Branch: refs/heads/HBASE-18410
Commit: dd4dbae7643a7fb0beaa4e75a51d5a9c921c89b2
Parents: 89d3b0b
Author: Sean Busbey 
Authored: Fri Oct 20 11:08:35 2017 -0500
Committer: Sean Busbey 
Committed: Fri Oct 20 19:35:14 2017 -0500

--
 dev-support/hbase-personality.sh | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/dd4dbae7/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 27c2169..88e773e 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -218,7 +218,7 @@ function hadoopcheck_filefilter
 {
   local filename=$1
 
-  if [[ ${filename} =~ \.java$ ]]; then
+  if [[ ${filename} =~ \.java$ ]] || [[ ${filename} =~ pom.xml$ ]]; then
 add_test hadoopcheck
   fi
 }
@@ -241,6 +241,10 @@ function hadoopcheck_rebuild
 return 0
   fi
 
+  if ! verify_needed_test hadoopcheck; then
+    return 0
+  fi
+
   big_console_header "Compiling against various Hadoop versions"
 
   # All supported Hadoop versions that we want to test the compilation with
@@ -317,7 +321,7 @@ function hbaseprotoc_filefilter
   fi
 }
 
-## @description  hadoopcheck test
+## @description  check hbase proto compilation
 ## @audience     private
 ## @stability    evolving
 ## @param        repostatus



[41/50] [abbrv] hbase git commit: HBASE-19069 Do not wrap the original CompactionLifeCycleTracker when calling CP hooks

2017-10-23 Thread zhangduo
HBASE-19069 Do not wrap the original CompactionLifeCycleTracker when calling CP hooks


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/37b29e90
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/37b29e90
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/37b29e90

Branch: refs/heads/HBASE-18410
Commit: 37b29e909defecdc580112ce6cd306710d13e9e2
Parents: 81133f8
Author: zhangduo 
Authored: Mon Oct 23 21:10:44 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 10:56:14 2017 +0800

--
 .../hadoop/hbase/regionserver/CompactSplit.java | 135 ++-
 .../TestCompactionLifeCycleTracker.java |   9 +-
 2 files changed, 80 insertions(+), 64 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/37b29e90/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
index b82b346..0749f85 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
@@ -237,80 +237,73 @@ public class CompactSplit implements CompactionRequester, PropagatingConfigurati
 }
   }
 
-  // A compaction life cycle tracker to trace the execution of all the compactions triggered by one
-  // request and delegate to the source CompactionLifeCycleTracker. It will call completed method if
-  // all the compactions are finished.
-  private static final class AggregatingCompactionLifeCycleTracker
-      implements CompactionLifeCycleTracker {
+  private interface CompactionCompleteTracker {
+
+    default void completed(Store store) {
+    }
+  }
+
+  private static final CompactionCompleteTracker DUMMY_COMPLETE_TRACKER =
+      new CompactionCompleteTracker() {
+      };
+
+  private static final class AggregatingCompleteTracker implements CompactionCompleteTracker {
 
 private final CompactionLifeCycleTracker tracker;
 
 private final AtomicInteger remaining;
 
-    public AggregatingCompactionLifeCycleTracker(CompactionLifeCycleTracker tracker,
-        int numberOfStores) {
+    public AggregatingCompleteTracker(CompactionLifeCycleTracker tracker, int numberOfStores) {
   this.tracker = tracker;
   this.remaining = new AtomicInteger(numberOfStores);
 }
 
-    private void tryCompleted() {
+    @Override
+    public void completed(Store store) {
       if (remaining.decrementAndGet() == 0) {
         tracker.completed();
       }
     }
-
-    @Override
-    public void notExecuted(Store store, String reason) {
-      tracker.notExecuted(store, reason);
-      tryCompleted();
-    }
-
-    @Override
-    public void beforeExecution(Store store) {
-      tracker.beforeExecution(store);
-    }
-
-    @Override
-    public void afterExecution(Store store) {
-      tracker.afterExecution(store);
-      tryCompleted();
-    }
   }
 
-  private CompactionLifeCycleTracker wrap(CompactionLifeCycleTracker tracker,
+  private CompactionCompleteTracker getCompleteTracker(CompactionLifeCycleTracker tracker,
       IntSupplier numberOfStores) {
     if (tracker == CompactionLifeCycleTracker.DUMMY) {
       // a simple optimization to avoid creating unnecessary objects as usually we do not care about
       // the life cycle of a compaction.
-      return tracker;
+      return DUMMY_COMPLETE_TRACKER;
     } else {
-      return new AggregatingCompactionLifeCycleTracker(tracker, numberOfStores.getAsInt());
+      return new AggregatingCompleteTracker(tracker, numberOfStores.getAsInt());
     }
   }
 
   @Override
   public synchronized void requestCompaction(HRegion region, String why, int priority,
       CompactionLifeCycleTracker tracker, User user) throws IOException {
-    requestCompactionInternal(region, why, priority, true,
-      wrap(tracker, () -> region.getTableDescriptor().getColumnFamilyCount()), user);
+    requestCompactionInternal(region, why, priority, true, tracker,
+      getCompleteTracker(tracker, () -> region.getTableDescriptor().getColumnFamilyCount()), user);
   }
 
   @Override
   public synchronized void requestCompaction(HRegion region, HStore store, String why, int priority,
       CompactionLifeCycleTracker tracker, User user) throws IOException {
-    requestCompactionInternal(region, store, why, priority, true, wrap(tracker, () -> 1), user);
+    requestCompactionInternal(region, store, why, priority, true, tracker,
+      getCompleteTracker(tracker, () -> 1), user);
   }
 
   private void 

[05/50] [abbrv] hbase git commit: HBSE-18945 Make a IA.LimitedPrivate interface for CellComparator (Ram)

2017-10-23 Thread zhangduo
HBSE-18945 Make a IA.LimitedPrivate interface for CellComparator (Ram)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/70f4c5da
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/70f4c5da
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/70f4c5da

Branch: refs/heads/HBASE-18410
Commit: 70f4c5da475a221b28e3516a23f35fc6098d4044
Parents: 9f61f8b
Author: Ramkrishna 
Authored: Tue Oct 17 23:17:07 2017 +0530
Committer: Ramkrishna 
Committed: Tue Oct 17 23:17:07 2017 +0530

--
 .../hbase/backup/impl/BackupSystemTable.java|  15 +-
 .../hadoop/hbase/client/ConnectionUtils.java|   5 +-
 .../org/apache/hadoop/hbase/client/Put.java |   1 -
 .../org/apache/hadoop/hbase/client/Result.java  |  11 +-
 .../hbase/filter/ColumnPaginationFilter.java|   3 +-
 .../hadoop/hbase/filter/ColumnRangeFilter.java  |   5 +-
 .../hadoop/hbase/filter/CompareFilter.java  |  18 +-
 .../apache/hadoop/hbase/filter/FilterList.java  |   6 +-
 .../hadoop/hbase/filter/FuzzyRowFilter.java |   4 +-
 .../hbase/filter/InclusiveStopFilter.java   |   4 +-
 .../hbase/filter/SingleColumnValueFilter.java   |   3 +-
 .../hbase/client/TestClientNoCluster.java   |   4 +-
 .../hadoop/hbase/client/TestOperation.java  |  10 +-
 .../hadoop/hbase/filter/TestComparators.java|  38 +-
 .../hbase/shaded/protobuf/TestProtobufUtil.java |   4 +-
 .../org/apache/hadoop/hbase/CellComparator.java | 653 ++-
 .../apache/hadoop/hbase/CellComparatorImpl.java | 381 +++
 .../java/org/apache/hadoop/hbase/CellUtil.java  | 306 -
 .../java/org/apache/hadoop/hbase/KeyValue.java  |  21 +-
 .../io/encoding/BufferedDataBlockEncoder.java   |   5 +-
 .../hbase/io/encoding/DataBlockEncoder.java |   3 +-
 .../hbase/io/encoding/RowIndexCodecV1.java  |   3 +-
 .../hbase/io/encoding/RowIndexEncoderV1.java|   3 +-
 .../hbase/io/encoding/RowIndexSeekerV1.java |   4 +-
 .../apache/hadoop/hbase/TestCellComparator.java |  28 +-
 .../org/apache/hadoop/hbase/TestKeyValue.java   |  48 +-
 .../hadoop/hbase/util/RedundantKVGenerator.java |   6 +-
 .../mapreduce/IntegrationTestImportTsv.java |   6 +-
 .../hadoop/hbase/mapreduce/CellSortReducer.java |   4 +-
 .../hbase/mapreduce/HFileOutputFormat2.java |   6 +-
 .../apache/hadoop/hbase/mapreduce/Import.java   |   4 +-
 .../hadoop/hbase/mapreduce/PutSortReducer.java  |   4 +-
 .../hadoop/hbase/mapreduce/SyncTable.java   |   9 +-
 .../hadoop/hbase/mapreduce/TextSortReducer.java |   4 +-
 .../hbase/codec/prefixtree/PrefixTreeCodec.java |   2 +-
 .../decode/PrefixTreeArrayScanner.java  |   3 +-
 .../codec/prefixtree/decode/PrefixTreeCell.java |   3 +-
 .../row/data/TestRowDataNumberStrings.java  |   4 +-
 .../hadoop/hbase/io/HalfStoreFileReader.java|  13 +-
 .../hadoop/hbase/io/hfile/FixedFileTrailer.java |  15 +-
 .../org/apache/hadoop/hbase/io/hfile/HFile.java |   3 +-
 .../hadoop/hbase/io/hfile/HFileBlockIndex.java  |   3 +-
 .../hbase/io/hfile/HFilePrettyPrinter.java  |  10 +-
 .../hadoop/hbase/io/hfile/HFileReaderImpl.java  |  14 +-
 .../hadoop/hbase/io/hfile/HFileWriterImpl.java  |  11 +-
 .../org/apache/hadoop/hbase/mob/MobUtils.java   |   4 +-
 .../compactions/PartitionedMobCompactor.java|   4 +-
 .../regionserver/CellArrayImmutableSegment.java |   1 +
 .../regionserver/CellChunkImmutableSegment.java |   3 +-
 .../hbase/regionserver/DefaultMemStore.java |   3 +-
 .../hadoop/hbase/regionserver/HRegion.java  |  10 +-
 .../hadoop/hbase/regionserver/HStore.java   |   9 +-
 .../hbase/regionserver/ImmutableSegment.java|   2 +-
 .../apache/hadoop/hbase/regionserver/Store.java |   5 +-
 .../hadoop/hbase/regionserver/StoreFile.java|   4 +-
 .../hbase/regionserver/StoreFileReader.java |   5 +-
 .../hbase/regionserver/StoreFileWriter.java |   5 +-
 .../hadoop/hbase/regionserver/StoreScanner.java |   3 +-
 .../querymatcher/DeleteTracker.java |   7 +
 .../querymatcher/ExplicitColumnTracker.java |   5 +-
 .../querymatcher/NewVersionBehaviorTracker.java |  13 +-
 .../querymatcher/ScanDeleteTracker.java |  12 +-
 .../querymatcher/ScanQueryMatcher.java  |  22 +-
 .../querymatcher/ScanWildcardColumnTracker.java |   4 +-
 .../hbase/regionserver/wal/FSWALEntry.java  |   4 +-
 .../visibility/VisibilityController.java|   2 +-
 .../VisibilityNewVersionBehaivorTracker.java|   9 +-
 .../visibility/VisibilityScanDeleteTracker.java |   7 +-
 .../hadoop/hbase/util/BloomFilterFactory.java   |   4 +-
 .../hbase/util/CollectionBackedScanner.java |   5 +-
 .../hadoop/hbase/util/CompressionTest.java  |   4 +-
 .../hadoop/hbase/util/RowBloomContext.java  |   1 +
 .../hadoop/hbase/HBaseTestingUtility.java   |   2 +-
 

[19/50] [abbrv] hbase git commit: HBASE-16338 Remove Jackson1 deps

2017-10-23 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/5facaded/hbase-server/src/main/resources/hbase-webapps/regionserver/processRS.jsp
--
diff --git a/hbase-server/src/main/resources/hbase-webapps/regionserver/processRS.jsp b/hbase-server/src/main/resources/hbase-webapps/regionserver/processRS.jsp
index cc18d5b..f0df0c0 100644
--- a/hbase-server/src/main/resources/hbase-webapps/regionserver/processRS.jsp
+++ b/hbase-server/src/main/resources/hbase-webapps/regionserver/processRS.jsp
@@ -29,7 +29,7 @@
   import="java.lang.management.GarbageCollectorMXBean"
   import="org.apache.hadoop.hbase.util.JSONMetricUtil"
   import="org.apache.hadoop.hbase.procedure2.util.StringUtils"
-  import="org.codehaus.jackson.JsonNode"
+  import="com.fasterxml.jackson.databind.JsonNode"
 %>
 <%
 RuntimeMXBean runtimeBean = ManagementFactory.getRuntimeMXBean();

http://git-wip-us.apache.org/repos/asf/hbase/blob/5facaded/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java
index ee5a364..dab8673 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java
@@ -23,6 +23,8 @@ import java.io.IOException;
 import java.util.Map;
 import java.util.NavigableSet;
 
+import com.fasterxml.jackson.core.JsonGenerationException;
+import com.fasterxml.jackson.databind.JsonMappingException;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -32,8 +34,6 @@ import org.apache.hadoop.hbase.testclassification.IOTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.apache.hadoop.hbase.io.hfile.TestCacheConfig.DataCacheEntry;
 import org.apache.hadoop.hbase.io.hfile.TestCacheConfig.IndexCacheEntry;
-import org.codehaus.jackson.JsonGenerationException;
-import org.codehaus.jackson.map.JsonMappingException;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;

http://git-wip-us.apache.org/repos/asf/hbase/blob/5facaded/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java
index 30da26a..1135039 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java
@@ -22,6 +22,7 @@ import java.lang.management.GarbageCollectorMXBean;
 import java.lang.management.ManagementFactory;
 import java.util.Hashtable;
 import java.util.List;
+import java.util.Map;
 
 import javax.management.MalformedObjectNameException;
 import javax.management.ObjectName;
@@ -29,13 +30,14 @@ import javax.management.openmbean.CompositeData;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonNode;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.testclassification.MiscTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
-import org.codehaus.jackson.JsonNode;
-import org.codehaus.jackson.JsonProcessingException;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -51,17 +53,14 @@ public class TestJSONMetricUtil {
 String[] values = {"MemoryPool", "Par Eden Space"};
 String[] values2 = {"MemoryPool", "Par Eden Space", "Test"};
 String[] emptyValue = {};
-    Hashtable<String, String> properties = JSONMetricUtil.buldKeyValueTable(keys, values);
-    Hashtable<String, String> nullObject = JSONMetricUtil.buldKeyValueTable(keys, values2);
-    Hashtable<String, String> nullObject1 = JSONMetricUtil.buldKeyValueTable(keys, emptyValue);
-    Hashtable<String, String> nullObject2 = JSONMetricUtil.buldKeyValueTable(emptyKey, values2);
-    Hashtable<String, String> nullObject3 = JSONMetricUtil.buldKeyValueTable(emptyKey, emptyValue);
-    assertEquals(properties.get("type"), values[0]);
-    assertEquals(properties.get("name"), values[1]);
-    assertEquals(nullObject, null);
-    assertEquals(nullObject1, null);
-    assertEquals(nullObject2, null);
-    assertEquals(nullObject3, null);
+    Map<String, String> properties = JSONMetricUtil.buldKeyValueTable(keys, 

[46/50] [abbrv] hbase git commit: HBASE-18904 Missing break in NEXT_ROW case of FilterList#mergeReturnCodeForOrOperator()

2017-10-23 Thread zhangduo
HBASE-18904 Missing break in NEXT_ROW case of FilterList#mergeReturnCodeForOrOperator()

Signed-off-by: tedyu 

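The underlying hazard is plain switch fall-through; a toy illustration (not the FilterList code) of how a missing break bleeds one case into the next:

public class FallThroughDemo {
  enum Code { NEXT_ROW, SEEK_NEXT_USING_HINT }

  static String merge(Code c, boolean conditionMet) {
    switch (c) {
      case NEXT_ROW:
        if (conditionMet) {
          return "next-row";
        }
        break; // without this break, NEXT_ROW falls into the next case
      case SEEK_NEXT_USING_HINT:
        return "seek-hint";
    }
    return "no-match";
  }

  public static void main(String[] args) {
    // Prints "no-match"; with the break missing it would wrongly print "seek-hint".
    System.out.println(merge(Code.NEXT_ROW, false));
  }
}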

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9dd2ddae
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9dd2ddae
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9dd2ddae

Branch: refs/heads/HBASE-18410
Commit: 9dd2ddaea2e9cba82d449a212f4f289d08aa4a7a
Parents: 24a7ce8
Author: Biju Nair 
Authored: Fri Sep 29 16:55:54 2017 -0400
Committer: zhangduo 
Committed: Tue Oct 24 11:37:45 2017 +0800

--
 .../src/main/java/org/apache/hadoop/hbase/filter/FilterList.java   | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9dd2ddae/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
index 3147ab0..b518645 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
@@ -566,6 +566,7 @@ final public class FilterList extends FilterBase {
       if (isInReturnCodes(rc, ReturnCode.NEXT_ROW)) {
         return ReturnCode.NEXT_ROW;
       }
+      break;
     case SEEK_NEXT_USING_HINT:
       if (isInReturnCodes(rc, ReturnCode.INCLUDE, ReturnCode.INCLUDE_AND_NEXT_COL,
         ReturnCode.INCLUDE_AND_SEEK_NEXT_ROW)) {
@@ -577,6 +578,7 @@ final public class FilterList extends FilterBase {
       if (isInReturnCodes(rc, ReturnCode.SEEK_NEXT_USING_HINT)) {
         return ReturnCode.SEEK_NEXT_USING_HINT;
       }
+      break;
     }
     throw new IllegalStateException(
         "Received code is not valid. rc: " + rc + ", localRC: " + localRC);



[06/50] [abbrv] hbase git commit: HBASE-18960 A few bug fixes and minor improvements around batchMutate

2017-10-23 Thread zhangduo
HBASE-18960 A few bug fixes and minor improvements around batchMutate

* batch validation and preparation is done before we start iterating over operations for writes
* durability, familyCellMaps and observedExceptions are batch wide and are now stored in
  BatchOperation; as a result durability is consistent across all operations in a batch
* for all operations done by the preBatchMutate() CP hook, operation status is updated to success
* doWALAppend() is modified to handle replay and is used from doMiniBatchMutate()
* minor improvements

Signed-off-by: Michael Stack 

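The "highest durability of all operations" rule can be pictured as a max over the Durability enum; a toy sketch below (mirroring the enum's declared order, weakest to strongest, but not the patch's actual code):

public class BatchDurability {
  // Same ordering HBase declares: later constants give stronger guarantees.
  enum Durability { USE_DEFAULT, SKIP_WAL, ASYNC_WAL, SYNC_WAL, FSYNC_WAL }

  static Durability effective(Durability tableDefault, Durability[] perMutation) {
    Durability result = tableDefault;
    for (Durability d : perMutation) {
      Durability resolved = (d == Durability.USE_DEFAULT) ? tableDefault : d;
      if (resolved.ordinal() > result.ordinal()) {
        result = resolved; // keep the strongest guarantee requested so far
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // One FSYNC_WAL mutation upgrades the whole batch: prints FSYNC_WAL.
    System.out.println(effective(Durability.SYNC_WAL,
        new Durability[] { Durability.ASYNC_WAL, Durability.FSYNC_WAL }));
  }
}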

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e1941aa6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e1941aa6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e1941aa6

Branch: refs/heads/HBASE-18410
Commit: e1941aa6d14afd116a555fc93a3149f3e7c20af2
Parents: 70f4c5d
Author: Umesh Agashe 
Authored: Fri Oct 6 15:40:05 2017 -0700
Committer: Michael Stack 
Committed: Tue Oct 17 13:57:00 2017 -0700

--
 .../hadoop/hbase/regionserver/HRegion.java  | 420 ---
 .../regionserver/TestHRegionReplayEvents.java   |  21 +
 2 files changed, 207 insertions(+), 234 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e1941aa6/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 0bef925..1cbb689 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -661,7 +661,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
 
   private final MetricsRegion metricsRegion;
   private final MetricsRegionWrapperImpl metricsRegionWrapper;
-  private final Durability durability;
+  private final Durability regionDurability;
   private final boolean regionStatsEnabled;
   // Stores the replication scope of the various column families of the table
   // that has non-default scope
@@ -787,9 +787,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
  */
 this.rowProcessorTimeout = conf.getLong(
 "hbase.hregion.row.processor.timeout", DEFAULT_ROW_PROCESSOR_TIMEOUT);
-    this.durability = htd.getDurability() == Durability.USE_DEFAULT
-        ? DEFAULT_DURABILITY
-        : htd.getDurability();
+    this.regionDurability = htd.getDurability() == Durability.USE_DEFAULT ?
+        DEFAULT_DURABILITY : htd.getDurability();
 if (rsServices != null) {
   this.rsAccounting = this.rsServices.getRegionServerAccounting();
   // don't initialize coprocessors if not running within a regionserver
@@ -1945,13 +1944,6 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
   // upkeep.
   
//
   /**
-   * @return returns size of largest HStore.
-   */
-  public long getLargestHStoreSize() {
-    return stores.values().stream().mapToLong(HStore::getSize).max().orElse(0L);
-  }
-
-  /**
* Do preparation for pending compaction.
* @throws IOException
*/
@@ -3018,21 +3010,28 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
   }
 
   /**
-   * Struct-like class that tracks the progress of a batch operation,
-   * accumulating status codes and tracking the index at which processing
-   * is proceeding.
+   * Struct-like class that tracks the progress of a batch operation, accumulating status codes
+   * and tracking the index at which processing is proceeding. These batch operations may get
+   * split into mini-batches for processing.
*/
   private abstract static class BatchOperation {
 T[] operations;
 int nextIndexToProcess = 0;
 OperationStatus[] retCodeDetails;
 WALEdit[] walEditsFromCoprocessors;
+    // reference family cell maps directly so coprocessors can mutate them if desired
+    Map<byte[], List<Cell>>[] familyCellMaps;
+    ObservedExceptionsInBatch observedExceptions;
+    Durability durability;  // Durability of the batch (highest durability of all operations)
 
 public BatchOperation(T[] operations) {
   this.operations = operations;
   this.retCodeDetails = new OperationStatus[operations.length];
   this.walEditsFromCoprocessors = new WALEdit[operations.length];
   Arrays.fill(this.retCodeDetails, OperationStatus.NOT_RUN);
+  familyCellMaps = new Map[operations.length];
+  

[25/50] [abbrv] hbase git commit: HBASE-19061 update enforcer rules for NPE

2017-10-23 Thread zhangduo
HBASE-19061 update enforcer rules for NPE


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/89d3b0b0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/89d3b0b0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/89d3b0b0

Branch: refs/heads/HBASE-18410
Commit: 89d3b0b07f2ce7a84780e7088efaf9e3bce1ee5f
Parents: 64d164b
Author: Mike Drob 
Authored: Fri Oct 20 16:04:16 2017 -0500
Committer: Mike Drob 
Committed: Fri Oct 20 16:04:16 2017 -0500

--
 hbase-shaded/hbase-shaded-check-invariants/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/89d3b0b0/hbase-shaded/hbase-shaded-check-invariants/pom.xml
--
diff --git a/hbase-shaded/hbase-shaded-check-invariants/pom.xml b/hbase-shaded/hbase-shaded-check-invariants/pom.xml
index 69275a7..8592d71 100644
--- a/hbase-shaded/hbase-shaded-check-invariants/pom.xml
+++ b/hbase-shaded/hbase-shaded-check-invariants/pom.xml
@@ -76,7 +76,7 @@
   
             <groupId>org.codehaus.mojo</groupId>
             <artifactId>extra-enforcer-rules</artifactId>
-            <version>1.0-beta-3</version>
+            <version>1.0-beta-6</version>
   
 
 



[34/50] [abbrv] hbase git commit: HBASE-19046 RegionObserver#postCompactSelection Avoid passing shaded ImmutableList param.

2017-10-23 Thread zhangduo
HBASE-19046 RegionObserver#postCompactSelection Avoid passing shaded ImmutableList param.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4add40ca
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4add40ca
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4add40ca

Branch: refs/heads/HBASE-18410
Commit: 4add40ca24405ca029739aaaf0b80cf5fff556f6
Parents: 2493104
Author: anoopsamjohn 
Authored: Mon Oct 23 12:14:09 2017 +0530
Committer: anoopsamjohn 
Committed: Mon Oct 23 12:14:09 2017 +0530

--
 .../org/apache/hadoop/hbase/coprocessor/RegionObserver.java | 5 +
 .../apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java | 3 +--
 .../apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java   | 3 +--
 3 files changed, 3 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/4add40ca/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
index 076503f..94550df 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
@@ -22,7 +22,6 @@ package org.apache.hadoop.hbase.coprocessor;
 import java.io.IOException;
 import java.util.List;
 import java.util.Map;
-import java.util.NavigableSet;
 
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -44,7 +43,6 @@ import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
 import org.apache.hadoop.hbase.io.Reference;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.regionserver.InternalScanner;
-import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.regionserver.OperationStatus;
 import org.apache.hadoop.hbase.regionserver.Region;
@@ -57,7 +55,6 @@ import org.apache.hadoop.hbase.regionserver.StoreFileReader;
 import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
 import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
 import org.apache.hadoop.hbase.regionserver.querymatcher.DeleteTracker;
-import org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.wal.WALEdit;
 import org.apache.hadoop.hbase.wal.WALKey;
@@ -181,7 +178,7 @@ public interface RegionObserver {
    * @param request the requested compaction
    */
   default void postCompactSelection(ObserverContext c, Store store,
-      ImmutableList selected, CompactionLifeCycleTracker tracker,
+      List selected, CompactionLifeCycleTracker tracker,
       CompactionRequest request) {}
 
   /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/4add40ca/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
index 8000a2f..735d7ba 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
@@ -87,7 +87,6 @@ import org.apache.hadoop.hbase.wal.WALEdit;
 import org.apache.hadoop.hbase.wal.WALKey;
 import org.apache.yetus.audience.InterfaceAudience;
 
-import org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList;
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
 
 /**
@@ -606,7 +605,7 @@ public class RegionCoprocessorHost
* @param request the compaction request
* @param user the user
*/
-  public void postCompactSelection(final HStore store, final ImmutableList selected,
+  public void postCompactSelection(final HStore store, final List selected,
       final CompactionLifeCycleTracker tracker, final CompactionRequest request,
       final User user) throws IOException {
     execOperation(coprocEnvironments.isEmpty() ? null : new RegionObserverOperation(user) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/4add40ca/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
--
diff --git 

[48/50] [abbrv] hbase git commit: HBASE-18411 Dividing FiterList into two separate sub-classes: FilterListWithOR , FilterListWithAND

2017-10-23 Thread zhangduo
HBASE-18411 Dividing FiterList into two separate sub-classes: FilterListWithOR , FilterListWithAND

Signed-off-by: zhangduo 

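Client code is untouched by the split: the public FilterList constructor and Operator stay as-is, with evaluation delegated internally to the new AND/OR subclass. A minimal usage sketch (table and column names hypothetical):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterListExample {
  public static Scan buildScan() {
    // MUST_PASS_ONE (OR): a cell is kept if either child filter includes it.
    FilterList or = new FilterList(FilterList.Operator.MUST_PASS_ONE,
        new PrefixFilter(Bytes.toBytes("user_")),
        new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("q1"))));
    return new Scan().setFilter(or);
  }
}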

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b32bff02
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b32bff02
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b32bff02

Branch: refs/heads/HBASE-18410
Commit: b32bff028d00fedee5fb6e1ae8c587fd9e5f3b1e
Parents: 9dd2dda
Author: huzheng 
Authored: Tue Oct 10 20:01:48 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 11:39:31 2017 +0800

--
 .../apache/hadoop/hbase/filter/FilterList.java  | 661 ++-
 .../hadoop/hbase/filter/FilterListBase.java | 159 +
 .../hadoop/hbase/filter/FilterListWithAND.java  | 273 
 .../hadoop/hbase/filter/FilterListWithOR.java   | 383 +++
 .../hadoop/hbase/filter/TestFilterList.java |  89 +++
 5 files changed, 962 insertions(+), 603 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b32bff02/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
index b518645..97392d1 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparatorImpl;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.KeyValueUtil;
+import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.yetus.audience.InterfaceAudience;
 
@@ -37,86 +38,60 @@ import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.FilterProtos;
 
 /**
- * Implementation of {@link Filter} that represents an ordered List of Filters
- * which will be evaluated with a specified boolean operator {@link 
Operator#MUST_PASS_ALL}
- * (AND) or {@link Operator#MUST_PASS_ONE} (OR).
- * Since you can use Filter Lists as children of Filter Lists, you can create a
- * hierarchy of filters to be evaluated.
- *
- * 
- * {@link Operator#MUST_PASS_ALL} evaluates lazily: evaluation stops as soon 
as one filter does
- * not include the KeyValue.
- *
- * 
- * {@link Operator#MUST_PASS_ONE} evaluates non-lazily: all filters are always 
evaluated.
- *
- * 
+ * Implementation of {@link Filter} that represents an ordered List of Filters 
which will be
+ * evaluated with a specified boolean operator {@link Operator#MUST_PASS_ALL} 
(AND) or
+ * {@link Operator#MUST_PASS_ONE} (OR). Since you can use Filter 
Lists as children of
+ * Filter Lists, you can create a hierarchy of filters to be evaluated. 
+ * {@link Operator#MUST_PASS_ALL} evaluates lazily: evaluation stops as soon 
as one filter does not
+ * include the KeyValue. 
+ * {@link Operator#MUST_PASS_ONE} evaluates non-lazily: all filters are always 
evaluated. 
  * Defaults to {@link Operator#MUST_PASS_ALL}.
  */
 @InterfaceAudience.Public
 final public class FilterList extends FilterBase {
+
   /** set operator */
   @InterfaceAudience.Public
-  public static enum Operator {
+  public enum Operator {
 /** !AND */
 MUST_PASS_ALL,
 /** !OR */
 MUST_PASS_ONE
   }
 
-  private static final int MAX_LOG_FILTERS = 5;
-  private Operator operator = Operator.MUST_PASS_ALL;
-  private final List filters;
-  private Collection seekHintFilters = new ArrayList();
-
-  /**
-   * Save previous return code and previous cell for every filter in filter list. For MUST_PASS_ONE,
-   * we use the previous return code to decide whether we should pass current cell encountered to
-   * the filter. For MUST_PASS_ALL, the two list are meaningless.
-   */
-  private List prevFilterRCList = null;
-  private List prevCellList = null;
-
-  /** Reference Cell used by {@link #transformCell(Cell)} for validation purpose. */
-  private Cell referenceCell = null;
-
-  /**
-   * When filtering a given Cell in {@link #filterKeyValue(Cell)},
-   * this stores the transformed Cell to be returned by {@link #transformCell(Cell)}.
-   *
-   * Individual filters transformation are applied only when the filter includes the Cell.
-   * Transformations are composed in the order specified by {@link #filters}.
-   */
-  private Cell transformedCell = null;
+  private Operator operator;
+  private FilterListBase filterListBase;
 
   /**
* Constructor that takes a set of {@link Filter}s and an operator.
* 

[18/50] [abbrv] hbase git commit: HBASE-10367 RegionServer graceful stop / decommissioning

2017-10-23 Thread zhangduo
HBASE-10367 RegionServer graceful stop / decommissioning

Signed-off-by: Jerry He 

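Sketched usage of the new Admin calls (server name and start code hypothetical; the old drain* methods they replace are shown in the diff below):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DecommissionExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ServerName rs = ServerName.valueOf("rs1.example.com,16020,1508812800000");
      // Park the server: no new assignments; offload=true also unloads its
      // regions asynchronously.
      admin.decommissionRegionServers(Arrays.asList(rs), true);
      List<ServerName> parked = admin.listDecommissionedRegionServers();
      System.out.println("decommissioned: " + parked);
      // Re-enable assignments; pass encoded region names to load, or an
      // empty list for none.
      admin.recommissionRegionServer(rs, Collections.emptyList());
    }
  }
}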

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a43a00e8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a43a00e8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a43a00e8

Branch: refs/heads/HBASE-18410
Commit: a43a00e89c5c99968a205208ab9a5307c89730b3
Parents: af479c5
Author: Jerry He 
Authored: Thu Oct 19 21:44:38 2017 -0700
Committer: Jerry He 
Committed: Thu Oct 19 21:54:45 2017 -0700

--
 bin/draining_servers.rb |   2 +
 .../org/apache/hadoop/hbase/client/Admin.java   |  26 +++--
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  25 ++--
 .../hadoop/hbase/client/AsyncHBaseAdmin.java|  14 ++-
 .../hbase/client/ConnectionImplementation.java  |  30 ++---
 .../apache/hadoop/hbase/client/HBaseAdmin.java  |  23 ++--
 .../hadoop/hbase/client/RawAsyncHBaseAdmin.java |  66 +--
 .../client/ShortCircuitMasterConnection.java|  30 ++---
 .../hbase/shaded/protobuf/RequestConverter.java |  24 ++--
 .../src/main/protobuf/Master.proto  |  38 +++---
 .../hbase/coprocessor/MasterObserver.java   |  36 ++
 .../org/apache/hadoop/hbase/master/HMaster.java | 117 +--
 .../hbase/master/MasterCoprocessorHost.java |  56 +
 .../hadoop/hbase/master/MasterRpcServices.java  |  71 ++-
 .../hadoop/hbase/master/MasterServices.java |  19 +--
 .../hadoop/hbase/master/ServerManager.java  |  14 ++-
 .../hbase/security/access/AccessController.java |  17 +++
 .../hbase/zookeeper/DrainingServerTracker.java  |   3 +
 .../apache/hadoop/hbase/client/TestAdmin2.java  | 103 
 .../client/TestAsyncDecommissionAdminApi.java   |  95 +++
 .../hbase/client/TestAsyncDrainAdminApi.java| 101 
 .../hbase/master/MockNoopMasterServices.java|  15 ---
 .../hbase/zookeeper/TestZooKeeperACL.java   |  18 +--
 23 files changed, 556 insertions(+), 387 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a43a00e8/bin/draining_servers.rb
--
diff --git a/bin/draining_servers.rb b/bin/draining_servers.rb
index ea74c30..588bac4 100644
--- a/bin/draining_servers.rb
+++ b/bin/draining_servers.rb
@@ -17,6 +17,8 @@
 #
 
 # Add or remove servers from draining mode via zookeeper
+# Deprecated in 2.0, and will be removed in 3.0. Use Admin decommission
+# API instead.
 
 require 'optparse'
 include Java

http://git-wip-us.apache.org/repos/asf/hbase/blob/a43a00e8/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
index 64d5e53..540b7c8 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
@@ -2425,22 +2425,30 @@ public interface Admin extends Abortable, Closeable {
   }
 
   /**
-   * Mark a region server as draining to prevent additional regions from getting assigned to it.
-   * @param servers List of region servers to drain.
+   * Mark region server(s) as decommissioned to prevent additional regions from getting
+   * assigned to them. Optionally unload the regions on the servers. If there are multiple servers
+   * to be decommissioned, decommissioning them at the same time can prevent wasteful region
+   * movements. Region unloading is asynchronous.
+   * @param servers The list of servers to decommission.
+   * @param offload True to offload the regions from the decommissioned servers
    */
-  void drainRegionServers(List<ServerName> servers) throws IOException;
+  void decommissionRegionServers(List<ServerName> servers, boolean offload) throws IOException;
 
   /**
-   * List region servers marked as draining to not get additional regions assigned to them.
-   * @return List of draining region servers.
+   * List region servers marked as decommissioned, which can not be assigned regions.
+   * @return List of decommissioned region servers.
    */
-  List<ServerName> listDrainingRegionServers() throws IOException;
+  List<ServerName> listDecommissionedRegionServers() throws IOException;
 
   /**
-   * Remove drain from a region server to allow additional regions assignments.
-   * @param servers List of region servers to remove drain from.
+   * Remove decommission marker from a region server to allow regions assignments.
+   * Load regions onto the server if a list of regions is given. Region loading is
+   * asynchronous.
+   * @param server The server to 

[39/50] [abbrv] hbase git commit: HBASE-19072 Missing beak in catch block of InterruptedException in HRegion#waitForFlushes()

2017-10-23 Thread zhangduo
HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b7db62c7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b7db62c7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b7db62c7

Branch: refs/heads/HBASE-18410
Commit: b7db62c702ef27b79365cfa62a8afee9042bcc6b
Parents: a1bc20a
Author: tedyu 
Authored: Mon Oct 23 19:34:11 2017 -0700
Committer: tedyu 
Committed: Mon Oct 23 19:34:11 2017 -0700

--
 .../src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b7db62c7/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 9022e1f..99f5c35 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -1791,6 +1791,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // essentially ignore and propagate the interrupt back up
 LOG.warn("Interrupted while waiting");
 interrupted = true;
+break;
   }
 }
   } finally {
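
In isolation, the pattern this one-liner completes, as a minimal self-contained sketch (the flag and monitor below are simplified stand-ins for the HRegion write state). Without the break, a thread whose interrupt has already fired loops straight back into wait() and can hang until the flush flag clears:

final class FlushWaiter {
  private final Object lock = new Object();
  private volatile boolean flushing = true;

  void waitForFlushes() {
    boolean interrupted = false;
    synchronized (lock) {
      try {
        while (flushing) {
          try {
            lock.wait(); // wait for in-flight flushes to finish
          } catch (InterruptedException e) {
            // remember the interrupt so it can be propagated back up
            interrupted = true;
            break; // the missing break: stop waiting once interrupted
          }
        }
      } finally {
        if (interrupted) {
          Thread.currentThread().interrupt(); // restore interrupt status for callers
        }
      }
    }
  }
}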



[45/50] [abbrv] hbase git commit: HBASE-15410 Utilize the max seek value when all Filters in MUST_PASS_ALL FilterList return SEEK_NEXT_USING_HINT

2017-10-23 Thread zhangduo
HBASE-15410 Utilize the max seek value when all Filters in MUST_PASS_ALL FilterList return SEEK_NEXT_USING_HINT


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5c9523b7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5c9523b7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5c9523b7

Branch: refs/heads/HBASE-18410
Commit: 5c9523b757e5b0f6b8d5ef1829f9b199fc2f73ef
Parents: 3f5f2a5
Author: tedyu 
Authored: Thu Sep 7 04:07:09 2017 -0700
Committer: zhangduo 
Committed: Tue Oct 24 11:35:24 2017 +0800

--
 .../main/java/org/apache/hadoop/hbase/filter/FilterList.java| 5 +++--
 .../java/org/apache/hadoop/hbase/filter/TestFilterList.java | 4 ++--
 2 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5c9523b7/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
index 83db1f2..3ff978d 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
@@ -28,12 +28,13 @@ import java.util.List;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparatorImpl;
 import org.apache.hadoop.hbase.CellUtil;
-import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.hadoop.hbase.KeyValueUtil;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.yetus.audience.InterfaceAudience;
+
+import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException;
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.FilterProtos;
-import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException;
 
 /**
  * Implementation of {@link Filter} that represents an ordered List of Filters

http://git-wip-us.apache.org/repos/asf/hbase/blob/5c9523b7/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
index 46d44de..e414729 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
@@ -502,8 +502,8 @@ public class TestFilterList {
 // Should take the min if given two hints
 FilterList filterList = new FilterList(Operator.MUST_PASS_ONE,
 Arrays.asList(new Filter [] { filterMinHint, filterMaxHint } ));
-assertEquals(0,
-  CellComparatorImpl.COMPARATOR.compare(filterList.getNextCellHint(null), 
minKeyValue));
+assertEquals(0, 
CellComparatorImpl.COMPARATOR.compare(filterList.getNextCellHint(null),
+  minKeyValue));
 
 // Should have no hint if any filter has no hint
 filterList = new FilterList(Operator.MUST_PASS_ONE,
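
The rule the patch implements, as an illustrative helper rather than the actual FilterList internals: under MUST_PASS_ALL every filter has to accept a cell, so when all filters return SEEK_NEXT_USING_HINT the scan can safely jump to the maximum of their hints. (Under MUST_PASS_ONE the converse holds and the minimum is taken, as the test above checks.)

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellComparator;
import org.apache.hadoop.hbase.filter.Filter;

final class MaxSeekHint {
  // Sketch only; the real logic lives inside FilterList.
  static Cell maxSeekHint(List<Filter> filters, Cell current, CellComparator comparator)
      throws IOException {
    Cell maxHint = null;
    for (Filter f : filters) {
      Cell hint = f.getNextCellHint(current);
      if (hint == null) {
        continue; // this filter offers no hint
      }
      if (maxHint == null || comparator.compareRows(hint, maxHint) > 0) {
        maxHint = hint; // the furthest hint is safe under AND semantics
      }
    }
    return maxHint;
  }
}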



[49/50] [abbrv] hbase git commit: HBASE-18879 HBase FilterList cause KeyOnlyFilter not work

2017-10-23 Thread zhangduo
HBASE-18879 HBase FilterList cause KeyOnlyFilter not work


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a17094f8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a17094f8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a17094f8

Branch: refs/heads/HBASE-18410
Commit: a17094f8e230e8350d97432243c9e69620e62619
Parents: b32bff0
Author: huzheng 
Authored: Wed Oct 11 21:17:03 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 11:39:31 2017 +0800

--
 .../apache/hadoop/hbase/filter/FilterList.java  |  6 +++
 .../hadoop/hbase/filter/FilterListBase.java |  3 ++
 .../hadoop/hbase/filter/FilterListWithAND.java  | 22 +
 .../hadoop/hbase/filter/FilterListWithOR.java   | 22 +
 .../hadoop/hbase/filter/TestFilterList.java | 48 
 5 files changed, 85 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a17094f8/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
index 97392d1..e87f1b3 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
@@ -72,6 +72,8 @@ final public class FilterList extends FilterBase {
   filterListBase = new FilterListWithAND(filters);
 } else if (operator == Operator.MUST_PASS_ONE) {
   filterListBase = new FilterListWithOR(filters);
+} else {
+  throw new IllegalArgumentException("Invalid operator: " + operator);
 }
 this.operator = operator;
   }
@@ -168,6 +170,10 @@ final public class FilterList extends FilterBase {
 return filterListBase.transformCell(c);
   }
 
+  ReturnCode internalFilterKeyValue(Cell c, Cell currentTransformedCell) 
throws IOException {
+return this.filterListBase.internalFilterKeyValue(c, 
currentTransformedCell);
+  }
+
   @Override
   public ReturnCode filterKeyValue(Cell c) throws IOException {
 return filterListBase.filterKeyValue(c);

http://git-wip-us.apache.org/repos/asf/hbase/blob/a17094f8/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListBase.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListBase.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListBase.java
index 7fa0245..60b0dc1 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListBase.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListBase.java
@@ -107,6 +107,9 @@ public abstract class FilterListBase extends FilterBase {
 return cell;
   }
 
+  abstract ReturnCode internalFilterKeyValue(Cell c, Cell 
currentTransformedCell)
+  throws IOException;
+
   /**
* Filters that never filter by modifying the returned List of Cells can 
inherit this
* implementation that does nothing. {@inheritDoc}

http://git-wip-us.apache.org/repos/asf/hbase/blob/a17094f8/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithAND.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithAND.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithAND.java
index fa979c0..4909dfd 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithAND.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithAND.java
@@ -147,16 +147,26 @@ public class FilterListWithAND extends FilterListBase {
 "Received code is not valid. rc: " + rc + ", localRC: " + localRC);
   }
 
-  private ReturnCode filterKeyValueWithMustPassAll(Cell c) throws IOException {
+  @Override
+  ReturnCode internalFilterKeyValue(Cell c, Cell currentTransformedCell) 
throws IOException {
+if (isEmpty()) {
+  return ReturnCode.INCLUDE;
+}
 ReturnCode rc = ReturnCode.INCLUDE;
-Cell transformed = c;
+Cell transformed = currentTransformedCell;
+this.referenceCell = c;
 this.seekHintFilter.clear();
 for (int i = 0, n = filters.size(); i < n; i++) {
   Filter filter = filters.get(i);
   if (filter.filterAllRemaining()) {
 return ReturnCode.NEXT_ROW;
   }
-  ReturnCode localRC = filter.filterKeyValue(c);
+  ReturnCode localRC;
+  if (filter instanceof FilterList) {
+localRC = ((FilterList) filter).internalFilterKeyValue(c, transformed);
+  } else {
+ 
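
The hunk is cut off above; as a hedged reconstruction of the idea (not the verbatim patch), the dispatch inside the loop looks roughly like this, where c, transformed, and filter are the loop variables visible in the hunk. A nested FilterList is handed the cell as transformed so far, so a transform applied by an earlier filter, for example KeyOnlyFilter stripping values, is carried through rather than recomputed from the raw cell:

      ReturnCode localRC;
      if (filter instanceof FilterList) {
        // a nested list continues filtering against the already-transformed cell
        localRC = ((FilterList) filter).internalFilterKeyValue(c, transformed);
      } else {
        // a leaf filter sees the raw cell, as before
        localRC = filter.filterKeyValue(c);
      }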

[31/50] [abbrv] hbase git commit: HBASE-19007 Align Services Interfaces in Master and RegionServer

2017-10-23 Thread zhangduo
HBASE-19007 Align Services Interfaces in Master and RegionServer

Purges Server, MasterServices, and RegionServerServices from
CoprocessorEnvironments. Replaces the removed functionality with
a set of carefully curated methods on the *CoprocessorEnvironment
implementations (varies by CoprocessorEnvironment, in that the
MasterCoprocessorEnvironment has Master-type facility exposed,
and so on).

A few core Coprocessors that should long ago have been converted
to be integral violate their context; e.g. a RegionCoprocessor
wants free access to a hosting RegionServer (which may or may not
be present). Rather than let these violators corrupt the CP API,
we've made up a hacky system that allows core Coprocessors access
to internals. A new CoreCoprocessor annotation has been introduced.
When loading Coprocessors, if the instance is annotated
CoreCoprocessor, we pass it an Environment that has been padded
w/ extra stuff. On invocation, CoreCoprocessors know how to route
their way to these extras in their environment.

See the *CoprocessorHost classes for how they check for
CoreCoprocessor and pass a fatter *CoprocessorEnvironment, one that
allows getting either RegionServerServices or MasterServices out of
the environment via marker interfaces.
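
To make the mechanism concrete, a hedged sketch of the consumer side; the class below is invented, but the CoreCoprocessor annotation and HasMasterServices marker are the ones this patch adds:

import java.io.IOException;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
import org.apache.hadoop.hbase.coprocessor.CoreCoprocessor;
import org.apache.hadoop.hbase.coprocessor.HasMasterServices;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
import org.apache.hadoop.hbase.master.MasterServices;

@CoreCoprocessor
public class MyCoreMasterCoprocessor implements MasterCoprocessor {
  private MasterServices master;

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    // Only environments handed to @CoreCoprocessor-annotated instances
    // implement the marker; a plain coprocessor never sees this view.
    if (!(env instanceof HasMasterServices)) {
      throw new CoprocessorException("not loaded as a core coprocessor");
    }
    this.master = ((HasMasterServices) env).getMasterServices();
  }
}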

Removed org.apache.hadoop.hbase.regionserver.CoprocessorRegionServerServices

M 
hbase-endpoint/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
 This Endpoint has been deprecated because its functionality has been
 moved to core. Marking it a CoreCoprocessor in the meantime to
 minimize change.

M 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
 This should be integral to hbase. Meantime, marking it CoreCoprocessor.

M hbase-server/src/main/java/org/apache/hadoop/hbase/Server.java
 Added doc on where it is used and added back a few methods we'd
removed.

A 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoreCoprocessor.java
 New annotation for core hbase coprocessors. They get richer environment
 on coprocessor loading.

A 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/HasMasterServices.java
A 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/HasRegionServerServices.java
 Marker Interface to access extras if present.

M 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterCoprocessorEnvironment.java
  Purge MasterServices access. Allow CPs a Connection.

M 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.java
  Purge RegionServerServices access. Allow CPs a Connection.

M 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionServerCoprocessorEnvironment.java
  Purge MasterServices access. Allow CPs a Connection.

M 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterSpaceQuotaObserver.java
M hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaCache.java
  We no longer have access to MasterServices. Don't need it actually.
  Use short-circuiting Admin instead.

D 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CoprocessorRegionServerServices.java
  Removed. Not needed now we do CP Env differently.

M 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
  No need to go via RSS to getOnlineTables; just use HRS.

And so on. Adds tests to ensure we can only get at extra info
if the CP has been properly marked.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/38879fb3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/38879fb3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/38879fb3

Branch: refs/heads/HBASE-18410
Commit: 38879fb3ffa88ca95b15c61656a92e72c0ed996f
Parents: 592d541
Author: Guanghao Zhang 
Authored: Mon Oct 16 17:12:37 2017 +0800
Committer: Michael Stack 
Committed: Sat Oct 21 11:06:30 2017 -0700

--
 .../apache/hadoop/hbase/client/Connection.java  |   2 +-
 .../hadoop/hbase/client/ConnectionUtils.java|  64 +++
 .../apache/hadoop/hbase/client/HBaseAdmin.java  |   2 +-
 .../security/access/SecureBulkLoadEndpoint.java |   6 +-
 .../hbase/rsgroup/RSGroupAdminEndpoint.java |  10 +-
 .../java/org/apache/hadoop/hbase/Server.java|  21 +++-
 .../hbase/coprocessor/BaseEnvironment.java  |   3 +-
 .../hbase/coprocessor/CoreCoprocessor.java  |  45 
 .../hbase/coprocessor/HasMasterServices.java|  37 ++
 .../coprocessor/HasRegionServerServices.java|  37 ++
 .../MasterCoprocessorEnvironment.java   |  24 +++-
 .../RegionCoprocessorEnvironment.java   |  26 -
 .../RegionServerCoprocessorEnvironment.java |  24 +++-
 .../org/apache/hadoop/hbase/ipc/RpcServer.java  |   9 +-
 .../hbase/master/MasterCoprocessorHost.java |  48 ++--
 

[16/50] [abbrv] hbase git commit: HBASE-19026 TestLockProcedure#testRemoteNamespaceLockRecovery fails

2017-10-23 Thread zhangduo
HBASE-19026 TestLockProcedure#testRemoteNamespaceLockRecovery fails


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/909e5f2f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/909e5f2f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/909e5f2f

Branch: refs/heads/HBASE-18410
Commit: 909e5f2f14186709ceb4697f76103c83125c8b49
Parents: 4a7b430
Author: tedyu 
Authored: Thu Oct 19 11:07:57 2017 -0700
Committer: tedyu 
Committed: Thu Oct 19 11:07:57 2017 -0700

--
 .../org/apache/hadoop/hbase/master/locking/TestLockProcedure.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/909e5f2f/hbase-server/src/test/java/org/apache/hadoop/hbase/master/locking/TestLockProcedure.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/locking/TestLockProcedure.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/locking/TestLockProcedure.java
index ce02395..a817bd5 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/locking/TestLockProcedure.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/locking/TestLockProcedure.java
@@ -397,7 +397,7 @@ public class TestLockProcedure {
 sendHeartbeatAndCheckLocked(procId, true);
 Thread.sleep(HEARTBEAT_TIMEOUT/2);
 sendHeartbeatAndCheckLocked(procId, true);
-Thread.sleep(2 * HEARTBEAT_TIMEOUT);
+Thread.sleep(2 * HEARTBEAT_TIMEOUT + HEARTBEAT_TIMEOUT/2);
 sendHeartbeatAndCheckLocked(procId, false);
 ProcedureTestingUtility.waitProcedure(procExec, procId);
 ProcedureTestingUtility.assertProcNotFailed(procExec, procId);



[15/50] [abbrv] hbase git commit: Revert "HBASE-19042 Oracle Java 8u144 downloader broken in precommit check"

2017-10-23 Thread zhangduo
Revert "HBASE-19042 Oracle Java 8u144 downloader broken in precommit check"

This reverts commit 9e688117bad3cb4826c7201bb359672676389620.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4a7b4303
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4a7b4303
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4a7b4303

Branch: refs/heads/HBASE-18410
Commit: 4a7b4303979ffe9896811f633141681669e1c20d
Parents: 9e68811
Author: zhangduo 
Authored: Thu Oct 19 16:03:28 2017 +0800
Committer: zhangduo 
Committed: Thu Oct 19 16:03:28 2017 +0800

--
 dev-support/docker/Dockerfile | 29 ++---
 1 file changed, 18 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/4a7b4303/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index c23c70d..62c6030 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -65,18 +65,18 @@ RUN apt-get -q update && apt-get -q install 
--no-install-recommends -y \
 zlib1g-dev
 
 ###
-# OpenJDK 8
+# Oracle Java
 ###
 
 RUN echo "dot_style = mega" > "/root/.wgetrc"
 RUN echo "quiet = on" >> "/root/.wgetrc"
 
 RUN apt-get -q update && apt-get -q install --no-install-recommends -y 
software-properties-common
-RUN add-apt-repository -y ppa:openjdk-r/ppa
-RUN apt-get -q update
-RUN apt-get -q install --no-install-recommends -y openjdk-8-jdk
-RUN update-alternatives --config java
-RUN update-alternatives --config javac
+RUN add-apt-repository -y ppa:webupd8team/java
+
+# Auto-accept the Oracle JDK license
+RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select 
true | sudo /usr/bin/debconf-set-selections
+RUN apt-get -q update && apt-get -q install --no-install-recommends -y 
oracle-java8-installer
 
 
 # Apps that require Java
@@ -131,16 +131,23 @@ RUN pip install python-dateutil
 # Install Ruby 2, based on Yetus 0.4.0 dockerfile
 ###
 RUN echo 'gem: --no-rdoc --no-ri' >> /root/.gemrc
-RUN apt-add-repository ppa:brightbox/ruby-ng
-RUN apt-get -q update
+RUN apt-get -q install -y ruby2.0
+#
+# on trusty, the above installs ruby2.0 and ruby (1.9.3) exes
+# but update-alternatives is broken, so we need to do some work
+# to make 2.0 actually the default without the system flipping out
+#
+# See https://bugs.launchpad.net/ubuntu/+source/ruby2.0/+bug/1310292
+#
+RUN dpkg-divert --add --rename --divert /usr/bin/ruby.divert /usr/bin/ruby
RUN dpkg-divert --add --rename --divert /usr/bin/gem.divert /usr/bin/gem
+RUN update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby2.0 1
+RUN update-alternatives --install /usr/bin/gem gem /usr/bin/gem2.0 1
 
-RUN apt-get -q install --no-install-recommends -y ruby2.2 ruby-switch
-RUN ruby-switch --set ruby2.2
 
 
 # Install rubocop
 ###
-RUN gem install rake
 RUN gem install rubocop
 
 



hbase git commit: HBASE-19069 Do not wrap the original CompactionLifeCycleTracker when calling CP hooks

2017-10-23 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/branch-2 3e0b90b94 -> a6f89f029


HBASE-19069 Do not wrap the original CompactionLifeCycleTracker when calling CP hooks


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a6f89f02
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a6f89f02
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a6f89f02

Branch: refs/heads/branch-2
Commit: a6f89f029a78a37eb84e4908b9975118e3050603
Parents: 3e0b90b
Author: zhangduo 
Authored: Mon Oct 23 21:10:44 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 10:56:19 2017 +0800

--
 .../hadoop/hbase/regionserver/CompactSplit.java | 135 ++-
 .../TestCompactionLifeCycleTracker.java |   9 +-
 2 files changed, 80 insertions(+), 64 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a6f89f02/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
index b82b346..0749f85 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
@@ -237,80 +237,73 @@ public class CompactSplit implements CompactionRequester, 
PropagatingConfigurati
 }
   }
 
-  // A compaction life cycle tracker to trace the execution of all the 
compactions triggered by one
-  // request and delegate to the source CompactionLifeCycleTracker. It will 
call completed method if
-  // all the compactions are finished.
-  private static final class AggregatingCompactionLifeCycleTracker
-  implements CompactionLifeCycleTracker {
+  private interface CompactionCompleteTracker {
+
+default void completed(Store store) {
+}
+  }
+
+  private static final CompactionCompleteTracker DUMMY_COMPLETE_TRACKER =
+  new CompactionCompleteTracker() {
+  };
+
+  private static final class AggregatingCompleteTracker implements 
CompactionCompleteTracker {
 
 private final CompactionLifeCycleTracker tracker;
 
 private final AtomicInteger remaining;
 
-public AggregatingCompactionLifeCycleTracker(CompactionLifeCycleTracker 
tracker,
-int numberOfStores) {
+public AggregatingCompleteTracker(CompactionLifeCycleTracker tracker, int 
numberOfStores) {
   this.tracker = tracker;
   this.remaining = new AtomicInteger(numberOfStores);
 }
 
-private void tryCompleted() {
+@Override
+public void completed(Store store) {
   if (remaining.decrementAndGet() == 0) {
 tracker.completed();
   }
 }
-
-@Override
-public void notExecuted(Store store, String reason) {
-  tracker.notExecuted(store, reason);
-  tryCompleted();
-}
-
-@Override
-public void beforeExecution(Store store) {
-  tracker.beforeExecution(store);
-}
-
-@Override
-public void afterExecution(Store store) {
-  tracker.afterExecution(store);
-  tryCompleted();
-}
   }
 
-  private CompactionLifeCycleTracker wrap(CompactionLifeCycleTracker tracker,
+  private CompactionCompleteTracker 
getCompleteTracker(CompactionLifeCycleTracker tracker,
   IntSupplier numberOfStores) {
 if (tracker == CompactionLifeCycleTracker.DUMMY) {
   // a simple optimization to avoid creating unnecessary objects as 
usually we do not care about
   // the life cycle of a compaction.
-  return tracker;
+  return DUMMY_COMPLETE_TRACKER;
 } else {
-  return new AggregatingCompactionLifeCycleTracker(tracker, 
numberOfStores.getAsInt());
+  return new AggregatingCompleteTracker(tracker, 
numberOfStores.getAsInt());
 }
   }
 
   @Override
   public synchronized void requestCompaction(HRegion region, String why, int 
priority,
   CompactionLifeCycleTracker tracker, User user) throws IOException {
-requestCompactionInternal(region, why, priority, true,
-  wrap(tracker, () -> region.getTableDescriptor().getColumnFamilyCount()), 
user);
+requestCompactionInternal(region, why, priority, true, tracker,
+  getCompleteTracker(tracker, () -> 
region.getTableDescriptor().getColumnFamilyCount()), user);
   }
 
   @Override
   public synchronized void requestCompaction(HRegion region, HStore store, 
String why, int priority,
   CompactionLifeCycleTracker tracker, User user) throws IOException {
-requestCompactionInternal(region, store, why, priority, true, 
wrap(tracker, () -> 1), user);
+requestCompactionInternal(region, store, why, priority, true, tracker,
+  
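
The hunk above is truncated; its essence is a small completion counter, sketched here with a Runnable standing in for the user's CompactionLifeCycleTracker callback (the real class delegates to tracker.completed()):

import java.util.concurrent.atomic.AtomicInteger;

final class CompletionCounter {
  private final Runnable onAllComplete; // stand-in for tracker.completed()
  private final AtomicInteger remaining;

  CompletionCounter(Runnable onAllComplete, int numberOfStores) {
    this.onAllComplete = onAllComplete;
    this.remaining = new AtomicInteger(numberOfStores);
  }

  // called once per store when its compaction finishes (or is not executed)
  void completed() {
    if (remaining.decrementAndGet() == 0) {
      onAllComplete.run(); // last store in: fire the user callback exactly once
    }
  }
}

This is why the user-supplied tracker can now be passed through unwrapped: the counting moves into a separate internal CompactionCompleteTracker, so CP hooks see the original tracker instance.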

hbase git commit: HBASE-19069 Do not wrap the original CompactionLifeCycleTracker when calling CP hooks

2017-10-23 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/master 81133f89f -> 37b29e909


HBASE-19069 Do not wrap the original CompactionLifeCycleTracker when calling CP hooks


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/37b29e90
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/37b29e90
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/37b29e90

Branch: refs/heads/master
Commit: 37b29e909defecdc580112ce6cd306710d13e9e2
Parents: 81133f8
Author: zhangduo 
Authored: Mon Oct 23 21:10:44 2017 +0800
Committer: zhangduo 
Committed: Tue Oct 24 10:56:14 2017 +0800

--
 .../hadoop/hbase/regionserver/CompactSplit.java | 135 ++-
 .../TestCompactionLifeCycleTracker.java |   9 +-
 2 files changed, 80 insertions(+), 64 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/37b29e90/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
index b82b346..0749f85 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
@@ -237,80 +237,73 @@ public class CompactSplit implements CompactionRequester, 
PropagatingConfigurati
 }
   }
 
-  // A compaction life cycle tracker to trace the execution of all the 
compactions triggered by one
-  // request and delegate to the source CompactionLifeCycleTracker. It will 
call completed method if
-  // all the compactions are finished.
-  private static final class AggregatingCompactionLifeCycleTracker
-  implements CompactionLifeCycleTracker {
+  private interface CompactionCompleteTracker {
+
+default void completed(Store store) {
+}
+  }
+
+  private static final CompactionCompleteTracker DUMMY_COMPLETE_TRACKER =
+  new CompactionCompleteTracker() {
+  };
+
+  private static final class AggregatingCompleteTracker implements 
CompactionCompleteTracker {
 
 private final CompactionLifeCycleTracker tracker;
 
 private final AtomicInteger remaining;
 
-public AggregatingCompactionLifeCycleTracker(CompactionLifeCycleTracker 
tracker,
-int numberOfStores) {
+public AggregatingCompleteTracker(CompactionLifeCycleTracker tracker, int 
numberOfStores) {
   this.tracker = tracker;
   this.remaining = new AtomicInteger(numberOfStores);
 }
 
-private void tryCompleted() {
+@Override
+public void completed(Store store) {
   if (remaining.decrementAndGet() == 0) {
 tracker.completed();
   }
 }
-
-@Override
-public void notExecuted(Store store, String reason) {
-  tracker.notExecuted(store, reason);
-  tryCompleted();
-}
-
-@Override
-public void beforeExecution(Store store) {
-  tracker.beforeExecution(store);
-}
-
-@Override
-public void afterExecution(Store store) {
-  tracker.afterExecution(store);
-  tryCompleted();
-}
   }
 
-  private CompactionLifeCycleTracker wrap(CompactionLifeCycleTracker tracker,
+  private CompactionCompleteTracker 
getCompleteTracker(CompactionLifeCycleTracker tracker,
   IntSupplier numberOfStores) {
 if (tracker == CompactionLifeCycleTracker.DUMMY) {
   // a simple optimization to avoid creating unnecessary objects as 
usually we do not care about
   // the life cycle of a compaction.
-  return tracker;
+  return DUMMY_COMPLETE_TRACKER;
 } else {
-  return new AggregatingCompactionLifeCycleTracker(tracker, 
numberOfStores.getAsInt());
+  return new AggregatingCompleteTracker(tracker, 
numberOfStores.getAsInt());
 }
   }
 
   @Override
   public synchronized void requestCompaction(HRegion region, String why, int 
priority,
   CompactionLifeCycleTracker tracker, User user) throws IOException {
-requestCompactionInternal(region, why, priority, true,
-  wrap(tracker, () -> region.getTableDescriptor().getColumnFamilyCount()), 
user);
+requestCompactionInternal(region, why, priority, true, tracker,
+  getCompleteTracker(tracker, () -> 
region.getTableDescriptor().getColumnFamilyCount()), user);
   }
 
   @Override
   public synchronized void requestCompaction(HRegion region, HStore store, 
String why, int priority,
   CompactionLifeCycleTracker tracker, User user) throws IOException {
-requestCompactionInternal(region, store, why, priority, true, 
wrap(tracker, () -> 1), user);
+requestCompactionInternal(region, store, why, priority, true, tracker,
+  

[2/2] hbase git commit: HBASE-18873 Move protobufs to private implementation on GlobalQuotaSettings

2017-10-23 Thread elserj
HBASE-18873 Move protobufs to private implementation on GlobalQuotaSettings

A hack to "hide" the protobufs, but it's not going to be a trivial
change to remove use of protobufs entirely as they're serialized
into the hbase:quota table.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3e0b90b9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3e0b90b9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3e0b90b9

Branch: refs/heads/branch-2
Commit: 3e0b90b949e311878f45251a28352156fec05743
Parents: 1e98ce2
Author: Josh Elser 
Authored: Wed Oct 11 18:37:42 2017 -0400
Committer: Josh Elser 
Committed: Mon Oct 23 22:44:44 2017 -0400

--
 .../hbase/quotas/QuotaSettingsFactory.java  |   2 +-
 .../hbase/quotas/GlobalQuotaSettings.java   | 290 +---
 .../hbase/quotas/GlobalQuotaSettingsImpl.java   | 332 +++
 .../hadoop/hbase/quotas/MasterQuotaManager.java |  72 ++--
 .../hbase/quotas/TestGlobalQuotaSettings.java   | 122 ---
 .../quotas/TestGlobalQuotaSettingsImpl.java | 122 +++
 6 files changed, 501 insertions(+), 439 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3e0b90b9/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
index 185365b..2a20c51 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
@@ -116,7 +116,7 @@ public class QuotaSettingsFactory {
 return settings;
   }
 
-  private static List<QuotaSettings> fromThrottle(final String userName, final TableName tableName,
+  protected static List<QuotaSettings> fromThrottle(final String userName, final TableName tableName,
       final String namespace, final QuotaProtos.Throttle throttle) {
     List<QuotaSettings> settings = new ArrayList<>();
     if (throttle.hasReqNum()) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/3e0b90b9/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
index 079edf0..107523b 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
@@ -16,23 +16,12 @@
  */
 package org.apache.hadoop.hbase.quotas;
 
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Objects;
+import java.util.List;
 
-import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HBaseInterfaceAudience;
 import org.apache.hadoop.hbase.TableName;
-import 
org.apache.hadoop.hbase.quotas.QuotaSettingsFactory.QuotaGlobalsSettingsBypass;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest.Builder;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.TimedQuota;
-import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 
@@ -43,28 +32,19 @@ import org.apache.yetus.audience.InterfaceStability;
  */
 @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC})
 @InterfaceStability.Evolving
-public class GlobalQuotaSettings extends QuotaSettings {
-  private final QuotaProtos.Throttle throttleProto;
-  private final Boolean bypassGlobals;
-  private final QuotaProtos.SpaceQuota spaceProto;
+public abstract class GlobalQuotaSettings extends QuotaSettings {
 
-  protected GlobalQuotaSettings(
-  String username, TableName tableName, String namespace, 
QuotaProtos.Quotas quotas) {
-this(username, tableName, namespace,
-(quotas != null && quotas.hasThrottle() ? quotas.getThrottle() : null),
-(quotas != null && quotas.hasBypassGlobals() ? 
quotas.getBypassGlobals() : null),
-(quotas != null && quotas.hasSpace() ? quotas.getSpace() : null));
-  }
-
-  protected 
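
The shape of the change in a hypothetical miniature; every name below is invented for illustration and is not the actual GlobalQuotaSettings API. The public abstract class exposes only plain-Java accessors, while a server-side Impl subclass owns the protobuf state that still has to round-trip through the hbase:quota table:

public abstract class QuotaView {
  // plain accessor; no protobuf types appear on the public surface
  public abstract long getReadRequestLimit();
}

final class QuotaViewImpl extends QuotaView {
  // decoded once from the stored protobufs and kept protobuf-free here;
  // the real Impl retains the protobufs for serialization instead
  private final long readReqNum;

  QuotaViewImpl(long readReqNum) {
    this.readReqNum = readReqNum;
  }

  @Override
  public long getReadRequestLimit() {
    return readReqNum;
  }
}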

[1/2] hbase git commit: HBASE-18873 Move protobufs to private implementation on GlobalQuotaSettings

2017-10-23 Thread elserj
Repository: hbase
Updated Branches:
  refs/heads/branch-2 1e98ce2c8 -> 3e0b90b94
  refs/heads/master b7db62c70 -> 81133f89f


HBASE-18873 Move protobufs to private implementation on GlobalQuotaSettings

A hack to "hide" the protobufs, but it's not going to be a trivial
change to remove use of protobufs entirely as they're serialized
into the hbase:quota table.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/81133f89
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/81133f89
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/81133f89

Branch: refs/heads/master
Commit: 81133f89fc9a80fbd03aff5a3b51184eeb90f130
Parents: b7db62c
Author: Josh Elser 
Authored: Wed Oct 11 18:37:42 2017 -0400
Committer: Josh Elser 
Committed: Mon Oct 23 22:37:10 2017 -0400

--
 .../hbase/quotas/QuotaSettingsFactory.java  |   2 +-
 .../hbase/quotas/GlobalQuotaSettings.java   | 290 +---
 .../hbase/quotas/GlobalQuotaSettingsImpl.java   | 332 +++
 .../hadoop/hbase/quotas/MasterQuotaManager.java |  72 ++--
 .../hbase/quotas/TestGlobalQuotaSettings.java   | 122 ---
 .../quotas/TestGlobalQuotaSettingsImpl.java | 122 +++
 6 files changed, 501 insertions(+), 439 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/81133f89/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
index 185365b..2a20c51 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
@@ -116,7 +116,7 @@ public class QuotaSettingsFactory {
 return settings;
   }
 
-  private static List<QuotaSettings> fromThrottle(final String userName, final TableName tableName,
+  protected static List<QuotaSettings> fromThrottle(final String userName, final TableName tableName,
       final String namespace, final QuotaProtos.Throttle throttle) {
     List<QuotaSettings> settings = new ArrayList<>();
     if (throttle.hasReqNum()) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/81133f89/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
index 079edf0..107523b 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettings.java
@@ -16,23 +16,12 @@
  */
 package org.apache.hadoop.hbase.quotas;
 
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Objects;
+import java.util.List;
 
-import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HBaseInterfaceAudience;
 import org.apache.hadoop.hbase.TableName;
-import 
org.apache.hadoop.hbase.quotas.QuotaSettingsFactory.QuotaGlobalsSettingsBypass;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest.Builder;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.TimedQuota;
-import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 
@@ -43,28 +32,19 @@ import org.apache.yetus.audience.InterfaceStability;
  */
 @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC})
 @InterfaceStability.Evolving
-public class GlobalQuotaSettings extends QuotaSettings {
-  private final QuotaProtos.Throttle throttleProto;
-  private final Boolean bypassGlobals;
-  private final QuotaProtos.SpaceQuota spaceProto;
+public abstract class GlobalQuotaSettings extends QuotaSettings {
 
-  protected GlobalQuotaSettings(
-  String username, TableName tableName, String namespace, 
QuotaProtos.Quotas quotas) {
-this(username, tableName, namespace,
-(quotas != null && quotas.hasThrottle() ? quotas.getThrottle() : null),
-(quotas != null && quotas.hasBypassGlobals() ? 

hbase git commit: HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()

2017-10-23 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 95ddf27da -> b4e6eae5b


HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b4e6eae5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b4e6eae5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b4e6eae5

Branch: refs/heads/branch-1.2
Commit: b4e6eae5ba1bf00e7b413df618b5c44b088ed6fb
Parents: 95ddf27
Author: tedyu 
Authored: Mon Oct 23 19:42:11 2017 -0700
Committer: tedyu 
Committed: Mon Oct 23 19:42:11 2017 -0700

--
 .../src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b4e6eae5/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 0e2860b..0e73d1c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -1598,6 +1598,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // essentially ignore and propagate the interrupt back up
 LOG.warn("Interrupted while waiting");
 interrupted = true;
+break;
   }
 }
   } finally {



hbase git commit: HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()

2017-10-23 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 21b80f4b7 -> 447154dc0


HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/447154dc
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/447154dc
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/447154dc

Branch: refs/heads/branch-1.3
Commit: 447154dc079fdf77636a341a13b7c5a9e647df01
Parents: 21b80f4
Author: tedyu 
Authored: Mon Oct 23 19:41:43 2017 -0700
Committer: tedyu 
Committed: Mon Oct 23 19:41:43 2017 -0700

--
 .../src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/447154dc/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 5d35be2..a7ed011 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -1612,6 +1612,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // essentially ignore and propagate the interrupt back up
 LOG.warn("Interrupted while waiting");
 interrupted = true;
+break;
   }
 }
   } finally {



hbase git commit: HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()

2017-10-23 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/branch-1.4 c32000763 -> 5f1bf1905


HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5f1bf190
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5f1bf190
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5f1bf190

Branch: refs/heads/branch-1.4
Commit: 5f1bf190587fd714a75dc1cc3dfdf508e67b386d
Parents: c320007
Author: tedyu 
Authored: Mon Oct 23 19:40:20 2017 -0700
Committer: tedyu 
Committed: Mon Oct 23 19:40:20 2017 -0700

--
 .../src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5f1bf190/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index b861d45..7a5720a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -1695,6 +1695,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // essentially ignore and propagate the interrupt back up
 LOG.warn("Interrupted while waiting");
 interrupted = true;
+break;
   }
 }
   } finally {



hbase git commit: HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()

2017-10-23 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/branch-1 d0629d0f1 -> e1a73b914


HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e1a73b91
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e1a73b91
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e1a73b91

Branch: refs/heads/branch-1
Commit: e1a73b9144615ad452b14ed07ceb13c7f5d19a6a
Parents: d0629d0
Author: tedyu 
Authored: Mon Oct 23 19:38:45 2017 -0700
Committer: tedyu 
Committed: Mon Oct 23 19:38:45 2017 -0700

--
 .../src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e1a73b91/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index b861d45..7a5720a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -1695,6 +1695,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // essentially ignore and propagate the interrupt back up
 LOG.warn("Interrupted while waiting");
 interrupted = true;
+break;
   }
 }
   } finally {



hbase git commit: HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()

2017-10-23 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/branch-2 c0144e200 -> 1e98ce2c8


HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1e98ce2c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1e98ce2c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1e98ce2c

Branch: refs/heads/branch-2
Commit: 1e98ce2c8eeefd4a94a6a9345c8aaaf696b7924a
Parents: c0144e2
Author: tedyu 
Authored: Mon Oct 23 19:36:45 2017 -0700
Committer: tedyu 
Committed: Mon Oct 23 19:36:45 2017 -0700

--
 .../src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/1e98ce2c/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 9022e1f..99f5c35 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -1791,6 +1791,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // essentially ignore and propagate the interrupt back up
 LOG.warn("Interrupted while waiting");
 interrupted = true;
+break;
   }
 }
   } finally {



hbase git commit: Amend HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
Repository: hbase
Updated Branches:
  refs/heads/branch-1 64328caef -> d0629d0f1


Amend HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

Fix hbase-rsgroups/pom.xml


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d0629d0f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d0629d0f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d0629d0f

Branch: refs/heads/branch-1
Commit: d0629d0f121e66a7d52caceb4ac290e879ba6eb1
Parents: 64328ca
Author: Andrew Purtell 
Authored: Mon Oct 23 19:34:35 2017 -0700
Committer: Andrew Purtell 
Committed: Mon Oct 23 19:34:35 2017 -0700

--
 hbase-rsgroup/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d0629d0f/hbase-rsgroup/pom.xml
--
diff --git a/hbase-rsgroup/pom.xml b/hbase-rsgroup/pom.xml
index ac1d6b3..5e199af 100644
--- a/hbase-rsgroup/pom.xml
+++ b/hbase-rsgroup/pom.xml
@@ -24,7 +24,7 @@
   
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.5.0-SNAPSHOT</version>
     <relativePath>..</relativePath>
   
 



hbase git commit: HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()

2017-10-23 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/master a1bc20ab5 -> b7db62c70


HBASE-19072 Missing break in catch block of InterruptedException in HRegion#waitForFlushes()


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b7db62c7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b7db62c7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b7db62c7

Branch: refs/heads/master
Commit: b7db62c702ef27b79365cfa62a8afee9042bcc6b
Parents: a1bc20a
Author: tedyu 
Authored: Mon Oct 23 19:34:11 2017 -0700
Committer: tedyu 
Committed: Mon Oct 23 19:34:11 2017 -0700

--
 .../src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b7db62c7/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 9022e1f..99f5c35 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -1791,6 +1791,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // essentially ignore and propagate the interrupt back up
 LOG.warn("Interrupted while waiting");
 interrupted = true;
+break;
   }
 }
   } finally {



[1/4] hbase git commit: HBASE-18893 remove add/delete/modify column

2017-10-23 Thread mdrob
Repository: hbase
Updated Branches:
  refs/heads/branch-2 34df2e665 -> c0144e200
  refs/heads/master 880b26d7d -> a1bc20ab5


http://git-wip-us.apache.org/repos/asf/hbase/blob/a1bc20ab/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
deleted file mode 100644
index 01de512..000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
+++ /dev/null
@@ -1,190 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hbase.master.procedure;
-
-import static org.junit.Assert.assertTrue;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.CategoryBasedTimeout;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.InvalidFamilyOperationException;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.procedure2.Procedure;
-import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
-import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
-import org.apache.hadoop.hbase.testclassification.MasterTests;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-import org.junit.rules.TestName;
-import org.junit.rules.TestRule;
-
-@Category({MasterTests.class, MediumTests.class})
-public class TestAddColumnFamilyProcedure extends TestTableDDLProcedureBase {
-  private static final Log LOG = 
LogFactory.getLog(TestAddColumnFamilyProcedure.class);
-  @Rule public final TestRule timeout = 
CategoryBasedTimeout.builder().withTimeout(this.getClass()).
-  withLookingForStuckThread(true).build();
-
-  @Rule public TestName name = new TestName();
-
-  @Test(timeout = 6)
-  public void testAddColumnFamily() throws Exception {
-final TableName tableName = TableName.valueOf(name.getMethodName());
-final String cf1 = "cf1";
-final String cf2 = "cf2";
-final HColumnDescriptor columnDescriptor1 = new HColumnDescriptor(cf1);
-final HColumnDescriptor columnDescriptor2 = new HColumnDescriptor(cf2);
-    final ProcedureExecutor<MasterProcedureEnv> procExec = getMasterProcedureExecutor();
-
-MasterProcedureTestingUtility.createTable(procExec, tableName, null, "f3");
-
-// Test 1: Add a column family online
-long procId1 = procExec.submitProcedure(
-  new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, 
columnDescriptor1));
-// Wait the completion
-ProcedureTestingUtility.waitProcedure(procExec, procId1);
-ProcedureTestingUtility.assertProcNotFailed(procExec, procId1);
-
-MasterProcedureTestingUtility.validateColumnFamilyAddition(getMaster(), 
tableName, cf1);
-
-// Test 2: Add a column family offline
-UTIL.getAdmin().disableTable(tableName);
-long procId2 = procExec.submitProcedure(
-  new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, 
columnDescriptor2));
-// Wait the completion
-ProcedureTestingUtility.waitProcedure(procExec, procId2);
-ProcedureTestingUtility.assertProcNotFailed(procExec, procId2);
-MasterProcedureTestingUtility.validateColumnFamilyAddition(getMaster(), 
tableName, cf2);
-  }
-
-  @Test(timeout=6)
-  public void testAddSameColumnFamilyTwice() throws Exception {
-final TableName tableName = TableName.valueOf(name.getMethodName());
-final String cf2 = "cf2";
-final HColumnDescriptor columnDescriptor = new HColumnDescriptor(cf2);
-
-    final ProcedureExecutor<MasterProcedureEnv> procExec = getMasterProcedureExecutor();
-
-MasterProcedureTestingUtility.createTable(procExec, tableName, null, "f1");
-
-// add the column family
-long procId1 = procExec.submitProcedure(
-  new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, 
columnDescriptor));
-// Wait the completion
-

[4/4] hbase git commit: HBASE-18893 remove add/delete/modify column

2017-10-23 Thread mdrob
HBASE-18893 remove add/delete/modify column


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c0144e20
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c0144e20
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c0144e20

Branch: refs/heads/branch-2
Commit: c0144e200d55abb96147091cc13d5291cf5aef34
Parents: 34df2e6
Author: Mike Drob 
Authored: Tue Oct 17 16:47:41 2017 -0500
Committer: Mike Drob 
Committed: Mon Oct 23 20:03:09 2017 -0500

--
 .../src/main/protobuf/MasterProcedure.proto |  46 ---
 .../hbase/coprocessor/MasterObserver.java   | 142 ---
 .../org/apache/hadoop/hbase/master/HMaster.java | 122 +++---
 .../hbase/master/MasterCoprocessorHost.java | 133 ---
 .../procedure/AddColumnFamilyProcedure.java | 358 --
 .../procedure/DeleteColumnFamilyProcedure.java  | 371 ---
 .../procedure/ModifyColumnFamilyProcedure.java  | 323 
 .../hbase/security/access/AccessController.java |  40 +-
 .../visibility/VisibilityController.java|  34 --
 .../hbase/coprocessor/TestMasterObserver.java   | 194 --
 .../procedure/TestAddColumnFamilyProcedure.java | 190 --
 .../TestDeleteColumnFamilyProcedure.java| 211 ---
 .../TestModifyColumnFamilyProcedure.java| 183 -
 .../security/access/TestAccessController.java   |  51 ---
 .../access/TestWithDisabledAuthorization.java   |  32 --
 15 files changed, 44 insertions(+), 2386 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c0144e20/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
--
diff --git a/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto 
b/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
index 626530f..af9caef 100644
--- a/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
@@ -148,52 +148,6 @@ message DeleteNamespaceStateData {
   optional NamespaceDescriptor namespace_descriptor = 2;
 }
 
-enum AddColumnFamilyState {
-  ADD_COLUMN_FAMILY_PREPARE = 1;
-  ADD_COLUMN_FAMILY_PRE_OPERATION = 2;
-  ADD_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  ADD_COLUMN_FAMILY_POST_OPERATION = 4;
-  ADD_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 5;
-}
-
-message AddColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required ColumnFamilySchema columnfamily_schema = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
-enum ModifyColumnFamilyState {
-  MODIFY_COLUMN_FAMILY_PREPARE = 1;
-  MODIFY_COLUMN_FAMILY_PRE_OPERATION = 2;
-  MODIFY_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  MODIFY_COLUMN_FAMILY_POST_OPERATION = 4;
-  MODIFY_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 5;
-}
-
-message ModifyColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required ColumnFamilySchema columnfamily_schema = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
-enum DeleteColumnFamilyState {
-  DELETE_COLUMN_FAMILY_PREPARE = 1;
-  DELETE_COLUMN_FAMILY_PRE_OPERATION = 2;
-  DELETE_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  DELETE_COLUMN_FAMILY_DELETE_FS_LAYOUT = 4;
-  DELETE_COLUMN_FAMILY_POST_OPERATION = 5;
-  DELETE_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 6;
-}
-
-message DeleteColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required bytes columnfamily_name = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
 enum EnableTableState {
   ENABLE_TABLE_PREPARE = 1;
   ENABLE_TABLE_PRE_OPERATION = 2;

http://git-wip-us.apache.org/repos/asf/hbase/blob/c0144e20/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
index 29f0f9f..397ec8a 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
@@ -271,148 +271,6 @@ public interface MasterObserver {
   final TableDescriptor htd) throws IOException {}
 
   /**
-   * Called prior to adding a new column family to the table.  Called as part of
-   * add column RPC call.
-   *
-   * @param ctx the environment to interact with the framework and master
-   * @param tableName the name of the table
-   * @param columnFamily the ColumnFamilyDescriptor
-   */
-  default void preAddColumnFamily(final 

[3/4] hbase git commit: HBASE-18893 remove add/delete/modify column

2017-10-23 Thread mdrob
http://git-wip-us.apache.org/repos/asf/hbase/blob/c0144e20/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
deleted file mode 100644
index 01de512..0000000
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
+++ /dev/null
@@ -1,190 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hbase.master.procedure;
-
-import static org.junit.Assert.assertTrue;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.CategoryBasedTimeout;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.InvalidFamilyOperationException;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.procedure2.Procedure;
-import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
-import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
-import org.apache.hadoop.hbase.testclassification.MasterTests;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-import org.junit.rules.TestName;
-import org.junit.rules.TestRule;
-
-@Category({MasterTests.class, MediumTests.class})
-public class TestAddColumnFamilyProcedure extends TestTableDDLProcedureBase {
-  private static final Log LOG = LogFactory.getLog(TestAddColumnFamilyProcedure.class);
-  @Rule public final TestRule timeout = CategoryBasedTimeout.builder().withTimeout(this.getClass()).
-      withLookingForStuckThread(true).build();
-
-  @Rule public TestName name = new TestName();
-
-  @Test(timeout = 60000)
-  public void testAddColumnFamily() throws Exception {
-    final TableName tableName = TableName.valueOf(name.getMethodName());
-    final String cf1 = "cf1";
-    final String cf2 = "cf2";
-    final HColumnDescriptor columnDescriptor1 = new HColumnDescriptor(cf1);
-    final HColumnDescriptor columnDescriptor2 = new HColumnDescriptor(cf2);
-    final ProcedureExecutor<MasterProcedureEnv> procExec = getMasterProcedureExecutor();
-
-    MasterProcedureTestingUtility.createTable(procExec, tableName, null, "f3");
-
-    // Test 1: Add a column family online
-    long procId1 = procExec.submitProcedure(
-      new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, columnDescriptor1));
-    // Wait the completion
-    ProcedureTestingUtility.waitProcedure(procExec, procId1);
-    ProcedureTestingUtility.assertProcNotFailed(procExec, procId1);
-
-    MasterProcedureTestingUtility.validateColumnFamilyAddition(getMaster(), tableName, cf1);
-
-    // Test 2: Add a column family offline
-    UTIL.getAdmin().disableTable(tableName);
-    long procId2 = procExec.submitProcedure(
-      new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, columnDescriptor2));
-    // Wait the completion
-    ProcedureTestingUtility.waitProcedure(procExec, procId2);
-    ProcedureTestingUtility.assertProcNotFailed(procExec, procId2);
-    MasterProcedureTestingUtility.validateColumnFamilyAddition(getMaster(), tableName, cf2);
-  }
-
-  @Test(timeout=60000)
-  public void testAddSameColumnFamilyTwice() throws Exception {
-    final TableName tableName = TableName.valueOf(name.getMethodName());
-    final String cf2 = "cf2";
-    final HColumnDescriptor columnDescriptor = new HColumnDescriptor(cf2);
-
-    final ProcedureExecutor<MasterProcedureEnv> procExec = getMasterProcedureExecutor();
-
-    MasterProcedureTestingUtility.createTable(procExec, tableName, null, "f1");
-
-    // add the column family
-    long procId1 = procExec.submitProcedure(
-      new AddColumnFamilyProcedure(procExec.getEnvironment(), tableName, columnDescriptor));
-    // Wait the completion
-    ProcedureTestingUtility.waitProcedure(procExec, procId1);
-    ProcedureTestingUtility.assertProcNotFailed(procExec, procId1);

[2/4] hbase git commit: HBASE-18893 remove add/delete/modify column

2017-10-23 Thread mdrob
HBASE-18893 remove add/delete/modify column


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a1bc20ab
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a1bc20ab
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a1bc20ab

Branch: refs/heads/master
Commit: a1bc20ab5886acd65cc2b693eccf8e736d373b6b
Parents: 880b26d
Author: Mike Drob 
Authored: Tue Oct 17 16:47:41 2017 -0500
Committer: Mike Drob 
Committed: Mon Oct 23 20:02:25 2017 -0500

--
 .../src/main/protobuf/MasterProcedure.proto |  46 ---
 .../hbase/coprocessor/MasterObserver.java   | 142 ---
 .../org/apache/hadoop/hbase/master/HMaster.java | 122 +++---
 .../hbase/master/MasterCoprocessorHost.java | 133 ---
 .../procedure/AddColumnFamilyProcedure.java | 358 --
 .../procedure/DeleteColumnFamilyProcedure.java  | 371 ---
 .../procedure/ModifyColumnFamilyProcedure.java  | 323 
 .../hbase/security/access/AccessController.java |  40 +-
 .../visibility/VisibilityController.java|  34 --
 .../hbase/coprocessor/TestMasterObserver.java   | 194 --
 .../procedure/TestAddColumnFamilyProcedure.java | 190 --
 .../TestDeleteColumnFamilyProcedure.java| 211 ---
 .../TestModifyColumnFamilyProcedure.java| 183 -
 .../security/access/TestAccessController.java   |  51 ---
 .../access/TestWithDisabledAuthorization.java   |  32 --
 15 files changed, 44 insertions(+), 2386 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a1bc20ab/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
--
diff --git a/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto b/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
index 626530f..af9caef 100644
--- a/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
@@ -148,52 +148,6 @@ message DeleteNamespaceStateData {
   optional NamespaceDescriptor namespace_descriptor = 2;
 }
 
-enum AddColumnFamilyState {
-  ADD_COLUMN_FAMILY_PREPARE = 1;
-  ADD_COLUMN_FAMILY_PRE_OPERATION = 2;
-  ADD_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  ADD_COLUMN_FAMILY_POST_OPERATION = 4;
-  ADD_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 5;
-}
-
-message AddColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required ColumnFamilySchema columnfamily_schema = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
-enum ModifyColumnFamilyState {
-  MODIFY_COLUMN_FAMILY_PREPARE = 1;
-  MODIFY_COLUMN_FAMILY_PRE_OPERATION = 2;
-  MODIFY_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  MODIFY_COLUMN_FAMILY_POST_OPERATION = 4;
-  MODIFY_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 5;
-}
-
-message ModifyColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required ColumnFamilySchema columnfamily_schema = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
-enum DeleteColumnFamilyState {
-  DELETE_COLUMN_FAMILY_PREPARE = 1;
-  DELETE_COLUMN_FAMILY_PRE_OPERATION = 2;
-  DELETE_COLUMN_FAMILY_UPDATE_TABLE_DESCRIPTOR = 3;
-  DELETE_COLUMN_FAMILY_DELETE_FS_LAYOUT = 4;
-  DELETE_COLUMN_FAMILY_POST_OPERATION = 5;
-  DELETE_COLUMN_FAMILY_REOPEN_ALL_REGIONS = 6;
-}
-
-message DeleteColumnFamilyStateData {
-  required UserInformation user_info = 1;
-  required TableName table_name = 2;
-  required bytes columnfamily_name = 3;
-  optional TableSchema unmodified_table_schema = 4;
-}
-
 enum EnableTableState {
   ENABLE_TABLE_PREPARE = 1;
   ENABLE_TABLE_PRE_OPERATION = 2;

http://git-wip-us.apache.org/repos/asf/hbase/blob/a1bc20ab/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
index 29f0f9f..397ec8a 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
@@ -271,148 +271,6 @@ public interface MasterObserver {
   final TableDescriptor htd) throws IOException {}
 
   /**
-   * Called prior to adding a new column family to the table.  Called as part of
-   * add column RPC call.
-   *
-   * @param ctx the environment to interact with the framework and master
-   * @param tableName the name of the table
-   * @param columnFamily the ColumnFamilyDescriptor
-   */
-  default void preAddColumnFamily(final 

[04/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/c3200076/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
--
diff --git a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
new file mode 100644
index 0000000..c4f5952
--- /dev/null
+++ b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
@@ -0,0 +1,1049 @@
+/**
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rsgroup;
+
+import com.google.common.collect.Sets;
+
+import com.google.protobuf.RpcCallback;
+import com.google.protobuf.RpcController;
+import com.google.protobuf.Service;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.Coprocessor;
+import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.ProcedureInfo;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin.MasterSwitchType;
+import org.apache.hadoop.hbase.constraint.ConstraintException;
+import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
+import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.MasterObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.master.RegionPlan;
+import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv;
+import org.apache.hadoop.hbase.net.Address;
+import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.protobuf.ResponseConverter;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription;
+import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.AddRSGroupRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.AddRSGroupResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.BalanceRSGroupRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.BalanceRSGroupResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoOfServerRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoOfServerResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoOfTableRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoOfTableResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListRSGroupInfosRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListRSGroupInfosResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersAndTablesRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersAndTablesResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveTablesRequest;
+import 

[11/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/64328cae/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
--
diff --git a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
new file mode 100644
index 0000000..c4f5952
--- /dev/null
+++ b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
@@ -0,0 +1,1049 @@
+/**
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rsgroup;
+
+import com.google.common.collect.Sets;
+
+import com.google.protobuf.RpcCallback;
+import com.google.protobuf.RpcController;
+import com.google.protobuf.Service;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.Coprocessor;
+import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.ProcedureInfo;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin.MasterSwitchType;
+import org.apache.hadoop.hbase.constraint.ConstraintException;
+import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
+import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.MasterObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.master.RegionPlan;
+import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv;
+import org.apache.hadoop.hbase.net.Address;
+import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.protobuf.ResponseConverter;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription;
+import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.AddRSGroupRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.AddRSGroupResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.BalanceRSGroupRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.BalanceRSGroupResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoOfServerRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoOfServerResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoOfTableRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoOfTableResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListRSGroupInfosRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListRSGroupInfosResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersAndTablesRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersAndTablesResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersRequest;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersResponse;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveTablesRequest;
+import 

[03/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/c3200076/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java
--
diff --git a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java
new file mode 100644
index 0000000..eec03ce
--- /dev/null
+++ b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java
@@ -0,0 +1,795 @@
+/**
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rsgroup;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+import com.google.common.collect.Sets;
+import com.google.protobuf.ServiceException;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.Coprocessor;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MetaTableAccessor;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.TableStateManager;
+import org.apache.hadoop.hbase.client.ClusterConnection;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.constraint.ConstraintException;
+import org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint;
+import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.master.ServerListener;
+import org.apache.hadoop.hbase.master.procedure.CreateTableProcedure;
+import org.apache.hadoop.hbase.master.procedure.ProcedurePrepareLatch;
+import org.apache.hadoop.hbase.net.Address;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.protobuf.RequestConverter;
+import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
+import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupProtos;
+import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos;
+import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos.MutateRowsRequest;
+import org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy;
+import org.apache.hadoop.hbase.security.access.AccessControlLists;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ModifyRegionUtils;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * This is an implementation of {@link RSGroupInfoManager} which makes
+ * use of an HBase table as the persistence store for the group information.
+ * It also makes use of zookeeper to store group information needed
+ * for bootstrapping during offline mode.
+ */
+public class RSGroupInfoManagerImpl implements RSGroupInfoManager, ServerListener {
+  private static final Log LOG = LogFactory.getLog(RSGroupInfoManagerImpl.class);
+
+  /** Table descriptor for hbase:rsgroup catalog table */
+  private final static 
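
The javadoc above captures the design: the group map's source of truth is an HBase catalog table, with a ZooKeeper mirror consulted while the cluster is bootstrapping and the table is not yet readable. A simplified, hypothetical sketch of that lookup order (the column family, qualifier, and znode path are illustrative placeholders, not the class's real constants):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.protobuf.generated.RSGroupProtos;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

final class GroupLookupSketch {
  static RSGroupProtos.RSGroupInfo readGroup(Connection conn, ZooKeeperWatcher zkw,
      String group) throws Exception {
    // Primary store: one row per group in the rsgroup catalog table.
    try (Table t = conn.getTable(TableName.valueOf("hbase:rsgroup"))) {
      Result r = t.get(new Get(Bytes.toBytes(group)));
      if (!r.isEmpty()) {
        return RSGroupProtos.RSGroupInfo.parseFrom(
            r.getValue(Bytes.toBytes("m"), Bytes.toBytes("i"))); // placeholder family/qualifier
      }
    }
    // Offline-bootstrap fallback: the same protobuf mirrored under a znode.
    byte[] data = ZKUtil.getData(zkw, "/hbase/rsgroup/" + group); // placeholder path
    return data == null ? null : RSGroupProtos.RSGroupInfo.parseFrom(data);
  }
}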

[07/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c3200076
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c3200076
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c3200076

Branch: refs/heads/branch-1.4
Commit: c320007638fe57f858aeac974cdd6bbd6b9dd5eb
Parents: 737b5a5
Author: Andrew Purtell 
Authored: Mon Oct 23 14:15:06 2017 -0700
Committer: Andrew Purtell 
Committed: Mon Oct 23 17:10:23 2017 -0700

--
 .../org/apache/hadoop/hbase/ServerName.java |25 +-
 .../org/apache/hadoop/hbase/net/Address.java|89 +
 hbase-it/pom.xml|41 +
 .../hbase/rsgroup/IntegrationTestRSGroup.java   |99 +
 hbase-protocol/pom.xml  | 2 +
 .../hbase/protobuf/generated/ClientProtos.java  |16 +-
 .../protobuf/generated/RSGroupAdminProtos.java  | 13571 +
 .../hbase/protobuf/generated/RSGroupProtos.java |  1332 ++
 hbase-protocol/src/main/protobuf/RSGroup.proto  |35 +
 .../src/main/protobuf/RSGroupAdmin.proto|   149 +
 hbase-rsgroup/pom.xml   |   278 +
 .../hadoop/hbase/rsgroup/RSGroupAdmin.java  |92 +
 .../hbase/rsgroup/RSGroupAdminClient.java   |   212 +
 .../hbase/rsgroup/RSGroupAdminEndpoint.java |  1049 ++
 .../hbase/rsgroup/RSGroupAdminServer.java   |   516 +
 .../hbase/rsgroup/RSGroupBasedLoadBalancer.java |   431 +
 .../hadoop/hbase/rsgroup/RSGroupInfo.java   |   190 +
 .../hbase/rsgroup/RSGroupInfoManager.java   |   116 +
 .../hbase/rsgroup/RSGroupInfoManagerImpl.java   |   795 +
 .../hbase/rsgroup/RSGroupProtobufUtil.java  |61 +
 .../hadoop/hbase/rsgroup/RSGroupSerDe.java  |88 +
 .../hbase/rsgroup/RSGroupableBalancer.java  |32 +
 .../balancer/TestRSGroupBasedLoadBalancer.java  |   573 +
 .../hadoop/hbase/rsgroup/TestRSGroups.java  |   300 +
 .../hadoop/hbase/rsgroup/TestRSGroupsBase.java  |   815 +
 .../hbase/rsgroup/TestRSGroupsOfflineMode.java  |   187 +
 .../rsgroup/VerifyingRSGroupAdminClient.java|   155 +
 .../hbase/tmpl/master/MasterStatusTmpl.jamon| 2 +
 .../apache/hadoop/hbase/LocalHBaseCluster.java  | 3 +
 .../BaseMasterAndRegionObserver.java|62 +
 .../hbase/coprocessor/BaseMasterObserver.java   |63 +
 .../hbase/coprocessor/MasterObserver.java   |   113 +
 .../hadoop/hbase/master/AssignmentManager.java  |16 +-
 .../org/apache/hadoop/hbase/master/HMaster.java | 5 +
 .../hadoop/hbase/master/LoadBalancer.java   | 3 +
 .../hbase/master/MasterCoprocessorHost.java |   160 +
 .../hadoop/hbase/master/MasterServices.java | 5 +
 .../hbase/security/access/AccessController.java |37 +
 .../hbase/coprocessor/TestMasterObserver.java   |61 +
 .../hbase/master/MockNoopMasterServices.java| 5 +
 .../master/TestAssignmentManagerOnCluster.java  |   127 +-
 .../hadoop/hbase/master/TestCatalogJanitor.java | 3 +
 .../hbase/master/TestMasterStatusServlet.java   |12 +-
 .../normalizer/TestSimpleRegionNormalizer.java  | 2 +-
 .../security/access/TestAccessController.java   |75 +
 hbase-shell/pom.xml |35 +
 hbase-shell/src/main/ruby/hbase.rb  | 1 +
 hbase-shell/src/main/ruby/hbase/hbase.rb| 4 +
 .../src/main/ruby/hbase/rsgroup_admin.rb|   164 +
 hbase-shell/src/main/ruby/shell.rb  |22 +
 hbase-shell/src/main/ruby/shell/commands.rb | 4 +
 .../src/main/ruby/shell/commands/add_rsgroup.rb |39 +
 .../main/ruby/shell/commands/balance_rsgroup.rb |37 +
 .../src/main/ruby/shell/commands/get_rsgroup.rb |43 +
 .../ruby/shell/commands/get_server_rsgroup.rb   |39 +
 .../ruby/shell/commands/get_table_rsgroup.rb|40 +
 .../main/ruby/shell/commands/list_rsgroups.rb   |49 +
 .../ruby/shell/commands/move_servers_rsgroup.rb |37 +
 .../commands/move_servers_tables_rsgroup.rb |37 +
 .../ruby/shell/commands/move_tables_rsgroup.rb  |37 +
 .../main/ruby/shell/commands/remove_rsgroup.rb  |37 +
 .../apache/hadoop/hbase/client/TestShell.java   | 2 +-
 .../hbase/client/rsgroup/TestShellRSGroups.java |   111 +
 .../src/test/ruby/shell/rsgroup_shell_test.rb   |96 +
 hbase-shell/src/test/ruby/test_helper.rb| 4 +
 pom.xml |23 +
 66 files changed, 22843 insertions(+), 21 deletions(-)
--
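
The module's admin surface is the RSGroupAdmin API exercised by the tests later in this series (addRSGroup, moveServers, moveTables, getRSGroupInfo). A minimal sketch of carving servers and a table out into a dedicated group; the group and table names are placeholders, and the RSGroupAdmin instance is assumed to come from the deployed RSGroupAdminClient:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupUsageSketch {
  static void carveOutGroup(RSGroupAdmin gAdmin) throws Exception {
    gAdmin.addRSGroup("appGroup"); // placeholder group name

    // Peel one server off the default group into the new one.
    RSGroupInfo defaultInfo = gAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
    Set<Address> servers = new HashSet<Address>();
    servers.add(defaultInfo.getServers().iterator().next());
    gAdmin.moveServers(servers, "appGroup");

    // Pin a table to the group: its regions are balanced only across
    // the group's servers from now on.
    Set<TableName> tables = new HashSet<TableName>();
    tables.add(TableName.valueOf("app_table")); // placeholder table name
    gAdmin.moveTables(tables, "appGroup");
  }
}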


http://git-wip-us.apache.org/repos/asf/hbase/blob/c3200076/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
--
diff --git 

[13/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/64328cae/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupAdminProtos.java
--
diff --git a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupAdminProtos.java b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupAdminProtos.java
new file mode 100644
index 0000000..3d2285c
--- /dev/null
+++ b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupAdminProtos.java
@@ -0,0 +1,13571 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: RSGroupAdmin.proto
+
+package org.apache.hadoop.hbase.protobuf.generated;
+
+public final class RSGroupAdminProtos {
+  private RSGroupAdminProtos() {}
+  public static void registerAllExtensions(
+  com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public interface ListTablesOfRSGroupRequestOrBuilder
+  extends com.google.protobuf.MessageOrBuilder {
+
+// required string r_s_group_name = 1;
+/**
+ * required string r_s_group_name = 1;
+ */
+boolean hasRSGroupName();
+/**
+ * required string r_s_group_name = 1;
+ */
+java.lang.String getRSGroupName();
+/**
+ * required string r_s_group_name = 1;
+ */
+com.google.protobuf.ByteString
+getRSGroupNameBytes();
+  }
+  /**
+   * Protobuf type {@code hbase.pb.ListTablesOfRSGroupRequest}
+   */
+  public static final class ListTablesOfRSGroupRequest extends
+      com.google.protobuf.GeneratedMessage
+      implements ListTablesOfRSGroupRequestOrBuilder {
+    // Use ListTablesOfRSGroupRequest.newBuilder() to construct.
+    private ListTablesOfRSGroupRequest(com.google.protobuf.GeneratedMessage.Builder<?> builder) {
+      super(builder);
+      this.unknownFields = builder.getUnknownFields();
+    }
+    private ListTablesOfRSGroupRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+
+    private static final ListTablesOfRSGroupRequest defaultInstance;
+    public static ListTablesOfRSGroupRequest getDefaultInstance() {
+      return defaultInstance;
+    }
+
+    public ListTablesOfRSGroupRequest getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+
+    private final com.google.protobuf.UnknownFieldSet unknownFields;
+    @java.lang.Override
+    public final com.google.protobuf.UnknownFieldSet
+        getUnknownFields() {
+      return this.unknownFields;
+    }
+    private ListTablesOfRSGroupRequest(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      initFields();
+      int mutable_bitField0_ = 0;
+      com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder();
+      try {
+        boolean done = false;
+        while (!done) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              done = true;
+              break;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                done = true;
+              }
+              break;
+            }
+            case 10: {
+              bitField0_ |= 0x00000001;
+              rSGroupName_ = input.readBytes();
+              break;
+            }
+          }
+        }
+      } catch (com.google.protobuf.InvalidProtocolBufferException e) {
+        throw e.setUnfinishedMessage(this);
+      } catch (java.io.IOException e) {
+        throw new com.google.protobuf.InvalidProtocolBufferException(
+            e.getMessage()).setUnfinishedMessage(this);
+      } finally {
+        this.unknownFields = unknownFields.build();
+        makeExtensionsImmutable();
+      }
+    }
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.internal_static_hbase_pb_ListTablesOfRSGroupRequest_descriptor;
+    }
+
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.internal_static_hbase_pb_ListTablesOfRSGroupRequest_fieldAccessorTable
+          .ensureFieldAccessorsInitialized(
+              org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListTablesOfRSGroupRequest.class,
+              org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListTablesOfRSGroupRequest.Builder.class);
+    }
+
+    public static com.google.protobuf.Parser<ListTablesOfRSGroupRequest> PARSER =
+        new com.google.protobuf.AbstractParser<ListTablesOfRSGroupRequest>() {
+      public ListTablesOfRSGroupRequest parsePartialFrom(
+  

[02/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/c3200076/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java
--
diff --git a/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java b/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java
new file mode 100644
index 0000000..0db0fea
--- /dev/null
+++ b/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java
@@ -0,0 +1,815 @@
+/**
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rsgroup;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HBaseCluster;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.RegionLoad;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.Waiter;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.constraint.ConstraintException;
+import org.apache.hadoop.hbase.net.Address;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.protobuf.generated.AdminProtos;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Assert;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.security.SecureRandom;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public abstract class TestRSGroupsBase {
+  protected static final Log LOG = LogFactory.getLog(TestRSGroupsBase.class);
+
+  //shared
+  protected final static String groupPrefix = "Group";
+  protected final static String tablePrefix = "Group";
+  protected final static SecureRandom rand = new SecureRandom();
+
+  //shared, cluster type specific
+  protected static HBaseTestingUtility TEST_UTIL;
+  protected static HBaseAdmin admin;
+  protected static HBaseCluster cluster;
+  protected static RSGroupAdmin rsGroupAdmin;
+
+  public final static long WAIT_TIMEOUT = 60000*5;
+  public final static int NUM_SLAVES_BASE = 4; //number of slaves for the smallest cluster
+
+
+
+  protected RSGroupInfo addGroup(RSGroupAdmin gAdmin, String groupName,
+                                 int serverCount) throws IOException, InterruptedException {
+    RSGroupInfo defaultInfo = gAdmin
+        .getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
+    assertTrue(defaultInfo != null);
+    assertTrue(defaultInfo.getServers().size() >= serverCount);
+    gAdmin.addRSGroup(groupName);
+
+    Set<Address> set = new HashSet<Address>();
+    for(Address server: defaultInfo.getServers()) {
+      if(set.size() == serverCount) {
+        break;
+      }
+      set.add(server);
+    }
+    gAdmin.moveServers(set, groupName);
+    RSGroupInfo result = gAdmin.getRSGroupInfo(groupName);
+    assertTrue(result.getServers().size() >= serverCount);
+    return result;
+  }
+
+  static void removeGroup(RSGroupAdminClient groupAdmin, String groupName)
+      throws IOException {
+    RSGroupInfo info = groupAdmin.getRSGroupInfo(groupName);
+    groupAdmin.moveTables(info.getTables(), RSGroupInfo.DEFAULT_GROUP);
+    groupAdmin.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP);
+    groupAdmin.removeRSGroup(groupName);
+  }
+
+  protected void deleteTableIfNecessary() throws IOException {
+    for (HTableDescriptor desc : TEST_UTIL.getHBaseAdmin().listTables(tablePrefix+".*")) {
+  

[14/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/64328cae
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/64328cae
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/64328cae

Branch: refs/heads/branch-1
Commit: 64328caef0bb712bb69d0241b4b8b3474a82702c
Parents: 795f48c
Author: Andrew Purtell 
Authored: Mon Oct 23 14:15:06 2017 -0700
Committer: Andrew Purtell 
Committed: Mon Oct 23 17:10:33 2017 -0700

--
 .../org/apache/hadoop/hbase/ServerName.java |25 +-
 .../org/apache/hadoop/hbase/net/Address.java|89 +
 hbase-it/pom.xml|41 +
 .../hbase/rsgroup/IntegrationTestRSGroup.java   |99 +
 hbase-protocol/pom.xml  | 2 +
 .../hbase/protobuf/generated/ClientProtos.java  |16 +-
 .../protobuf/generated/RSGroupAdminProtos.java  | 13571 +
 .../hbase/protobuf/generated/RSGroupProtos.java |  1332 ++
 hbase-protocol/src/main/protobuf/RSGroup.proto  |35 +
 .../src/main/protobuf/RSGroupAdmin.proto|   149 +
 hbase-rsgroup/pom.xml   |   278 +
 .../hadoop/hbase/rsgroup/RSGroupAdmin.java  |92 +
 .../hbase/rsgroup/RSGroupAdminClient.java   |   212 +
 .../hbase/rsgroup/RSGroupAdminEndpoint.java |  1049 ++
 .../hbase/rsgroup/RSGroupAdminServer.java   |   516 +
 .../hbase/rsgroup/RSGroupBasedLoadBalancer.java |   431 +
 .../hadoop/hbase/rsgroup/RSGroupInfo.java   |   190 +
 .../hbase/rsgroup/RSGroupInfoManager.java   |   116 +
 .../hbase/rsgroup/RSGroupInfoManagerImpl.java   |   795 +
 .../hbase/rsgroup/RSGroupProtobufUtil.java  |61 +
 .../hadoop/hbase/rsgroup/RSGroupSerDe.java  |88 +
 .../hbase/rsgroup/RSGroupableBalancer.java  |32 +
 .../balancer/TestRSGroupBasedLoadBalancer.java  |   573 +
 .../hadoop/hbase/rsgroup/TestRSGroups.java  |   300 +
 .../hadoop/hbase/rsgroup/TestRSGroupsBase.java  |   815 +
 .../hbase/rsgroup/TestRSGroupsOfflineMode.java  |   187 +
 .../rsgroup/VerifyingRSGroupAdminClient.java|   155 +
 .../hbase/tmpl/master/MasterStatusTmpl.jamon| 2 +
 .../apache/hadoop/hbase/LocalHBaseCluster.java  | 3 +
 .../BaseMasterAndRegionObserver.java|62 +
 .../hbase/coprocessor/BaseMasterObserver.java   |63 +
 .../hbase/coprocessor/MasterObserver.java   |   113 +
 .../hadoop/hbase/master/AssignmentManager.java  |16 +-
 .../org/apache/hadoop/hbase/master/HMaster.java | 5 +
 .../hadoop/hbase/master/LoadBalancer.java   | 3 +
 .../hbase/master/MasterCoprocessorHost.java |   160 +
 .../hadoop/hbase/master/MasterServices.java | 5 +
 .../hbase/security/access/AccessController.java |37 +
 .../hbase/coprocessor/TestMasterObserver.java   |61 +
 .../hbase/master/MockNoopMasterServices.java| 5 +
 .../master/TestAssignmentManagerOnCluster.java  |   127 +-
 .../hadoop/hbase/master/TestCatalogJanitor.java | 3 +
 .../hbase/master/TestMasterStatusServlet.java   |12 +-
 .../normalizer/TestSimpleRegionNormalizer.java  | 2 +-
 .../security/access/TestAccessController.java   |75 +
 hbase-shell/pom.xml |35 +
 hbase-shell/src/main/ruby/hbase.rb  | 1 +
 hbase-shell/src/main/ruby/hbase/hbase.rb| 4 +
 .../src/main/ruby/hbase/rsgroup_admin.rb|   164 +
 hbase-shell/src/main/ruby/shell.rb  |22 +
 hbase-shell/src/main/ruby/shell/commands.rb | 4 +
 .../src/main/ruby/shell/commands/add_rsgroup.rb |39 +
 .../main/ruby/shell/commands/balance_rsgroup.rb |37 +
 .../src/main/ruby/shell/commands/get_rsgroup.rb |43 +
 .../ruby/shell/commands/get_server_rsgroup.rb   |39 +
 .../ruby/shell/commands/get_table_rsgroup.rb|40 +
 .../main/ruby/shell/commands/list_rsgroups.rb   |49 +
 .../ruby/shell/commands/move_servers_rsgroup.rb |37 +
 .../commands/move_servers_tables_rsgroup.rb |37 +
 .../ruby/shell/commands/move_tables_rsgroup.rb  |37 +
 .../main/ruby/shell/commands/remove_rsgroup.rb  |37 +
 .../apache/hadoop/hbase/client/TestShell.java   | 2 +-
 .../hbase/client/rsgroup/TestShellRSGroups.java |   111 +
 .../src/test/ruby/shell/rsgroup_shell_test.rb   |96 +
 hbase-shell/src/test/ruby/test_helper.rb| 4 +
 pom.xml |23 +
 66 files changed, 22843 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/64328cae/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
--
diff --git 

[12/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/64328cae/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupProtos.java
--
diff --git a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupProtos.java b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupProtos.java
new file mode 100644
index 0000000..5f5eb3b
--- /dev/null
+++ b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupProtos.java
@@ -0,0 +1,1332 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: RSGroup.proto
+
+package org.apache.hadoop.hbase.protobuf.generated;
+
+public final class RSGroupProtos {
+  private RSGroupProtos() {}
+  public static void registerAllExtensions(
+  com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public interface RSGroupInfoOrBuilder
+  extends com.google.protobuf.MessageOrBuilder {
+
+// required string name = 1;
+/**
+ * required string name = 1;
+ */
+boolean hasName();
+/**
+ * required string name = 1;
+ */
+java.lang.String getName();
+/**
+ * required string name = 1;
+ */
+com.google.protobuf.ByteString
+getNameBytes();
+
+// repeated .hbase.pb.ServerName servers = 4;
+/**
+ * repeated .hbase.pb.ServerName servers = 4;
+ */
+    java.util.List<org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName>
+        getServersList();
+    /**
+     * repeated .hbase.pb.ServerName servers = 4;
+     */
+    org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName getServers(int index);
+    /**
+     * repeated .hbase.pb.ServerName servers = 4;
+     */
+    int getServersCount();
+    /**
+     * repeated .hbase.pb.ServerName servers = 4;
+     */
+    java.util.List<? extends org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder>
+        getServersOrBuilderList();
+    /**
+     * repeated .hbase.pb.ServerName servers = 4;
+     */
+    org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder getServersOrBuilder(
+        int index);
+
+    // repeated .hbase.pb.TableName tables = 3;
+    /**
+     * repeated .hbase.pb.TableName tables = 3;
+     */
+    java.util.List<org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableName>
+        getTablesList();
+    /**
+     * repeated .hbase.pb.TableName tables = 3;
+     */
+    org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableName getTables(int index);
+    /**
+     * repeated .hbase.pb.TableName tables = 3;
+     */
+    int getTablesCount();
+    /**
+     * repeated .hbase.pb.TableName tables = 3;
+     */
+    java.util.List<? extends org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableNameOrBuilder>
+        getTablesOrBuilderList();
+    /**
+     * repeated .hbase.pb.TableName tables = 3;
+     */
+    org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableNameOrBuilder getTablesOrBuilder(
+        int index);
+  }
+  /**
+   * Protobuf type {@code hbase.pb.RSGroupInfo}
+   */
+  public static final class RSGroupInfo extends
+      com.google.protobuf.GeneratedMessage
+      implements RSGroupInfoOrBuilder {
+    // Use RSGroupInfo.newBuilder() to construct.
+    private RSGroupInfo(com.google.protobuf.GeneratedMessage.Builder<?> builder) {
+      super(builder);
+      this.unknownFields = builder.getUnknownFields();
+    }
+    private RSGroupInfo(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+
+    private static final RSGroupInfo defaultInstance;
+    public static RSGroupInfo getDefaultInstance() {
+      return defaultInstance;
+    }
+
+    public RSGroupInfo getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+
+    private final com.google.protobuf.UnknownFieldSet unknownFields;
+    @java.lang.Override
+    public final com.google.protobuf.UnknownFieldSet
+        getUnknownFields() {
+      return this.unknownFields;
+    }
+    private RSGroupInfo(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      initFields();
+      int mutable_bitField0_ = 0;
+      com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder();
+      try {
+        boolean done = false;
+        while (!done) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              done = true;
+              break;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                done = true;
+              }
+              break;
+            }
+            case 10: {
+              bitField0_ |= 0x00000001;
+              name_ = input.readBytes();
+              break;
+            }
+            case 26: {
+              if (!((mutable_bitField0_ & 0x00000004) == 0x00000004)) {
+                tables_ = new java.util.ArrayList<org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableName>();
+                mutable_bitField0_ |= 0x00000004;

[01/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
Repository: hbase
Updated Branches:
  refs/heads/branch-1 795f48c31 -> 64328caef
  refs/heads/branch-1.4 737b5a5f9 -> c32000763


http://git-wip-us.apache.org/repos/asf/hbase/blob/c3200076/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
index 4843155..78b23c0 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
@@ -27,12 +27,16 @@ import static org.junit.Assert.fail;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -610,7 +614,7 @@ public class TestAssignmentManagerOnCluster {
 desc.getTableName(), Bytes.toBytes("A"), Bytes.toBytes("Z"));
   MetaTableAccessor.addRegionToMeta(meta, hri);
 
-  MyLoadBalancer.controledRegion = hri.getEncodedName();
+  MyLoadBalancer.controledRegion = hri;
 
   HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
   AssignmentManager am = master.getAssignmentManager();
@@ -634,6 +638,105 @@ public class TestAssignmentManagerOnCluster {
   }
 
   /**
+   * This tests that round-robin assignment fails when the balancer returns no bulk plan
+   */
+  @Test (timeout=60000)
+  public void testRoundRobinAssignmentFailed() throws Exception {
+TableName tableName = TableName.valueOf("testRoundRobinAssignmentFailed");
+try {
+  HTableDescriptor desc = new HTableDescriptor(tableName);
+  desc.addFamily(new HColumnDescriptor(FAMILY));
+  admin.createTable(desc);
+
+  Table meta = admin.getConnection().getTable(TableName.META_TABLE_NAME);
+  HRegionInfo hri = new HRegionInfo(
+desc.getTableName(), Bytes.toBytes("A"), Bytes.toBytes("Z"));
+  MetaTableAccessor.addRegionToMeta(meta, hri);
+
+  MyLoadBalancer.controledRegion = hri;
+
+  HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+  AssignmentManager am = master.getAssignmentManager();
+  // round-robin assignment but balancer cannot find a plan
+  // assignment should fail
+  am.assign(Arrays.asList(hri));
+
+  // if bulk assignment cannot update region state to online
+  // or failed_open this waits until timeout
+  assertFalse(am.waitForAssignment(hri));
+  RegionState state = am.getRegionStates().getRegionState(hri);
+  assertEquals(RegionState.State.FAILED_OPEN, state.getState());
+  // Failed to open since no plan, so it's on no server
+  assertNull(state.getServerName());
+
+  // try again with valid plan
+  MyLoadBalancer.controledRegion = null;
+  am.assign(Arrays.asList(hri));
+  assertTrue(am.waitForAssignment(hri));
+
+  ServerName serverName = master.getAssignmentManager().
+getRegionStates().getRegionServerOfRegion(hri);
+  TEST_UTIL.assertRegionOnServer(hri, serverName, 200);
+} finally {
+  MyLoadBalancer.controledRegion = null;
+  TEST_UTIL.deleteTable(tableName);
+}
+  }
+
+  /**
+   * This tests that retain assignment fails when the balancer returns no bulk plan
+   */
+  @Test (timeout=60000)
+  public void testRetainAssignmentFailed() throws Exception {
+TableName tableName = TableName.valueOf("testRetainAssignmentFailed");
+try {
+  HTableDescriptor desc = new HTableDescriptor(tableName);
+  desc.addFamily(new HColumnDescriptor(FAMILY));
+  admin.createTable(desc);
+
+      Table meta = TEST_UTIL.getConnection().getTable(TableName.META_TABLE_NAME);
+  HRegionInfo hri = new HRegionInfo(
+desc.getTableName(), Bytes.toBytes("A"), Bytes.toBytes("Z"));
+  MetaTableAccessor.addRegionToMeta(meta, hri);
+
+  MyLoadBalancer.controledRegion = hri;
+
+  HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+  AssignmentManager am = master.getAssignmentManager();
+
+      Map<HRegionInfo, ServerName> regions = new HashMap<HRegionInfo, ServerName>();
+      ServerName dest = TEST_UTIL.getHBaseCluster().getRegionServer(0).getServerName();
+      regions.put(hri, dest);
+  // retainAssignment but balancer cannot find a plan
+  // assignment should fail
+  am.assign(regions);
+
+  // if retain assignment cannot update region state to online
+  // or failed_open this waits until timeout
+  

[08/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/64328cae/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
index 4843155..78b23c0 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
@@ -27,12 +27,16 @@ import static org.junit.Assert.fail;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -610,7 +614,7 @@ public class TestAssignmentManagerOnCluster {
 desc.getTableName(), Bytes.toBytes("A"), Bytes.toBytes("Z"));
   MetaTableAccessor.addRegionToMeta(meta, hri);
 
-  MyLoadBalancer.controledRegion = hri.getEncodedName();
+  MyLoadBalancer.controledRegion = hri;
 
   HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
   AssignmentManager am = master.getAssignmentManager();
@@ -634,6 +638,105 @@ public class TestAssignmentManagerOnCluster {
   }
 
   /**
+   * This tests that round-robin assignment fails when the balancer returns no bulk plan
+   */
+  @Test (timeout=60000)
+  public void testRoundRobinAssignmentFailed() throws Exception {
+TableName tableName = TableName.valueOf("testRoundRobinAssignmentFailed");
+try {
+  HTableDescriptor desc = new HTableDescriptor(tableName);
+  desc.addFamily(new HColumnDescriptor(FAMILY));
+  admin.createTable(desc);
+
+  Table meta = admin.getConnection().getTable(TableName.META_TABLE_NAME);
+  HRegionInfo hri = new HRegionInfo(
+desc.getTableName(), Bytes.toBytes("A"), Bytes.toBytes("Z"));
+  MetaTableAccessor.addRegionToMeta(meta, hri);
+
+  MyLoadBalancer.controledRegion = hri;
+
+  HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+  AssignmentManager am = master.getAssignmentManager();
+  // round-robin assignment but balancer cannot find a plan
+  // assignment should fail
+  am.assign(Arrays.asList(hri));
+
+  // if bulk assignment cannot update region state to online
+  // or failed_open this waits until timeout
+  assertFalse(am.waitForAssignment(hri));
+  RegionState state = am.getRegionStates().getRegionState(hri);
+  assertEquals(RegionState.State.FAILED_OPEN, state.getState());
+  // Failed to open since no plan, so it's on no server
+  assertNull(state.getServerName());
+
+  // try again with valid plan
+  MyLoadBalancer.controledRegion = null;
+  am.assign(Arrays.asList(hri));
+  assertTrue(am.waitForAssignment(hri));
+
+  ServerName serverName = master.getAssignmentManager().
+getRegionStates().getRegionServerOfRegion(hri);
+  TEST_UTIL.assertRegionOnServer(hri, serverName, 200);
+} finally {
+  MyLoadBalancer.controledRegion = null;
+  TEST_UTIL.deleteTable(tableName);
+}
+  }
+
+  /**
+   * This tests retain assignment failing due to no bulk plan
+   */
+  @Test (timeout=60000)
+  public void testRetainAssignmentFailed() throws Exception {
+TableName tableName = TableName.valueOf("testRetainAssignmentFailed");
+try {
+  HTableDescriptor desc = new HTableDescriptor(tableName);
+  desc.addFamily(new HColumnDescriptor(FAMILY));
+  admin.createTable(desc);
+
+  Table meta = TEST_UTIL.getConnection().getTable(TableName.META_TABLE_NAME);
+  HRegionInfo hri = new HRegionInfo(
+desc.getTableName(), Bytes.toBytes("A"), Bytes.toBytes("Z"));
+  MetaTableAccessor.addRegionToMeta(meta, hri);
+
+  MyLoadBalancer.controledRegion = hri;
+
+  HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+  AssignmentManager am = master.getAssignmentManager();
+
+  Map<HRegionInfo, ServerName> regions = new HashMap<HRegionInfo, ServerName>();
+  ServerName dest = TEST_UTIL.getHBaseCluster().getRegionServer(0).getServerName();
+  regions.put(hri, dest);
+  // retainAssignment but balancer cannot find a plan
+  // assignment should fail
+  am.assign(regions);
+
+  // if retain assignment cannot update region state to online
+  // or failed_open this waits until timeout
+  assertFalse(am.waitForAssignment(hri));
+  RegionState state = am.getRegionStates().getRegionState(hri);
+  
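
For readers following the test flow above: the no-plan path hinges on the test balancer refusing to produce an assignment. A minimal sketch of that gating idea, assuming a MyLoadBalancer with a static controledRegion field as used by the tests (an illustration, not the committed class):

  // Hypothetical test balancer: returns no plan for one controlled region.
  public static class MyLoadBalancer extends StochasticLoadBalancer {
    static volatile HRegionInfo controledRegion = null;

    @Override
    public Map<ServerName, List<HRegionInfo>> roundRobinAssignment(
        List<HRegionInfo> regions, List<ServerName> servers) throws HBaseIOException {
      if (controledRegion != null && regions.contains(controledRegion)) {
        return null; // no bulk plan -> assign() fails, region goes FAILED_OPEN
      }
      return super.roundRobinAssignment(regions, servers);
    }

    @Override
    public Map<ServerName, List<HRegionInfo>> retainAssignment(
        Map<HRegionInfo, ServerName> regions, List<ServerName> servers) throws HBaseIOException {
      if (controledRegion != null && regions.containsKey(controledRegion)) {
        return null; // same gate for the retain-assignment path
      }
      return super.retainAssignment(regions, servers);
    }
  }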

[10/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/64328cae/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java
--
diff --git a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java
new file mode 100644
index 0000000..eec03ce
--- /dev/null
+++ b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java
@@ -0,0 +1,795 @@
+/**
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rsgroup;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+import com.google.common.collect.Sets;
+import com.google.protobuf.ServiceException;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.Coprocessor;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MetaTableAccessor;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.TableStateManager;
+import org.apache.hadoop.hbase.client.ClusterConnection;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.constraint.ConstraintException;
+import org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint;
+import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.master.ServerListener;
+import org.apache.hadoop.hbase.master.procedure.CreateTableProcedure;
+import org.apache.hadoop.hbase.master.procedure.ProcedurePrepareLatch;
+import org.apache.hadoop.hbase.net.Address;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.protobuf.RequestConverter;
+import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
+import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos;
+import org.apache.hadoop.hbase.protobuf.generated.RSGroupProtos;
+import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos;
+import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos.MutateRowsRequest;
+import org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy;
+import org.apache.hadoop.hbase.security.access.AccessControlLists;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ModifyRegionUtils;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * This is an implementation of {@link RSGroupInfoManager}, which makes
+ * use of an HBase table as the persistence store for the group information.
+ * It also makes use of zookeeper to store group information needed
+ * for bootstrapping during offline mode.
+ */
+public class RSGroupInfoManagerImpl implements RSGroupInfoManager, ServerListener {
+  private static final Log LOG = LogFactory.getLog(RSGroupInfoManagerImpl.class);
+
+  /** Table descriptor for hbase:rsgroup catalog table */
+  private final static 
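
The class javadoc above describes the persistence model; as a hedged sketch of that shape only (the family and qualifier names "m"/"i" and the toProtoGroupInfo converter are assumptions for illustration), writing one group out would look roughly like:

  // One row per group in the hbase:rsgroup catalog table; the RSGroupInfo is
  // stored protobuf-serialized in a single cell.
  static void persistGroup(Table rsGroupTable, RSGroupInfo group) throws IOException {
    RSGroupProtos.RSGroupInfo proto = toProtoGroupInfo(group);  // assumed converter
    Put p = new Put(Bytes.toBytes(group.getName()));            // row key = group name
    p.addColumn(Bytes.toBytes("m"), Bytes.toBytes("i"), proto.toByteArray());
    rsGroupTable.put(p);
  }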

[05/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/c3200076/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupProtos.java
--
diff --git a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupProtos.java b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupProtos.java
new file mode 100644
index 0000000..5f5eb3b
--- /dev/null
+++ b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupProtos.java
@@ -0,0 +1,1332 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: RSGroup.proto
+
+package org.apache.hadoop.hbase.protobuf.generated;
+
+public final class RSGroupProtos {
+  private RSGroupProtos() {}
+  public static void registerAllExtensions(
+  com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public interface RSGroupInfoOrBuilder
+  extends com.google.protobuf.MessageOrBuilder {
+
+// required string name = 1;
+/**
+ * required string name = 1;
+ */
+boolean hasName();
+/**
+ * required string name = 1;
+ */
+java.lang.String getName();
+/**
+ * required string name = 1;
+ */
+com.google.protobuf.ByteString
+getNameBytes();
+
+// repeated .hbase.pb.ServerName servers = 4;
+/**
+ * repeated .hbase.pb.ServerName servers = 4;
+ */
+java.util.List<org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName>
+getServersList();
+/**
+ * repeated .hbase.pb.ServerName servers = 4;
+ */
+org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName getServers(int index);
+/**
+ * repeated .hbase.pb.ServerName servers = 4;
+ */
+int getServersCount();
+/**
+ * repeated .hbase.pb.ServerName servers = 4;
+ */
+java.util.List<? extends org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder>
+getServersOrBuilderList();
+/**
+ * repeated .hbase.pb.ServerName servers = 4;
+ */
+org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder getServersOrBuilder(
+int index);
+
+// repeated .hbase.pb.TableName tables = 3;
+/**
+ * repeated .hbase.pb.TableName tables = 3;
+ */
+java.util.List<org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableName>
+getTablesList();
+/**
+ * repeated .hbase.pb.TableName tables = 3;
+ */
+org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableName getTables(int index);
+/**
+ * repeated .hbase.pb.TableName tables = 3;
+ */
+int getTablesCount();
+/**
+ * repeated .hbase.pb.TableName tables = 3;
+ */
+java.util.List<? extends org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableNameOrBuilder>
+getTablesOrBuilderList();
+/**
+ * repeated .hbase.pb.TableName tables = 3;
+ */
+org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableNameOrBuilder getTablesOrBuilder(
+int index);
+  }
+  /**
+   * Protobuf type {@code hbase.pb.RSGroupInfo}
+   */
+  public static final class RSGroupInfo extends
+  com.google.protobuf.GeneratedMessage
+  implements RSGroupInfoOrBuilder {
+// Use RSGroupInfo.newBuilder() to construct.
+private RSGroupInfo(com.google.protobuf.GeneratedMessage.Builder<?> builder) {
+  super(builder);
+  this.unknownFields = builder.getUnknownFields();
+}
+private RSGroupInfo(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+
+private static final RSGroupInfo defaultInstance;
+public static RSGroupInfo getDefaultInstance() {
+  return defaultInstance;
+}
+
+public RSGroupInfo getDefaultInstanceForType() {
+  return defaultInstance;
+}
+
+private final com.google.protobuf.UnknownFieldSet unknownFields;
+@java.lang.Override
+public final com.google.protobuf.UnknownFieldSet
+getUnknownFields() {
+  return this.unknownFields;
+}
+private RSGroupInfo(
+com.google.protobuf.CodedInputStream input,
+com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+throws com.google.protobuf.InvalidProtocolBufferException {
+  initFields();
+  int mutable_bitField0_ = 0;
+  com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+  com.google.protobuf.UnknownFieldSet.newBuilder();
+  try {
+boolean done = false;
+while (!done) {
+  int tag = input.readTag();
+  switch (tag) {
+case 0:
+  done = true;
+  break;
+default: {
+  if (!parseUnknownField(input, unknownFields,
+ extensionRegistry, tag)) {
+done = true;
+  }
+  break;
+}
+case 10: {
+  bitField0_ |= 0x00000001;
+  name_ = input.readBytes();
+  break;
+}
+case 26: {
+  if (!((mutable_bitField0_ & 0x00000004) == 0x00000004)) {
+    tables_ = new java.util.ArrayList<org.apache.hadoop.hbase.protobuf.generated.TableProtos.TableName>();
+    mutable_bitField0_ |= 0x00000004;
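
A hedged usage sketch of the generated message above (standard protobuf 2 builder pattern; the setter names follow from the fields name, servers and tables shown in the interface, but treat the concrete values as illustrative):

  RSGroupProtos.RSGroupInfo proto = RSGroupProtos.RSGroupInfo.newBuilder()
      .setName("my_group")
      .addServers(HBaseProtos.ServerName.newBuilder()
          .setHostName("rs1.example.com").setPort(16020).build())
      .addTables(TableProtos.TableName.newBuilder()
          .setNamespace(com.google.protobuf.ByteString.copyFromUtf8("default"))
          .setQualifier(com.google.protobuf.ByteString.copyFromUtf8("t1")).build())
      .build();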

[06/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/c3200076/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupAdminProtos.java
--
diff --git a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupAdminProtos.java b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupAdminProtos.java
new file mode 100644
index 0000000..3d2285c
--- /dev/null
+++ b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/RSGroupAdminProtos.java
@@ -0,0 +1,13571 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: RSGroupAdmin.proto
+
+package org.apache.hadoop.hbase.protobuf.generated;
+
+public final class RSGroupAdminProtos {
+  private RSGroupAdminProtos() {}
+  public static void registerAllExtensions(
+  com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public interface ListTablesOfRSGroupRequestOrBuilder
+  extends com.google.protobuf.MessageOrBuilder {
+
+// required string r_s_group_name = 1;
+/**
+ * required string r_s_group_name = 1;
+ */
+boolean hasRSGroupName();
+/**
+ * required string r_s_group_name = 1;
+ */
+java.lang.String getRSGroupName();
+/**
+ * required string r_s_group_name = 1;
+ */
+com.google.protobuf.ByteString
+getRSGroupNameBytes();
+  }
+  /**
+   * Protobuf type {@code hbase.pb.ListTablesOfRSGroupRequest}
+   */
+  public static final class ListTablesOfRSGroupRequest extends
+  com.google.protobuf.GeneratedMessage
+  implements ListTablesOfRSGroupRequestOrBuilder {
+// Use ListTablesOfRSGroupRequest.newBuilder() to construct.
+private ListTablesOfRSGroupRequest(com.google.protobuf.GeneratedMessage.Builder<?> builder) {
+  super(builder);
+  this.unknownFields = builder.getUnknownFields();
+}
+private ListTablesOfRSGroupRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+
+private static final ListTablesOfRSGroupRequest defaultInstance;
+public static ListTablesOfRSGroupRequest getDefaultInstance() {
+  return defaultInstance;
+}
+
+public ListTablesOfRSGroupRequest getDefaultInstanceForType() {
+  return defaultInstance;
+}
+
+private final com.google.protobuf.UnknownFieldSet unknownFields;
+@java.lang.Override
+public final com.google.protobuf.UnknownFieldSet
+getUnknownFields() {
+  return this.unknownFields;
+}
+private ListTablesOfRSGroupRequest(
+com.google.protobuf.CodedInputStream input,
+com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+throws com.google.protobuf.InvalidProtocolBufferException {
+  initFields();
+  int mutable_bitField0_ = 0;
+  com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+  com.google.protobuf.UnknownFieldSet.newBuilder();
+  try {
+boolean done = false;
+while (!done) {
+  int tag = input.readTag();
+  switch (tag) {
+case 0:
+  done = true;
+  break;
+default: {
+  if (!parseUnknownField(input, unknownFields,
+ extensionRegistry, tag)) {
+done = true;
+  }
+  break;
+}
+case 10: {
+  bitField0_ |= 0x00000001;
+  rSGroupName_ = input.readBytes();
+  break;
+}
+  }
+}
+  } catch (com.google.protobuf.InvalidProtocolBufferException e) {
+throw e.setUnfinishedMessage(this);
+  } catch (java.io.IOException e) {
+throw new com.google.protobuf.InvalidProtocolBufferException(
+e.getMessage()).setUnfinishedMessage(this);
+  } finally {
+this.unknownFields = unknownFields.build();
+makeExtensionsImmutable();
+  }
+}
+public static final com.google.protobuf.Descriptors.Descriptor
+getDescriptor() {
+  return org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.internal_static_hbase_pb_ListTablesOfRSGroupRequest_descriptor;
+}
+
+protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+internalGetFieldAccessorTable() {
+  return org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.internal_static_hbase_pb_ListTablesOfRSGroupRequest_fieldAccessorTable
+  .ensureFieldAccessorsInitialized(
+  org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListTablesOfRSGroupRequest.class,
+  org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListTablesOfRSGroupRequest.Builder.class);
+}
+
+public static com.google.protobuf.Parser<ListTablesOfRSGroupRequest> PARSER =
+new com.google.protobuf.AbstractParser<ListTablesOfRSGroupRequest>() {
+  public ListTablesOfRSGroupRequest parsePartialFrom(
+  
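
Similarly, a hedged sketch of building the request message above; the RSGroupName accessors follow the usual generated naming for the r_s_group_name field:

  RSGroupAdminProtos.ListTablesOfRSGroupRequest req =
      RSGroupAdminProtos.ListTablesOfRSGroupRequest.newBuilder()
          .setRSGroupName("my_group")
          .build();
  assert req.hasRSGroupName() && "my_group".equals(req.getRSGroupName());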

[09/14] hbase git commit: HBASE-15631 Backport Regionserver Groups (HBASE-6721) to branch-1 (Francis Liu and Andrew Purtell)

2017-10-23 Thread apurtell
http://git-wip-us.apache.org/repos/asf/hbase/blob/64328cae/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java
--
diff --git a/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java b/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java
new file mode 100644
index 0000000..0db0fea
--- /dev/null
+++ b/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java
@@ -0,0 +1,815 @@
+/**
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rsgroup;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HBaseCluster;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.RegionLoad;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.Waiter;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.constraint.ConstraintException;
+import org.apache.hadoop.hbase.net.Address;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.protobuf.generated.AdminProtos;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Assert;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.security.SecureRandom;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public abstract class TestRSGroupsBase {
+  protected static final Log LOG = LogFactory.getLog(TestRSGroupsBase.class);
+
+  //shared
+  protected final static String groupPrefix = "Group";
+  protected final static String tablePrefix = "Group";
+  protected final static SecureRandom rand = new SecureRandom();
+
+  //shared, cluster type specific
+  protected static HBaseTestingUtility TEST_UTIL;
+  protected static HBaseAdmin admin;
+  protected static HBaseCluster cluster;
+  protected static RSGroupAdmin rsGroupAdmin;
+
+  public final static long WAIT_TIMEOUT = 60000*5;
+  public final static int NUM_SLAVES_BASE = 4; //number of slaves for the smallest cluster
+
+
+
+  protected RSGroupInfo addGroup(RSGroupAdmin gAdmin, String groupName,
+ int serverCount) throws IOException, InterruptedException {
+RSGroupInfo defaultInfo = gAdmin
+.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
+assertTrue(defaultInfo != null);
+assertTrue(defaultInfo.getServers().size() >= serverCount);
+gAdmin.addRSGroup(groupName);
+
+Set<Address> set = new HashSet<Address>();
+for(Address server: defaultInfo.getServers()) {
+  if(set.size() == serverCount) {
+break;
+  }
+  set.add(server);
+}
+gAdmin.moveServers(set, groupName);
+RSGroupInfo result = gAdmin.getRSGroupInfo(groupName);
+assertTrue(result.getServers().size() >= serverCount);
+return result;
+  }
+
+  static void removeGroup(RSGroupAdminClient groupAdmin, String groupName) throws IOException {
+RSGroupInfo info = groupAdmin.getRSGroupInfo(groupName);
+groupAdmin.moveTables(info.getTables(), RSGroupInfo.DEFAULT_GROUP);
+groupAdmin.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP);
+groupAdmin.removeRSGroup(groupName);
+  }
+
+  protected void deleteTableIfNecessary() throws IOException {
+for (HTableDescriptor desc : TEST_UTIL.getHBaseAdmin().listTables(tablePrefix+".*")) {
+  
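
A hedged usage sketch of the addGroup helper above (group name and count are illustrative): a test carves a two-server group out of the default group and gets back the populated RSGroupInfo:

  RSGroupInfo appInfo = addGroup(rsGroupAdmin, groupPrefix + "_testGroup", 2);
  assertEquals(2, appInfo.getServers().size());  // servers moved out of "default"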

[2/2] hbase git commit: HBASE-16338 Remove Jackson1 deps

2017-10-23 Thread mdrob
HBASE-16338 Remove Jackson1 deps

* Change imports from org.codehaus to com.fasterxml
* Exclude transitive jackson1 from hadoop and others
* Minor test cleanup to add assert messages, fix some parameter order
* Add anti-pattern check for using jackson 1 imports
* Add explicit non-null serialization directive to ScannerModel
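
To illustrate the first bullet, the typical mechanical change in this patch looks like the following sketch (representative, not a specific hunk from the diff; someModel is a placeholder):

  // Before: Jackson 1 (org.codehaus), now removed/excluded
  // import org.codehaus.jackson.map.ObjectMapper;

  // After: Jackson 2 (com.fasterxml)
  import com.fasterxml.jackson.databind.ObjectMapper;

  ObjectMapper mapper = new ObjectMapper();
  String json = mapper.writeValueAsString(someModel);  // API is largely source-compatible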


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/34df2e66
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/34df2e66
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/34df2e66

Branch: refs/heads/branch-2
Commit: 34df2e665e3c9e11ed590a32bc55cf2de1e25818
Parents: df71eef
Author: Mike Drob 
Authored: Mon Oct 2 16:31:48 2017 -0500
Committer: Mike Drob 
Committed: Mon Oct 23 15:24:51 2017 -0500

--
 dev-support/hbase-personality.sh|   6 ++
 hbase-client/pom.xml|   4 +
 .../apache/hadoop/hbase/util/JsonMapper.java|   2 +-
 .../hadoop/hbase/client/TestOperation.java  |   2 +-
 hbase-it/pom.xml|   4 +
 .../hadoop/hbase/RESTApiClusterManager.java |  18 ++--
 hbase-mapreduce/pom.xml |  12 +--
 .../hadoop/hbase/PerformanceEvaluation.java |  10 +-
 .../hadoop/hbase/TestPerformanceEvaluation.java |   6 +-
 hbase-rest/pom.xml  |  21 ++--
 .../hbase/rest/ProtobufStreamingOutput.java | 105 ++
 .../hbase/rest/ProtobufStreamingUtil.java   | 106 ---
 .../apache/hadoop/hbase/rest/RESTServer.java|   4 +-
 .../hadoop/hbase/rest/TableScanResource.java|  26 ++---
 .../hadoop/hbase/rest/model/CellModel.java  |   2 +-
 .../hbase/rest/model/ColumnSchemaModel.java |   5 +-
 .../hbase/rest/model/NamespacesModel.java   |   3 +-
 .../hadoop/hbase/rest/model/RowModel.java   |   2 +-
 .../hadoop/hbase/rest/model/ScannerModel.java   |   6 +-
 .../rest/model/StorageClusterStatusModel.java   |   6 ++
 .../rest/model/StorageClusterVersionModel.java  |   3 -
 .../hbase/rest/model/TableSchemaModel.java  |   7 +-
 .../hbase/rest/HBaseRESTTestingUtility.java |   5 +-
 .../hadoop/hbase/rest/RowResourceBase.java  |   4 +-
 .../apache/hadoop/hbase/rest/TestDeleteRow.java |   2 +-
 .../hadoop/hbase/rest/TestMultiRowResource.java |   9 +-
 .../rest/TestNamespacesInstanceResource.java|   9 +-
 .../hadoop/hbase/rest/TestSchemaResource.java   |  52 ++---
 .../apache/hadoop/hbase/rest/TestTableScan.java |  60 +++
 .../hadoop/hbase/rest/TestVersionResource.java  |  21 ++--
 .../hbase/rest/model/TestColumnSchemaModel.java |  16 +--
 .../hadoop/hbase/rest/model/TestModelBase.java  |   6 +-
 .../hbase/rest/model/TestTableSchemaModel.java  |   3 +
 hbase-server/pom.xml|  16 +--
 .../hadoop/hbase/io/hfile/AgeSnapshot.java  |   2 +-
 .../hadoop/hbase/io/hfile/BlockCacheUtil.java   |  17 ++-
 .../hadoop/hbase/io/hfile/LruBlockCache.java|   5 +-
 .../hbase/io/hfile/bucket/BucketAllocator.java  |   2 +-
 .../org/apache/hadoop/hbase/ipc/RpcServer.java  |   2 +-
 .../hbase/monitoring/MonitoredTaskImpl.java |   2 +-
 .../org/apache/hadoop/hbase/util/JSONBean.java  |   6 +-
 .../hadoop/hbase/util/JSONMetricUtil.java   |  10 +-
 .../hadoop/hbase/wal/WALPrettyPrinter.java  |   2 +-
 .../hbase-webapps/master/processMaster.jsp  |   2 +-
 .../hbase-webapps/master/processRS.jsp  |   2 +-
 .../hbase-webapps/regionserver/processRS.jsp|   2 +-
 .../hbase/io/hfile/TestBlockCacheReporting.java |   4 +-
 .../hadoop/hbase/util/TestJSONMetricUtil.java   |  33 +++---
 hbase-shaded/hbase-shaded-mapreduce/pom.xml |   4 -
 hbase-shaded/pom.xml|   4 +
 hbase-shell/src/main/ruby/hbase/taskmonitor.rb  |   2 +-
 hbase-spark/pom.xml |  20 
 pom.xml |  97 -
 53 files changed, 417 insertions(+), 364 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/34df2e66/dev-support/hbase-personality.sh
--
diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh
index 24f2ef5..dcf4f7a 100755
--- a/dev-support/hbase-personality.sh
+++ b/dev-support/hbase-personality.sh
@@ -443,6 +443,12 @@ function hbaseanti_patchfile
 ((result=result+1))
   fi
 
+  warnings=$(${GREP} 'import org.codehaus.jackson' "${patchfile}")
+  if [[ ${warnings} -gt 0 ]]; then
+    add_vote_table -1 hbaseanti "" "The patch appears to use Jackson 1 classes/annotations: ${warnings}."
+    ((result=result+1))
+  fi
+
   if [[ ${result} -gt 0 ]]; then
 return 1
   fi


[1/2] hbase git commit: HBASE-16338 Remove Jackson1 deps

2017-10-23 Thread mdrob
Repository: hbase
Updated Branches:
  refs/heads/branch-2 df71eeff1 -> 34df2e665


http://git-wip-us.apache.org/repos/asf/hbase/blob/34df2e66/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java
index ee5a364..dab8673 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java
@@ -23,6 +23,8 @@ import java.io.IOException;
 import java.util.Map;
 import java.util.NavigableSet;
 
+import com.fasterxml.jackson.core.JsonGenerationException;
+import com.fasterxml.jackson.databind.JsonMappingException;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -32,8 +34,6 @@ import org.apache.hadoop.hbase.testclassification.IOTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.apache.hadoop.hbase.io.hfile.TestCacheConfig.DataCacheEntry;
 import org.apache.hadoop.hbase.io.hfile.TestCacheConfig.IndexCacheEntry;
-import org.codehaus.jackson.JsonGenerationException;
-import org.codehaus.jackson.map.JsonMappingException;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;

http://git-wip-us.apache.org/repos/asf/hbase/blob/34df2e66/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java
index 30da26a..1135039 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestJSONMetricUtil.java
@@ -22,6 +22,7 @@ import java.lang.management.GarbageCollectorMXBean;
 import java.lang.management.ManagementFactory;
 import java.util.Hashtable;
 import java.util.List;
+import java.util.Map;
 
 import javax.management.MalformedObjectNameException;
 import javax.management.ObjectName;
@@ -29,13 +30,14 @@ import javax.management.openmbean.CompositeData;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonNode;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.testclassification.MiscTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
-import org.codehaus.jackson.JsonNode;
-import org.codehaus.jackson.JsonProcessingException;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -51,17 +53,14 @@ public class TestJSONMetricUtil {
 String[] values = {"MemoryPool", "Par Eden Space"};
 String[] values2 = {"MemoryPool", "Par Eden Space", "Test"};
 String[] emptyValue = {};
-Hashtable<String, String> properties = JSONMetricUtil.buldKeyValueTable(keys, values);
-Hashtable<String, String> nullObject = JSONMetricUtil.buldKeyValueTable(keys, values2);
-Hashtable<String, String> nullObject1 = JSONMetricUtil.buldKeyValueTable(keys, emptyValue);
-Hashtable<String, String> nullObject2 = JSONMetricUtil.buldKeyValueTable(emptyKey, values2);
-Hashtable<String, String> nullObject3 = JSONMetricUtil.buldKeyValueTable(emptyKey, emptyValue);
-assertEquals(properties.get("type"), values[0]);
-assertEquals(properties.get("name"), values[1]);
-assertEquals(nullObject, null);
-assertEquals(nullObject1, null);
-assertEquals(nullObject2, null);
-assertEquals(nullObject3, null);
+Map<String, String> properties = JSONMetricUtil.buldKeyValueTable(keys, values);
+assertEquals(values[0], properties.get("type"));
+assertEquals(values[1], properties.get("name"));
+
+assertNull(JSONMetricUtil.buldKeyValueTable(keys, values2));
+assertNull(JSONMetricUtil.buldKeyValueTable(keys, emptyValue));
+assertNull(JSONMetricUtil.buldKeyValueTable(emptyKey, values2));
+assertNull(JSONMetricUtil.buldKeyValueTable(emptyKey, emptyValue));
   }
 
   @Test
@@ -73,10 +72,10 @@ public class TestJSONMetricUtil {
 JsonNode r2 = JSONMetricUtil.searchJson(node, "data2");
 JsonNode r3 = JSONMetricUtil.searchJson(node, "data3");
 JsonNode r4 = JSONMetricUtil.searchJson(node, "data4");
-assertEquals(r1.getIntValue(), 100);
-assertEquals(r2.getTextValue(), "hello");
-assertEquals(r3.get(0).getIntValue(), 1);
-assertEquals(r4.getIntValue(), 0);
+
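
For clarity, a hedged sketch of the contract the assertions above exercise (an inferred shape, not the actual JSONMetricUtil implementation): buldKeyValueTable pairs keys with values and returns null on empty or mismatched input:

  static Map<String, String> buldKeyValueTable(String[] keys, String[] values) {
    if (keys.length == 0 || values.length == 0 || keys.length != values.length) {
      return null;  // mismatched or empty input, as the assertNull cases expect
    }
    Map<String, String> m = new HashMap<String, String>();
    for (int i = 0; i < keys.length; i++) {
      m.put(keys[i], values[i]);
    }
    return m;
  }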

hbase git commit: Install rsync

2017-10-23 Thread mdrob
Repository: hbase
Updated Branches:
  refs/heads/HBASE-19054 93b402d3d -> 10a4a3788


Install rsync


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/10a4a378
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/10a4a378
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/10a4a378

Branch: refs/heads/HBASE-19054
Commit: 10a4a37883b70f5fbf93694ee27426887de777a0
Parents: 93b402d
Author: Mike Drob 
Authored: Mon Oct 23 10:44:35 2017 -0500
Committer: Mike Drob 
Committed: Mon Oct 23 10:44:35 2017 -0500

--
 dev-support/docker/Dockerfile | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/10a4a378/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 717f911..49ad14d 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -25,6 +25,7 @@ RUN apt-get -q update && apt-get -q install --no-install-recommends -y \
libperl-critic-perl \
pylint \
python-dateutil \
+   rsync \
ruby \
shellcheck \
wget \



hbase-site git commit: INFRA-10751 Empty commit

2017-10-23 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site 41a7fcc53 -> 647929ed0


INFRA-10751 Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/647929ed
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/647929ed
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/647929ed

Branch: refs/heads/asf-site
Commit: 647929ed0be6fa207c2c9519855a82c4447d7bf8
Parents: 41a7fcc
Author: jenkins 
Authored: Mon Oct 23 15:16:24 2017 +
Committer: jenkins 
Committed: Mon Oct 23 15:16:24 2017 +

--

--




[15/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.PrepareFlushResult.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.PrepareFlushResult.html b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.PrepareFlushResult.html
index 12fe16f..b1e0997 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.PrepareFlushResult.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.PrepareFlushResult.html
@@ -1960,6279 +1960,6285 @@

[Generated javadoc source-HTML diff; the archive stripped its markup, so only the recoverable substance is kept. The re-rendered HRegion source drops the standalone triggerMajorCompaction() override and inlines it into compact(boolean majorCompaction), which now reads, in essence:

  public void compact(boolean majorCompaction) throws IOException {
    if (majorCompaction) {
      stores.values().forEach(HStore::triggerMajorCompaction);
    }
    for (HStore s : stores.values()) { ... }
  }

The javadoc for compact(boolean), compactStores() and compactStore(byte[], ThroughputController) is re-rendered without substantive change.]

[28/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/src-html/org/apache/hadoop/hbase/coprocessor/RegionObserver.MutationType.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/coprocessor/RegionObserver.MutationType.html b/devapidocs/src-html/org/apache/hadoop/hbase/coprocessor/RegionObserver.MutationType.html
index 4e07a5f..1be8978 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/coprocessor/RegionObserver.MutationType.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/coprocessor/RegionObserver.MutationType.html
@@ -30,1014 +30,1008 @@

[Generated javadoc source-HTML diff; markup stripped by the archive, summary kept. The visible hunk re-renders RegionObserver's import block (the old list included java.util.NavigableSet, several regionserver internals such as KeyValueScanner, RegionScanner, ScanType and StoreFileReader, and the shaded Guava ImmutableList) together with the class-level javadoc describing the default no-op hook methods and the coprocessor exception-handling contract.]

[36/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/regionserver/Region.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/Region.html b/devapidocs/org/apache/hadoop/hbase/regionserver/Region.html
index 581c4b7..fb1404f 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/Region.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/Region.html

[Generated javadoc HTML diff; markup stripped by the archive, summary kept. The re-rendered Region interface page lists 51 abstract methods instead of 52: triggerMajorCompaction() is removed, and both requestCompaction overloads change signature, from (byte[] family, String why, int priority, CompactionLifeCycleTracker tracker, User user) and (String why, int priority, CompactionLifeCycleTracker tracker, User user) to (byte[] family, String why, int priority, boolean major, CompactionLifeCycleTracker tracker) and (String why, int priority, boolean major, CompactionLifeCycleTracker tracker). The accessor documentation (getRegionInfo, getTableDescriptor, isAvailable, isClosed, isClosing, isRecovering, isReadOnly, isSplittable, isMergeable, getStores, getStore) is re-rendered without substantive change.]
[26/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/CompactSplit.AggregatingCompactionLifeCycleTracker.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/CompactSplit.AggregatingCompactionLifeCycleTracker.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/CompactSplit.AggregatingCompactionLifeCycleTracker.html
new file mode 100644
index 000..7df078f
--- /dev/null
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/CompactSplit.AggregatingCompactionLifeCycleTracker.html
@@ -0,0 +1,843 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+Source code
+
+
+
+
+001/**
+002 *
+003 * Licensed to the Apache Software 
Foundation (ASF) under one
+004 * or more contributor license 
agreements.  See the NOTICE file
+005 * distributed with this work for 
additional information
+006 * regarding copyright ownership.  The 
ASF licenses this file
+007 * to you under the Apache License, 
Version 2.0 (the
+008 * "License"); you may not use this file 
except in compliance
+009 * with the License.  You may obtain a 
copy of the License at
+010 *
+011 * 
http://www.apache.org/licenses/LICENSE-2.0
+012 *
+013 * Unless required by applicable law or 
agreed to in writing, software
+014 * distributed under the License is 
distributed on an "AS IS" BASIS,
+015 * WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express or implied.
+016 * See the License for the specific 
language governing permissions and
+017 * limitations under the License.
+018 */
+019package 
org.apache.hadoop.hbase.regionserver;
+020
+021import static 
org.apache.hadoop.hbase.regionserver.Store.NO_PRIORITY;
+022import static 
org.apache.hadoop.hbase.regionserver.Store.PRIORITY_USER;
+023
+024import java.io.IOException;
+025import java.io.PrintWriter;
+026import java.io.StringWriter;
+027import java.util.Comparator;
+028import java.util.Iterator;
+029import java.util.Optional;
+030import 
java.util.concurrent.BlockingQueue;
+031import java.util.concurrent.Executors;
+032import 
java.util.concurrent.RejectedExecutionException;
+033import 
java.util.concurrent.RejectedExecutionHandler;
+034import 
java.util.concurrent.ThreadFactory;
+035import 
java.util.concurrent.ThreadPoolExecutor;
+036import java.util.concurrent.TimeUnit;
+037import 
java.util.concurrent.atomic.AtomicInteger;
+038import java.util.function.IntSupplier;
+039
+040import org.apache.commons.logging.Log;
+041import 
org.apache.commons.logging.LogFactory;
+042import 
org.apache.hadoop.conf.Configuration;
+043import 
org.apache.hadoop.hbase.conf.ConfigurationManager;
+044import 
org.apache.hadoop.hbase.conf.PropagatingConfigurationObserver;
+045import 
org.apache.hadoop.hbase.quotas.RegionServerSpaceQuotaManager;
+046import 
org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
+047import 
org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
+048import 
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequestImpl;
+049import 
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequester;
+050import 
org.apache.hadoop.hbase.regionserver.throttle.CompactionThroughputControllerFactory;
+051import 
org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;
+052import 
org.apache.hadoop.hbase.security.User;
+053import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+054import 
org.apache.hadoop.hbase.util.StealJobQueue;
+055import 
org.apache.hadoop.ipc.RemoteException;
+056import 
org.apache.hadoop.util.StringUtils;
+057import 
org.apache.yetus.audience.InterfaceAudience;
+058
+059import 
org.apache.hadoop.hbase.shaded.com.google.common.annotations.VisibleForTesting;
+060import 
org.apache.hadoop.hbase.shaded.com.google.common.base.Preconditions;
+061
+062/**
+063 * Compact region on request and then run 
split if appropriate
+064 */
+065@InterfaceAudience.Private
+066public class CompactSplit implements 
CompactionRequester, PropagatingConfigurationObserver {
+067  private static final Log LOG = 
LogFactory.getLog(CompactSplit.class);
+068
+069  // Configuration key for the large 
compaction threads.
+070  public final static String 
LARGE_COMPACTION_THREADS =
+071  
"hbase.regionserver.thread.compaction.large";
+072  public final static int 
LARGE_COMPACTION_THREADS_DEFAULT = 1;
+073
+074  // Configuration key for the small 
compaction threads.
+075  public final static String 
SMALL_COMPACTION_THREADS =
+076  
"hbase.regionserver.thread.compaction.small";
+077  public final static int 
SMALL_COMPACTION_THREADS_DEFAULT = 1;
+078
+079  // Configuration key for split 
threads
+080  public final static String 
SPLIT_THREADS = "hbase.regionserver.thread.split";
+081  public final static int 
SPLIT_THREADS_DEFAULT = 1;
+082
+083  public static final String 
REGION_SERVER_REGION_SPLIT_LIMIT =
+084  

[46/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html b/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html
index 7e9657a..def8500 100644
--- a/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html
+++ b/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html

[Generated class-use page diffs; markup stripped by the archive, summary kept. The ObserverContext, RegionCoprocessorEnvironment and ExampleRegionObserverWithMetrics.ExampleRegionObserver pages are re-rendered because RegionObserver.postCompactSelection now takes java.util.List<? extends StoreFile> for the selected files instead of the shaded Guava ImmutableList, alongside the usual anchor churn in the inherited-method lists.]

[43/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplit.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplit.html 
b/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplit.html
index 2c487db..1ea2f27 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplit.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplit.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -50,7 +50,7 @@ var activeTableTab = "activeTableTab";
 
 
 PrevClass
-NextClass
+NextClass
 
 
 Frames
@@ -109,14 +109,14 @@ var activeTableTab = "activeTableTab";
 
 
 All Implemented Interfaces:
-ConfigurationObserver, PropagatingConfigurationObserver
+ConfigurationObserver, PropagatingConfigurationObserver, CompactionRequester
 
 
 
 @InterfaceAudience.Private
-public class CompactSplit
+public class CompactSplit
 extends Object
-implements PropagatingConfigurationObserver
+implements CompactionRequester, PropagatingConfigurationObserver
 Compact region on request and then run split if appropriate.
 
@@ -137,10 +137,14 @@ Class and Description
 
 
+private static class
+CompactSplit.AggregatingCompactionLifeCycleTracker
+
+
 private class
 CompactSplit.CompactionRunner
 
 private static class
 CompactSplit.Rejection
 Cleanup class to use when rejecting a compaction request from the queue.
@@ -350,7 +354,9 @@
 String why,
 int priority,
 CompactionLifeCycleTracker tracker,
- User user)
+ User user)
+Request compaction on the given store.
+
 
 void
@@ -358,7 +364,9 @@
 String why,
 int priority,
 CompactionLifeCycleTracker tracker,
- User user)
+ User user)
+Request compaction on all the stores of the given region.
+
 
 private void
@@ -430,6 +438,11 @@ implements waitFor(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html?is-external=true;
 title="class or interface in 
java.util.concurrent">ThreadPoolExecutort,
http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 
+
+private CompactionLifeCycleTracker
+wrap(CompactionLifeCycleTrackertracker,
+http://docs.oracle.com/javase/8/docs/api/java/util/function/IntSupplier.html?is-external=true;
 title="class or interface in 
java.util.function">IntSuppliernumberOfStores)
+
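For context, a minimal caller-side sketch of the new CompactionRequester entry points. This is an assumption-laden illustration, not code from the patch: the requestCompaction signatures are reconstructed from the method rows above, and the CompactionRequester import path, the region/store handles, and the reason strings are illustrative only.

import java.io.IOException;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequester; // assumed location

public class CompactionRequestSketch {
  // Ask for a compaction of one store, then of the whole region. The region-wide
  // overload is where wrap(tracker, numberOfStores) would aggregate per-store
  // callbacks into a single completion notification.
  void request(CompactionRequester requester, HRegion region, HStore store) throws IOException {
    requester.requestCompaction(region, store, "sketch: single store", Store.PRIORITY_USER,
        CompactionLifeCycleTracker.DUMMY, null);
    requester.requestCompaction(region, "sketch: all stores", Store.PRIORITY_USER,
        CompactionLifeCycleTracker.DUMMY, null);
  }
}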
 
 
 
@@ -458,7 +471,7 @@
 
 LOG
-private static final org.apache.commons.logging.Log LOG
+private static final org.apache.commons.logging.Log LOG
 
 
@@ -467,7 +480,7 @@
 
 LARGE_COMPACTION_THREADS
-public static final String LARGE_COMPACTION_THREADS
+public static final String LARGE_COMPACTION_THREADS
 
 See Also:
 Constant Field Values
 
@@ -480,7 +493,7 @@
 
 LARGE_COMPACTION_THREADS_DEFAULT
-public static final int LARGE_COMPACTION_THREADS_DEFAULT
+public static final int LARGE_COMPACTION_THREADS_DEFAULT
 
 See Also:
 Constant Field Values
 
@@ -493,7 +506,7 @@
 
 SMALL_COMPACTION_THREADS
-public static final String SMALL_COMPACTION_THREADS
+public static final String SMALL_COMPACTION_THREADS
 
 See Also:
 Constant Field Values
 
@@ 
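Since the constants above are just configuration keys, a short sketch of tuning the two CompactSplit pools they size. The key strings are the standard HBase names these constants point at; the values are illustrative, and both pools historically default to small sizes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionPoolConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Large-compaction pool: handles compactions over the configured size threshold.
    conf.setInt("hbase.regionserver.thread.compaction.large", 2);
    // Small-compaction pool: everything else.
    conf.setInt("hbase.regionserver.thread.compaction.small", 4);
  }
}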

[32/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/HStoreFile.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/HStoreFile.html b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/HStoreFile.html
index ff635ad..6ca3233 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/HStoreFile.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/HStoreFile.html
@@ -1199,8 +1199,8 @@
 
 
 void
-RegionCoprocessorHost.postCompactSelection(HStore store,
-org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList<HStoreFile> selected,
+RegionCoprocessorHost.postCompactSelection(HStore store,
+java.util.List<HStoreFile> selected,
 CompactionLifeCycleTracker tracker,
 CompactionRequest request,
 User user)
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Store.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Store.html b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Store.html
index 8412851..938739b 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Store.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Store.html
@@ -132,9 +132,9 @@
 
 
 default void
-RegionObserver.postCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
+RegionObserver.postCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
 Store store,
-org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList<? extends StoreFile> selected,
+java.util.List<? extends StoreFile> selected,
 CompactionLifeCycleTracker tracker,
 CompactionRequest request)
 Called after the StoreFiles to compact have been selected from the available
@@ -266,6 +266,28 @@
 
 
+
+Methods in org.apache.hadoop.hbase.regionserver with parameters of type Store
+
+Modifier and Type
+Method and Description
+
+
+
+void
+CompactSplit.AggregatingCompactionLifeCycleTracker.afterExecution(Store store)
+
+
+void
+CompactSplit.AggregatingCompactionLifeCycleTracker.beforeExecution(Store store)
+
+
+void
+CompactSplit.AggregatingCompactionLifeCycleTracker.notExecuted(Store store, String reason)
+
+
+
 
 
 
@@ -280,16 +302,23 @@
 
 
 default void
-CompactionLifeCycleTracker.afterExecute(Store store)
+CompactionLifeCycleTracker.afterExecution(Store store)
 Called after compaction is executed by CompactSplitThread.
 
 
 
 default void
-CompactionLifeCycleTracker.beforeExecute(Store store)
+CompactionLifeCycleTracker.beforeExecution(Store store)
 Called before compaction is executed by CompactSplitThread.
 
 
+
+default void
+CompactionLifeCycleTracker.notExecuted(Store store, String reason)
+Called if the compaction request failed for some reason.
+
+
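A minimal sketch of a tracker written against the renamed hooks (beforeExecution/afterExecution) plus the new notExecuted callback. The method signatures come from the rows above; the import paths and the logging body are assumptions for illustration.

import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;

public class LoggingCompactionTracker implements CompactionLifeCycleTracker {
  @Override
  public void beforeExecution(Store store) {
    // Fired by CompactSplit just before a store's compaction runs.
    System.out.println("compaction starting on " + store);
  }

  @Override
  public void afterExecution(Store store) {
    // Fired once the store's compaction has finished.
    System.out.println("compaction finished on " + store);
  }

  @Override
  public void notExecuted(Store store, String reason) {
    // Fired when the request is rejected or skipped, with the reason string.
    System.out.println("compaction not run on " + store + ": " + reason);
  }
}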
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html
index 5b16be4..16a8c56 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html
@@ -141,9 +141,9 @@
 
 
 default void
-RegionObserver.postCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
+RegionObserver.postCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
 Store store,
-org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList<? extends StoreFile> selected,
+java.util.List<? extends StoreFile> selected,
 CompactionLifeCycleTracker tracker,
 CompactionRequest request)
 Called after the StoreFiles to compact have been selected from the available
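An observer-side sketch of the relaxed signature: selected is now a plain java.util.List rather than a shaded-Guava ImmutableList, so third-party coprocessors no longer have to compile against shaded internals. The signature is taken from the diff above; the class name and body are illustrative.

import java.util.List;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;

public class SelectionLogger implements RegionObserver {
  @Override
  public void postCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, List<? extends StoreFile> selected,
      CompactionLifeCycleTracker tracker, CompactionRequest request) {
    // Read-only peek at the selection; mutating the list is not the contract here.
    System.out.println(selected.size() + " file(s) selected for compaction in " + store);
  }
}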


[47/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html b/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html
index ceee279..506f3f2 100644
--- a/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html
+++ b/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html
@@ -107,7 +107,7 @@ var activeTableTab = "activeTableTab";
 
 @InterfaceAudience.LimitedPrivate(value="Coprocesssor")
  @InterfaceStability.Evolving
-public interface RegionObserver
+public interface RegionObserver
 Coprocessors implement this interface to observe and mediate client actions on the region.
 
 Since most implementations will be interested in only a subset of hooks, this class uses
@@ -269,9 +269,9 @@
 
 default void
-postCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
+postCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
 Store store,
-org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList<? extends StoreFile> selected,
+java.util.List<? extends StoreFile> selected,
 CompactionLifeCycleTracker tracker,
 CompactionRequest request)
 Called after the StoreFiles to compact have been selected from the available
@@ -716,7 +716,7 @@
 
 
 preOpen
-default void preOpen(ObserverContext<RegionCoprocessorEnvironment> c)
+default void preOpen(ObserverContext<RegionCoprocessorEnvironment> c)
   throws IOException
 Called before the region is reported as open to the master.
 
@@ -733,7 +733,7 @@
 
 
 postOpen
-default void postOpen(ObserverContext<RegionCoprocessorEnvironment> c)
+default void postOpen(ObserverContext<RegionCoprocessorEnvironment> c)
 Called after the region is reported as open to the master.
 
 Parameters:
@@ -747,7 +747,7 @@
 
 
 postLogReplay
-default void postLogReplay(ObserverContext<RegionCoprocessorEnvironment> c)
+default void postLogReplay(ObserverContext<RegionCoprocessorEnvironment> c)
 Called after the log replay on the region is over.
 
 Parameters:
@@ -761,7 +761,7 @@
 
 
 preFlush
-default void preFlush(ObserverContext<RegionCoprocessorEnvironment> c)
+default void preFlush(ObserverContext<RegionCoprocessorEnvironment> c)
   throws IOException
 Called before the memstore is flushed to disk.
 
@@ -778,7 +778,7 @@
 
 
 preFlush
-default InternalScanner preFlush(ObserverContext<RegionCoprocessorEnvironment> c,
+default InternalScanner preFlush(ObserverContext<RegionCoprocessorEnvironment> c,
   Store store,
   InternalScanner scanner)
   throws IOException
@@ -802,7 +802,7 @@
 
 
 postFlush
-default void postFlush(ObserverContext<RegionCoprocessorEnvironment> c)
+default void postFlush(ObserverContext<RegionCoprocessorEnvironment> c)
   throws IOException
 Called after the memstore is flushed to disk.
 
@@ -819,7 +819,7 @@
 
 
 postFlush
-default void postFlush(ObserverContext<RegionCoprocessorEnvironment> c,
+default void postFlush(ObserverContext<RegionCoprocessorEnvironment> c,
   Store store,
   StoreFile resultFile)
   throws IOException
@@ -840,14 +840,14 @@
 
 
 preCompactSelection
-default void preCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
+default void preCompactSelection(ObserverContext<RegionCoprocessorEnvironment> c,
   Store store,
   List<? extends StoreFile> candidates,
   CompactionLifeCycleTracker tracker)
   throws IOException
 Called prior to selecting the StoreFiles to compact
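A sketch pairing the flush hooks shown above: the single-argument forms bracket the whole flush, while the Store-specific overloads see each store's scanner and resulting file. Signatures follow the method details above; the class name and bodies are illustrative.

import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;

public class FlushObserverSketch implements RegionObserver {
  @Override
  public void preFlush(ObserverContext<RegionCoprocessorEnvironment> c) throws IOException {
    // Called once before the memstore is flushed to disk.
  }

  @Override
  public InternalScanner preFlush(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, InternalScanner scanner) throws IOException {
    // Return the scanner unchanged; a wrapper here could filter cells on the way out.
    return scanner;
  }

  @Override
  public void postFlush(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, StoreFile resultFile) throws IOException {
    // resultFile is the new store file produced by this store's flush.
  }
}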

[14/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.RegionScannerImpl.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.RegionScannerImpl.html b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.RegionScannerImpl.html
index 12fe16f..b1e0997 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.RegionScannerImpl.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.RegionScannerImpl.html
@@ -1960,6279 +1960,6285 @@
   protected void doRegionCompactionPrep() throws IOException {
   }
 
-  @Override
-  public void triggerMajorCompaction() throws IOException {
-    stores.values().forEach(HStore::triggerMajorCompaction);
-  }
-
-  /**
-   * Synchronously compact all stores in the region.
-   * <p>This operation could block for a long time, so don't call it from a
-   * time-sensitive thread.
-   * <p>Note that no locks are taken to prevent possible conflicts between
-   * compaction and splitting activities. The regionserver does not normally compact
-   * and split in parallel. However by calling this method you may introduce
-   * unexpected and unhandled concurrency. Don't do this unless you know what
-   * you are doing.
-   *
-   * @param majorCompaction True to force a major compaction regardless of thresholds
-   * @throws IOException
-   */
-  public void compact(boolean majorCompaction) throws IOException {
-    if (majorCompaction) {
-      triggerMajorCompaction();
-    }
-    for (HStore s : stores.values()) {
-      Optional<CompactionContext> compaction = s.requestCompaction();
-      if (compaction.isPresent()) {
-        ThroughputController controller = null;
-        if (rsServices != null) {
-          controller = CompactionThroughputControllerFactory.create(rsServices, conf);
-        }
-        if (controller == null) {
-          controller = NoLimitThroughputController.INSTANCE;
-        }
-        compact(compaction.get(), s, controller, null);
-      }
-    }
-  }
-
-  /**
-   * This is a helper function that compact all the stores synchronously.
-   * <p>
-   * It is used by utilities and testing
-   */
-  @VisibleForTesting
-  public void compactStores() throws IOException {
-    for (HStore s : stores.values()) {
-      Optional<CompactionContext> compaction = s.requestCompaction();
-      if (compaction.isPresent()) {
-        compact(compaction.get(), s, NoLimitThroughputController.INSTANCE, null);
-      }
-    }
-  }
-
-  /**
-   * This is a helper function that compact the given store.
-   * <p>
-   * It is used by utilities and testing
-   */
-  @VisibleForTesting
-  void compactStore(byte[] family, ThroughputController throughputController) throws IOException {
-    HStore s = getStore(family);
-    Optional<CompactionContext> compaction = s.requestCompaction();
-    if (compaction.isPresent()) {
-      compact(compaction.get(), s, throughputController, null);
-    }
-  }
-
-  /**
-   * Called by compaction thread and after region is opened to compact the
-   * HStores if necessary.
-   *
-   * <p>This operation could block for a long time, so don't call it from a
-   * time-sensitive thread.
-   *
-   * Note that no locking is necessary at this level because compaction only
-   * conflicts with a region split, and that cannot happen because the region
-   * server does them sequentially and not in parallel.
-   *
-   * @param compaction Compaction details, obtained by requestCompaction()
-   * @param throughputController
-   * @return whether the compaction completed
-   */
+  /**
+   * Synchronously compact all stores in the region.
+   * <p>This operation could block for a long time, so don't call it from a
+   * time-sensitive thread.
+   * <p>Note that no locks are taken to prevent possible conflicts between
+   * compaction and splitting activities. The regionserver does not normally compact
+   * and split in parallel. However by calling this method you may introduce
+   * unexpected and unhandled concurrency. Don't do this unless you know what
+   * you are doing.
+   *
+   * @param majorCompaction True to force a major compaction regardless of thresholds
+   * @throws IOException
+   */
+  public void compact(boolean majorCompaction) throws IOException {
+    if (majorCompaction) {
+      stores.values().forEach(HStore::triggerMajorCompaction);
+    }
+    for (HStore s : stores.values()) {
+      
[17/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.MutationBatchOperation.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.MutationBatchOperation.html b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.MutationBatchOperation.html
index 12fe16f..b1e0997 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.MutationBatchOperation.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.MutationBatchOperation.html
@@ -1960,6279 +1960,6285 @@
[29/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/security/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/security/package-tree.html b/devapidocs/org/apache/hadoop/hbase/security/package-tree.html
index c4d3f6e..6a709a8 100644
--- a/devapidocs/org/apache/hadoop/hbase/security/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/security/package-tree.html
@@ -191,9 +191,9 @@
 
 java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable)
 
-org.apache.hadoop.hbase.security.SaslUtil.QualityOfProtection
-org.apache.hadoop.hbase.security.AuthMethod
 org.apache.hadoop.hbase.security.SaslStatus
+org.apache.hadoop.hbase.security.AuthMethod
+org.apache.hadoop.hbase.security.SaslUtil.QualityOfProtection
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityController.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityController.html b/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityController.html
index 679d04c..a4c07b4 100644
--- a/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityController.html
+++ b/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityController.html
@@ -597,7 +597,7 @@ implements RegionObserver
-postAppend, postBatchMutate, postBatchMutateIndispensably, postBulkLoadHFile, postCheckAndDelete, postCheckAndPut, postClose, postCloseRegionOperation, postCommitStoreFile, postCompact, postCompactSelection, postDelete, postExists, postFlush, postFlush, postGetOp, postIncrement, postPut, postReplayWALs, postScannerNext, postStartRegionOperation, postStoreFileReaderOpen, postWALRestore, preAppendAfterRowLock, preBulkLoadHFile, preCheckAndDelete, preCheckAndDeleteAfterRowLock, preCheckAndPut, preCheckAndPutAfterRowLock, preClose, preCommitStoreFile, preCompact, preCompactSelection, preDelete, preExists, preFlush, preFlush, preIncrementAfterRowLock, preOpen, prePut, preReplayWALs, preStoreFileReaderOpen, preWALRestore
+postAppend, postBatchMutate, postBatchMutateIndispensably, postBulkLoadHFile, postCheckAndDelete, postCheckAndPut, postClose, postCloseRegionOperation, postCommitStoreFile, postCompact, postCompactSelection, postDelete, postExists, postFlush, postFlush, postGetOp, postIncrement, postPut, postReplayWALs, postScannerNext, postStartRegionOperation, postStoreFileReaderOpen, postWALRestore, preAppendAfterRowLock, preBulkLoadHFile, preCheckAndDelete, preCheckAndDeleteAfterRowLock, preCheckAndPut, preCheckAndPutAfterRowLock, preClose, preCommitStoreFile, preCompact, preCompactSelection, preDelete, preExists, preFlush, preFlush, preIncrementAfterRowLock, preOpen, prePut, preReplayWALs, preStoreFileReaderOpen, preWALRestore
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/thrift/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/thrift/package-tree.html b/devapidocs/org/apache/hadoop/hbase/thrift/package-tree.html
index 29f4bfc..570c4fa 100644
--- a/devapidocs/org/apache/hadoop/hbase/thrift/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/thrift/package-tree.html
@@ -199,8 +199,8 @@
 java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable)
 
 org.apache.hadoop.hbase.thrift.ThriftMetrics.ThriftServerType
-org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactoryImpl.FactoryStorage
 org.apache.hadoop.hbase.thrift.ThriftServerRunner.ImplType
+org.apache.hadoop.hbase.thrift.MetricsThriftServerSourceFactoryImpl.FactoryStorage
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/tool/WriteSinkCoprocessor.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/tool/WriteSinkCoprocessor.html b/devapidocs/org/apache/hadoop/hbase/tool/WriteSinkCoprocessor.html
[35/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html b/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html
index 9fd4c05..17fb713 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html
@@ -115,7 +115,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class RegionCoprocessorHost
+public class RegionCoprocessorHost
 extends CoprocessorHost<RegionCoprocessor,RegionCoprocessorEnvironment>
 Implements the coprocessor environment and runtime support for coprocessors
  loaded within a Region.
@@ -351,8 +351,8 @@
 
 void
-postCompactSelection(HStore store,
-org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableList<HStoreFile> selected,
+postCompactSelection(HStore store,
+java.util.List<HStoreFile> selected,
 CompactionLifeCycleTracker tracker,
 CompactionRequest request,
 User user)
@@ -708,7 +708,7 @@
 
 
 LOG
-private static final org.apache.commons.logging.Log LOG
+private static final org.apache.commons.logging.Log LOG
 
 
@@ -717,7 +717,7 @@
 
 
 SHARED_DATA_MAP
-private static final org.apache.commons.collections4.map.ReferenceMap<String,ConcurrentMap<String,Object>> SHARED_DATA_MAP
+private static final org.apache.commons.collections4.map.ReferenceMap<String,ConcurrentMap<String,Object>> SHARED_DATA_MAP
 
 
@@ -726,7 +726,7 @@
 
 
 hasCustomPostScannerFilterRow
-private final boolean hasCustomPostScannerFilterRow
+private final boolean hasCustomPostScannerFilterRow
 
 
@@ -735,7 +735,7 @@
 
 
 rsServices
-RegionServerServices rsServices
+RegionServerServices rsServices
 The region server services
 
 
@@ -745,7 +745,7 @@
 
 
 region
-HRegion region
+HRegion region
 The region
 
 
@@ -755,7 +755,7 @@
 
 
 regionObserverGetter
-private CoprocessorHost.ObserverGetter<RegionCoprocessor,RegionObserver> regionObserverGetter
+private CoprocessorHost.ObserverGetter<RegionCoprocessor,RegionObserver> regionObserverGetter
 
 
@@ -764,7 +764,7 @@
 
 
 endpointObserverGetter
-private CoprocessorHost.ObserverGetter<RegionCoprocessor,EndpointObserver> endpointObserverGetter
+private CoprocessorHost.ObserverGetter<RegionCoprocessor,EndpointObserver> endpointObserverGetter
 
@@ -781,7 +781,7 @@ extends 
 
 RegionCoprocessorHost
-publicRegionCoprocessorHost(HRegionregion,
+publicRegionCoprocessorHost(HRegionregion,
  RegionServerServicesrsServices,
  
org.apache.hadoop.conf.Configurationconf)
 Constructor
@@ -807,7 +807,7 @@ extends 
 
 getTableCoprocessorAttrsFromSchema
-statichttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionCoprocessorHost.TableCoprocessorAttributegetTableCoprocessorAttrsFromSchema(org.apache.hadoop.conf.Configurationconf,
+statichttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionCoprocessorHost.TableCoprocessorAttributegetTableCoprocessorAttrsFromSchema(org.apache.hadoop.conf.Configurationconf,

 TableDescriptorhtd)
 
 
@@ -817,7 +817,7 @@ extends 
 
 testTableCoprocessorAttrs
-public staticvoidtestTableCoprocessorAttrs(org.apache.hadoop.conf.Configurationconf,
+public 

[22/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.BatchOperation.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.BatchOperation.html b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.BatchOperation.html
index 12fe16f..b1e0997 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.BatchOperation.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegion.BatchOperation.html
@@ -1960,6279 +1960,6285 @@
[02/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/41a7fcc5/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HStore.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HStore.html b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HStore.html
index 7e5e128..5bc0a56 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HStore.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HStore.html
@@ -1633,951 +1633,954 @@
     return StoreUtils.hasReferences(this.storeEngine.getStoreFileManager().getStorefiles());
   }
 
-  @Override
-  public CompactionProgress getCompactionProgress() {
-    return this.storeEngine.getCompactor().getProgress();
-  }
-
-  @Override
-  public boolean shouldPerformMajorCompaction() throws IOException {
-    for (HStoreFile sf : this.storeEngine.getStoreFileManager().getStorefiles()) {
-      // TODO: what are these reader checks all over the place?
-      if (sf.getReader() == null) {
-        LOG.debug("StoreFile " + sf + " has null Reader");
-        return false;
-      }
-    }
-    return storeEngine.getCompactionPolicy().shouldPerformMajorCompaction(
-        this.storeEngine.getStoreFileManager().getStorefiles());
-  }
-
-  public Optional<CompactionContext> requestCompaction() throws IOException {
-    return requestCompaction(NO_PRIORITY, CompactionLifeCycleTracker.DUMMY, null);
-  }
-
-  public Optional<CompactionContext> requestCompaction(int priority,
-      CompactionLifeCycleTracker tracker, User user) throws IOException {
-    // don't even select for compaction if writes are disabled
-    if (!this.areWritesEnabled()) {
-      return Optional.empty();
-    }
-    // Before we do compaction, try to get rid of unneeded files to simplify things.
-    removeUnneededFiles();
-
-    final CompactionContext compaction = storeEngine.createCompaction();
-    CompactionRequestImpl request = null;
-    this.lock.readLock().lock();
-    try {
-      synchronized (filesCompacting) {
-        // First, see if coprocessor would want to override selection.
-        if (this.getCoprocessorHost() != null) {
-          final List<HStoreFile> candidatesForCoproc = compaction.preSelect(this.filesCompacting);
-          boolean override = false;
-          //TODO: is it correct way to get CompactionRequest?
-          override = getCoprocessorHost().preCompactSelection(this, candidatesForCoproc,
-            tracker, user);
-          if (override) {
-            // Coprocessor is overriding normal file selection.
-            compaction.forceSelect(new CompactionRequestImpl(candidatesForCoproc));
-          }
-        }
-
-        // Normal case - coprocessor is not overriding file selection.
-        if (!compaction.hasSelection()) {
-          boolean isUserCompaction = priority == Store.PRIORITY_USER;
-          boolean mayUseOffPeak = offPeakHours.isOffPeakHour() &&
-              offPeakCompactionTracker.compareAndSet(false, true);
-          try {
-            compaction.select(this.filesCompacting, isUserCompaction,
-              mayUseOffPeak, forceMajor && filesCompacting.isEmpty());
-          } catch (IOException e) {
-            if (mayUseOffPeak) {
-              offPeakCompactionTracker.set(false);
-            }
-            throw e;
-          }
-          assert compaction.hasSelection();
-          if (mayUseOffPeak && !compaction.getRequest().isOffPeak()) {
-            // Compaction policy doesn't want to take advantage of off-peak.
-            offPeakCompactionTracker.set(false);
-          }
-        }
-        if (this.getCoprocessorHost() != null) {
-          this.getCoprocessorHost().postCompactSelection(
-              this, ImmutableList.copyOf(compaction.getRequest().getFiles()), tracker,
-              compaction.getRequest(), user);
-        }
-        // Finally, we have the resulting files list. Check if we have any files at all.
-        request = compaction.getRequest();
-        Collection<HStoreFile> selectedFiles = request.getFiles();
-        if (selectedFiles.isEmpty()) {
-          return Optional.empty();
-        }
-
-        addToCompactingFiles(selectedFiles);
-
-        // If we're enqueuing a major, clear the force flag.
-        this.forceMajor = this.forceMajor && !request.isMajor();
+  /**
+   * getter for CompactionProgress object
+   * @return CompactionProgress object; can be null
+   */
+  public CompactionProgress getCompactionProgress() {
+    return 
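A sketch of the request/selection handshake the removed method implements: requestCompaction() either returns an empty Optional (writes disabled, or nothing selected) or a context whose files have already been moved into filesCompacting. The probe below assumes an HStore handle from a test context; an executor such as CompactSplit would normally run the returned context.

import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;

public class RequestCompactionProbe {
  static boolean hasWork(HStore store) throws IOException {
    Optional<CompactionContext> compaction = store.requestCompaction();
    // Present means files were selected (possibly overridden by a coprocessor)
    // and reserved in filesCompacting; empty means nothing to do.
    return compaction.isPresent();
  }
}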

[51/51] [partial] hbase-site git commit: Published site at .

2017-10-23 Thread git-site-role
Published site at .


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/41a7fcc5
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/41a7fcc5
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/41a7fcc5

Branch: refs/heads/asf-site
Commit: 41a7fcc53c6ea06609ac94619a07d6d4a722bf50
Parents: c0a60b2
Author: jenkins 
Authored: Mon Oct 23 15:15:42 2017 +
Committer: jenkins 
Committed: Mon Oct 23 15:15:42 2017 +

--
 acid-semantics.html | 4 +-
 apache_hbase_reference_guide.pdf| 4 +-
 book.html   | 2 +-
 bulk-loads.html | 4 +-
 checkstyle-aggregate.html   | 32464 -
 checkstyle.rss  |22 +-
 coc.html| 4 +-
 cygwin.html | 4 +-
 dependencies.html   | 4 +-
 dependency-convergence.html | 4 +-
 dependency-info.html| 4 +-
 dependency-management.html  | 4 +-
 devapidocs/allclasses-frame.html| 2 +
 devapidocs/allclasses-noframe.html  | 2 +
 devapidocs/constant-values.html | 6 +-
 devapidocs/index-all.html   |72 +-
 .../hadoop/hbase/backup/BackupObserver.html | 2 +-
 .../hadoop/hbase/backup/package-tree.html   | 2 +-
 .../hbase/class-use/HDFSBlocksDistribution.html | 8 +-
 .../hadoop/hbase/client/package-tree.html   |28 +-
 .../hbase/constraint/ConstraintProcessor.html   | 2 +-
 .../RegionObserver.MutationType.html|10 +-
 .../hbase/coprocessor/RegionObserver.html   |   146 +-
 .../coprocessor/class-use/ObserverContext.html  | 4 +-
 .../class-use/RegionCoprocessorEnvironment.html | 4 +-
 ...serverWithMetrics.ExampleRegionObserver.html | 2 +-
 .../example/ZooKeeperScanPolicyObserver.html|20 +-
 .../hadoop/hbase/filter/package-tree.html   | 8 +-
 .../hadoop/hbase/io/hfile/package-tree.html | 8 +-
 .../apache/hadoop/hbase/ipc/package-tree.html   | 2 +-
 .../hadoop/hbase/mapreduce/package-tree.html| 4 +-
 .../org/apache/hadoop/hbase/master/HMaster.html | 2 +-
 .../master/HMasterCommandLine.LocalHMaster.html | 2 +-
 .../hbase/master/balancer/package-tree.html | 2 +-
 .../hadoop/hbase/master/package-tree.html   | 6 +-
 .../org/apache/hadoop/hbase/package-tree.html   |18 +-
 .../hadoop/hbase/procedure2/package-tree.html   | 8 +-
 .../hadoop/hbase/quotas/package-tree.html   | 6 +-
 ...t.AggregatingCompactionLifeCycleTracker.html |   417 +
 .../CompactSplit.CompactionRunner.html  |30 +-
 .../regionserver/CompactSplit.Rejection.html| 6 +-
 .../hadoop/hbase/regionserver/CompactSplit.html |   147 +-
 .../regionserver/HRegion.BatchOperation.html|34 +-
 .../regionserver/HRegion.BulkLoadListener.html  | 8 +-
 .../HRegion.FlushResult.Result.html |10 +-
 .../hbase/regionserver/HRegion.FlushResult.html | 8 +-
 .../HRegion.MutationBatchOperation.html |20 +-
 .../regionserver/HRegion.RegionScannerImpl.html |90 +-
 .../HRegion.ReplayBatchOperation.html   |18 +-
 .../regionserver/HRegion.RowLockContext.html|28 +-
 .../hbase/regionserver/HRegion.RowLockImpl.html |16 +-
 .../hadoop/hbase/regionserver/HRegion.html  |   493 +-
 .../HRegionServer.CompactionChecker.html|14 +-
 .../HRegionServer.MovedRegionInfo.html  |16 +-
 .../HRegionServer.MovedRegionsCleaner.html  |16 +-
 .../HRegionServer.PeriodicMemStoreFlusher.html  |12 +-
 .../hbase/regionserver/HRegionServer.html   |   786 +-
 .../regionserver/HStore.StoreFlusherImpl.html   |32 +-
 .../hadoop/hbase/regionserver/HStore.html   |   155 +-
 .../hadoop/hbase/regionserver/HStoreFile.html   |32 +-
 .../hbase/regionserver/RSRpcServices.html   |80 +-
 .../hbase/regionserver/Region.Operation.html|34 +-
 .../hbase/regionserver/Region.RowLock.html  | 4 +-
 .../hadoop/hbase/regionserver/Region.html   |   154 +-
 ...processorHost.BulkLoadObserverOperation.html | 4 +-
 ...RegionCoprocessorHost.RegionEnvironment.html |28 +-
 ...st.RegionEnvironmentForCoreCoprocessors.html | 8 +-
 ...CoprocessorHost.RegionObserverOperation.html | 6 +-
 ...processorHost.TableCoprocessorAttribute.html |20 +-
 .../regionserver/RegionCoprocessorHost.html |   160 +-
 ...ionServerServices.PostOpenDeployContext.html |12 +-
 ...erServices.RegionStateTransitionContext.html |20 +-
 
