hbase git commit: HBASE-20260 Conflict with new upgrade guide ADDENDUM

2018-03-27 Thread mdrob
Repository: hbase
Updated Branches:
  refs/heads/branch-2 f4fe0521a -> 78452113c


HBASE-20260 Conflict with new upgrade guide ADDENDUM


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/78452113
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/78452113
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/78452113

Branch: refs/heads/branch-2
Commit: 78452113cb7f314fd4ebf7425b6127cfc82698c4
Parents: f4fe052
Author: Mike Drob 
Authored: Mon Mar 26 12:42:49 2018 -0500
Committer: Mike Drob 
Committed: Tue Mar 27 20:09:59 2018 -0700

--
 src/main/asciidoc/_chapters/upgrading.adoc | 5 -
 1 file changed, 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/78452113/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 0ca808c..79ade65 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -310,8 +310,3 @@ Quitting...
 === Upgrading to 1.x
 
 Please consult the documentation published specifically for the version of 
HBase that you are upgrading to for details on the upgrade process.
-
-[[upgrade2.0]]
-=== Upgrading to 2.x
-
-Coming soon...



hbase git commit: HBASE-20260 Conflict with new upgrade guide ADDENDUM

2018-03-27 Thread mdrob
Repository: hbase
Updated Branches:
  refs/heads/master f7eefaa12 -> 3b6199a27


HBASE-20260 Conflict with new upgrade guide ADDENDUM


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3b6199a2
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3b6199a2
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3b6199a2

Branch: refs/heads/master
Commit: 3b6199a27a944f9f05ca6512c59766ed0f590f48
Parents: f7eefaa
Author: Mike Drob 
Authored: Mon Mar 26 12:42:49 2018 -0500
Committer: Mike Drob 
Committed: Tue Mar 27 20:09:32 2018 -0700

--
 src/main/asciidoc/_chapters/upgrading.adoc | 5 -
 1 file changed, 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3b6199a2/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 4f0c445..450b3a1 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -552,8 +552,3 @@ Doing a raw scan will now return results that have expired 
according to TTL sett
 === Upgrading to 1.x
 
 Please consult the documentation published specifically for the version of 
HBase that you are upgrading to for details on the upgrade process.
-
-[[upgrade2.0]]
-=== Upgrading to 2.x
-
-Coming soon...



[2/2] hbase git commit: HBASE-20130 (ADDENDUM) Use defaults (16020 & 16030) as base ports when the RS is bound to localhost

2018-03-27 Thread busbey
HBASE-20130 (ADDENDUM) Use defaults (16020 & 16030) as base ports when the RS 
is bound to localhost

  * fixed shellcheck errors
  * modified the script to set the environment variables HBASE_RS_BASE_PORT and
HBASE_RS_INFO_BASE_PORT to defaults only if they are not already set
  * modified the ref guide for the default master ports and for setting the
environment variables HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT to
support more than 10 RegionServer instances on localhost.

Signed-off-by: Michael Stack 
Signed-off-by: Chia-Ping Tsai 
Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2e612527
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2e612527
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2e612527

Branch: refs/heads/branch-2.0
Commit: 2e612527aab6a10ca154fe95f2eb00e2c348a1f2
Parents: 24f19fd
Author: Umesh Agashe 
Authored: Tue Mar 27 13:47:31 2018 -0700
Committer: Sean Busbey 
Committed: Tue Mar 27 21:26:29 2018 -0500

--
 bin/local-regionservers.sh   | 18 +++---
 src/main/asciidoc/_chapters/getting_started.adoc |  6 +++---
 2 files changed, 14 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2e612527/bin/local-regionservers.sh
--
diff --git a/bin/local-regionservers.sh b/bin/local-regionservers.sh
index 79dc5d0..97e5eed 100755
--- a/bin/local-regionservers.sh
+++ b/bin/local-regionservers.sh
@@ -21,8 +21,12 @@
 # Supports up to 10 regionservers (limitation = overlapping ports)
 # For supporting more instances select different values (e.g. 16200, 16300)
 # for HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT below
-HBASE_RS_BASE_PORT=16020
-HBASE_RS_INFO_BASE_PORT=16030
+if [ -z "$HBASE_RS_BASE_PORT" ]; then
+  HBASE_RS_BASE_PORT=16020
+fi
+if [ -z "$HBASE_RS_INFO_BASE_PORT" ]; then
+  HBASE_RS_INFO_BASE_PORT=16030
+fi
 
 bin=`dirname "${BASH_SOURCE-$0}"`
 bin=`cd "$bin" >/dev/null && pwd`
@@ -48,22 +52,22 @@ run_regionserver () {
   DN=$2
   export HBASE_IDENT_STRING="$USER-$DN"
   HBASE_REGIONSERVER_ARGS="\
--Dhbase.regionserver.port=`expr $HBASE_RS_BASE_PORT + $DN` \
--Dhbase.regionserver.info.port=`expr $HBASE_RS_INFO_BASE_PORT + $DN`"
+-Dhbase.regionserver.port=`expr "$HBASE_RS_BASE_PORT" + "$DN"` \
+-Dhbase.regionserver.info.port=`expr "$HBASE_RS_INFO_BASE_PORT" + "$DN"`"
 
   "$bin"/hbase-daemon.sh  --config "${HBASE_CONF_DIR}" \
 --autostart-window-size "${AUTOSTART_WINDOW_SIZE}" \
 --autostart-window-retry-limit "${AUTOSTART_WINDOW_RETRY_LIMIT}" \
-$1 regionserver $HBASE_REGIONSERVER_ARGS
+"$1" regionserver "$HBASE_REGIONSERVER_ARGS"
 }
 
 cmd=$1
 shift;
 
-for i in $*
+for i in "$@"
 do
   if [[ "$i" =~ ^[0-9]+$ ]]; then
-   run_regionserver $cmd $i
+   run_regionserver "$cmd" "$i"
   else
echo "Invalid argument"
   fi
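[Editorial note: the guard pattern introduced in this hunk can be exercised in isolation. A minimal sketch, not part of the commit; the variable names match the script, the demo offset is an assumption:]

```shell
#!/bin/sh
# Start from a clean slate so the demo is deterministic.
unset HBASE_RS_BASE_PORT HBASE_RS_INFO_BASE_PORT

# POSIX ${var:=default} assigns the default only when the variable is
# unset or empty -- the same effect as the [ -z ... ] guards in the diff.
: "${HBASE_RS_BASE_PORT:=16020}"
: "${HBASE_RS_INFO_BASE_PORT:=16030}"

# Per-instance ports are the base plus the instance offset (DN),
# mirroring the expr arithmetic in run_regionserver.
DN=2
rs_port=$((HBASE_RS_BASE_PORT + DN))
info_port=$((HBASE_RS_INFO_BASE_PORT + DN))
echo "regionserver port: $rs_port, info port: $info_port"
# -> regionserver port: 16022, info port: 16032
```

Both forms behave identically for unset-or-empty values; the commit keeps the explicit `if [ -z ... ]` form.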

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e612527/src/main/asciidoc/_chapters/getting_started.adoc
--
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc 
b/src/main/asciidoc/_chapters/getting_started.adoc
index dae3688..32eb669 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -384,8 +384,8 @@ The HMaster server controls the HBase cluster.
 You can start up to 9 backup HMaster servers, which makes 10 total HMasters, 
counting the primary.
 To start a backup HMaster, use the `local-master-backup.sh`.
 For each backup master you want to start, add a parameter representing the 
port offset for that master.
-Each HMaster uses three ports (16010, 16020, and 16030 by default). The port 
offset is added to these ports, so using an offset of 2, the backup HMaster 
would use ports 16012, 16022, and 16032.
-The following command starts 3 backup servers using ports 16012/16022/16032, 
16013/16023/16033, and 16015/16025/16035.
+Each HMaster uses two ports (16000 and 16010 by default). The port offset is 
added to these ports, so using an offset of 2, the backup HMaster would use 
ports 16002 and 16012.
+The following command starts 3 backup servers using ports 16002/16012, 
16003/16013, and 16005/16015.
 +
 
 
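[Editorial note: the offset arithmetic described in the changed paragraph can be sketched in shell. Illustrative only; 16000 and 16010 are the default master RPC and web UI ports named in the new text, and the offsets are the ones implied by the listed port pairs:]

```shell
#!/bin/sh
# Maps a backup-master port offset to its two ports, using the default
# bases from the paragraph above: 16000 (RPC) and 16010 (web UI).
ports_for_offset() {
  echo "$((16000 + $1))/$((16010 + $1))"
}

# The three offsets implied by ports 16002/16012, 16003/16013, 16005/16015.
for offset in 2 3 5; do
  echo "offset $offset -> $(ports_for_offset "$offset")"
done
# -> offset 2 -> 16002/16012
# -> offset 3 -> 16003/16013
# -> offset 5 -> 16005/16015
```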
@@ -411,7 +411,7 @@ The `local-regionservers.sh` command allows you to run 
multiple RegionServers.
 It works in a similar way to the `local-master-backup.sh` command, in that 
each parameter you provide represents the port offset for an instance.
 Each RegionServer requires two ports, and the default ports are 16020 and 
16030.
 Since HBase version 1.1.0, HMaster doesn't use region server ports; this 
leaves 10 ports (16020 

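[Editorial note: per the commit message, running more than 10 local RegionServers is now a matter of exporting different base ports before invoking the script. A hypothetical session; the alternate bases 16200/16300 come from the script's own comment, and the offsets are assumptions:]

```shell
#!/bin/sh
# With the addendum, local-regionservers.sh only applies the 16020/16030
# defaults when these variables are unset, so alternate bases survive:
export HBASE_RS_BASE_PORT=16200
export HBASE_RS_INFO_BASE_PORT=16300

# Hypothetical invocation for 12 instances (offsets 0..11), which would
# overlap ports on the default bases but not on 16200/16300:
#   ./bin/local-regionservers.sh start 0 1 2 3 4 5 6 7 8 9 10 11

# Instance 11 would land on these ports:
echo "$((HBASE_RS_BASE_PORT + 11)) $((HBASE_RS_INFO_BASE_PORT + 11))"
# -> 16211 16311
```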
[1/2] hbase git commit: HBASE-20130 (ADDENDUM) Use defaults (16020 & 16030) as base ports when the RS is bound to localhost

2018-03-27 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/branch-2 5712fd045 -> f4fe0521a
  refs/heads/branch-2.0 24f19fd5f -> 2e612527a


HBASE-20130 (ADDENDUM) Use defaults (16020 & 16030) as base ports when the RS 
is bound to localhost

  * fixed shellcheck errors
  * modified the script to set the environment variables HBASE_RS_BASE_PORT and
HBASE_RS_INFO_BASE_PORT to defaults only if they are not already set
  * modified the ref guide for the default master ports and for setting the
environment variables HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT to
support more than 10 RegionServer instances on localhost.

Signed-off-by: Michael Stack 
Signed-off-by: Chia-Ping Tsai 
Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f4fe0521
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f4fe0521
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f4fe0521

Branch: refs/heads/branch-2
Commit: f4fe0521a919bc9661b3ef7c49b9e459eff7adce
Parents: 5712fd0
Author: Umesh Agashe 
Authored: Tue Mar 27 13:47:31 2018 -0700
Committer: Sean Busbey 
Committed: Tue Mar 27 21:25:42 2018 -0500

--
 bin/local-regionservers.sh   | 18 +++---
 src/main/asciidoc/_chapters/getting_started.adoc |  6 +++---
 2 files changed, 14 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f4fe0521/bin/local-regionservers.sh
--
diff --git a/bin/local-regionservers.sh b/bin/local-regionservers.sh
index 79dc5d0..97e5eed 100755
--- a/bin/local-regionservers.sh
+++ b/bin/local-regionservers.sh
@@ -21,8 +21,12 @@
 # Supports up to 10 regionservers (limitation = overlapping ports)
 # For supporting more instances select different values (e.g. 16200, 16300)
 # for HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT below
-HBASE_RS_BASE_PORT=16020
-HBASE_RS_INFO_BASE_PORT=16030
+if [ -z "$HBASE_RS_BASE_PORT" ]; then
+  HBASE_RS_BASE_PORT=16020
+fi
+if [ -z "$HBASE_RS_INFO_BASE_PORT" ]; then
+  HBASE_RS_INFO_BASE_PORT=16030
+fi
 
 bin=`dirname "${BASH_SOURCE-$0}"`
 bin=`cd "$bin" >/dev/null && pwd`
@@ -48,22 +52,22 @@ run_regionserver () {
   DN=$2
   export HBASE_IDENT_STRING="$USER-$DN"
   HBASE_REGIONSERVER_ARGS="\
--Dhbase.regionserver.port=`expr $HBASE_RS_BASE_PORT + $DN` \
--Dhbase.regionserver.info.port=`expr $HBASE_RS_INFO_BASE_PORT + $DN`"
+-Dhbase.regionserver.port=`expr "$HBASE_RS_BASE_PORT" + "$DN"` \
+-Dhbase.regionserver.info.port=`expr "$HBASE_RS_INFO_BASE_PORT" + "$DN"`"
 
   "$bin"/hbase-daemon.sh  --config "${HBASE_CONF_DIR}" \
 --autostart-window-size "${AUTOSTART_WINDOW_SIZE}" \
 --autostart-window-retry-limit "${AUTOSTART_WINDOW_RETRY_LIMIT}" \
-$1 regionserver $HBASE_REGIONSERVER_ARGS
+"$1" regionserver "$HBASE_REGIONSERVER_ARGS"
 }
 
 cmd=$1
 shift;
 
-for i in $*
+for i in "$@"
 do
   if [[ "$i" =~ ^[0-9]+$ ]]; then
-   run_regionserver $cmd $i
+   run_regionserver "$cmd" "$i"
   else
echo "Invalid argument"
   fi

http://git-wip-us.apache.org/repos/asf/hbase/blob/f4fe0521/src/main/asciidoc/_chapters/getting_started.adoc
--
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc 
b/src/main/asciidoc/_chapters/getting_started.adoc
index 2229eee..1cdc0a2 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -366,8 +366,8 @@ The HMaster server controls the HBase cluster.
 You can start up to 9 backup HMaster servers, which makes 10 total HMasters, 
counting the primary.
 To start a backup HMaster, use the `local-master-backup.sh`.
 For each backup master you want to start, add a parameter representing the 
port offset for that master.
-Each HMaster uses three ports (16010, 16020, and 16030 by default). The port 
offset is added to these ports, so using an offset of 2, the backup HMaster 
would use ports 16012, 16022, and 16032.
-The following command starts 3 backup servers using ports 16012/16022/16032, 
16013/16023/16033, and 16015/16025/16035.
+Each HMaster uses two ports (16000 and 16010 by default). The port offset is 
added to these ports, so using an offset of 2, the backup HMaster would use 
ports 16002 and 16012.
+The following command starts 3 backup servers using ports 16002/16012, 
16003/16013, and 16005/16015.
 +
 
 
@@ -393,7 +393,7 @@ The `local-regionservers.sh` command allows you to run 
multiple RegionServers.
 It works in a similar way to the `local-master-backup.sh` command, in that 
each parameter you provide represents the port offset for an instance.
 Each RegionServer requires two ports, and the 

hbase git commit: HBASE-20130 (ADDENDUM) Use defaults (16020 & 16030) as base ports when the RS is bound to localhost

2018-03-27 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/master 0adcb4096 -> f7eefaa12


HBASE-20130 (ADDENDUM) Use defaults (16020 & 16030) as base ports when the RS 
is bound to localhost

  * fixed shellcheck errors
  * modified the script to set the environment variables HBASE_RS_BASE_PORT and
HBASE_RS_INFO_BASE_PORT to defaults only if they are not already set
  * modified the ref guide for the default master ports and for setting the
environment variables HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT to
support more than 10 RegionServer instances on localhost.

Signed-off-by: Michael Stack 
Signed-off-by: Chia-Ping Tsai 
Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f7eefaa1
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f7eefaa1
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f7eefaa1

Branch: refs/heads/master
Commit: f7eefaa126b8748d5a1d069d0ededb1222a45e5e
Parents: 0adcb40
Author: Umesh Agashe 
Authored: Tue Mar 27 13:47:31 2018 -0700
Committer: Sean Busbey 
Committed: Tue Mar 27 21:24:12 2018 -0500

--
 bin/local-regionservers.sh   | 18 +++---
 src/main/asciidoc/_chapters/getting_started.adoc |  6 +++---
 2 files changed, 14 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f7eefaa1/bin/local-regionservers.sh
--
diff --git a/bin/local-regionservers.sh b/bin/local-regionservers.sh
index 79dc5d0..97e5eed 100755
--- a/bin/local-regionservers.sh
+++ b/bin/local-regionservers.sh
@@ -21,8 +21,12 @@
 # Supports up to 10 regionservers (limitation = overlapping ports)
 # For supporting more instances select different values (e.g. 16200, 16300)
 # for HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT below
-HBASE_RS_BASE_PORT=16020
-HBASE_RS_INFO_BASE_PORT=16030
+if [ -z "$HBASE_RS_BASE_PORT" ]; then
+  HBASE_RS_BASE_PORT=16020
+fi
+if [ -z "$HBASE_RS_INFO_BASE_PORT" ]; then
+  HBASE_RS_INFO_BASE_PORT=16030
+fi
 
 bin=`dirname "${BASH_SOURCE-$0}"`
 bin=`cd "$bin" >/dev/null && pwd`
@@ -48,22 +52,22 @@ run_regionserver () {
   DN=$2
   export HBASE_IDENT_STRING="$USER-$DN"
   HBASE_REGIONSERVER_ARGS="\
--Dhbase.regionserver.port=`expr $HBASE_RS_BASE_PORT + $DN` \
--Dhbase.regionserver.info.port=`expr $HBASE_RS_INFO_BASE_PORT + $DN`"
+-Dhbase.regionserver.port=`expr "$HBASE_RS_BASE_PORT" + "$DN"` \
+-Dhbase.regionserver.info.port=`expr "$HBASE_RS_INFO_BASE_PORT" + "$DN"`"
 
   "$bin"/hbase-daemon.sh  --config "${HBASE_CONF_DIR}" \
 --autostart-window-size "${AUTOSTART_WINDOW_SIZE}" \
 --autostart-window-retry-limit "${AUTOSTART_WINDOW_RETRY_LIMIT}" \
-$1 regionserver $HBASE_REGIONSERVER_ARGS
+"$1" regionserver "$HBASE_REGIONSERVER_ARGS"
 }
 
 cmd=$1
 shift;
 
-for i in $*
+for i in "$@"
 do
   if [[ "$i" =~ ^[0-9]+$ ]]; then
-   run_regionserver $cmd $i
+   run_regionserver "$cmd" "$i"
   else
echo "Invalid argument"
   fi

http://git-wip-us.apache.org/repos/asf/hbase/blob/f7eefaa1/src/main/asciidoc/_chapters/getting_started.adoc
--
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc 
b/src/main/asciidoc/_chapters/getting_started.adoc
index 2229eee..1cdc0a2 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -366,8 +366,8 @@ The HMaster server controls the HBase cluster.
 You can start up to 9 backup HMaster servers, which makes 10 total HMasters, 
counting the primary.
 To start a backup HMaster, use the `local-master-backup.sh`.
 For each backup master you want to start, add a parameter representing the 
port offset for that master.
-Each HMaster uses three ports (16010, 16020, and 16030 by default). The port 
offset is added to these ports, so using an offset of 2, the backup HMaster 
would use ports 16012, 16022, and 16032.
-The following command starts 3 backup servers using ports 16012/16022/16032, 
16013/16023/16033, and 16015/16025/16035.
+Each HMaster uses two ports (16000 and 16010 by default). The port offset is 
added to these ports, so using an offset of 2, the backup HMaster would use 
ports 16002 and 16012.
+The following command starts 3 backup servers using ports 16002/16012, 
16003/16013, and 16005/16015.
 +
 
 
@@ -393,7 +393,7 @@ The `local-regionservers.sh` command allows you to run 
multiple RegionServers.
 It works in a similar way to the `local-master-backup.sh` command, in that 
each parameter you provide represents the port offset for an instance.
 Each RegionServer requires two ports, and the default ports are 16020 and 
16030.
 Since HBase version 

hbase git commit: HBASE-13884 Fix Compactions section in HBase book

2018-03-27 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master 69f5d707b -> 0adcb4096


HBASE-13884 Fix Compactions section in HBase book


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0adcb409
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0adcb409
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0adcb409

Branch: refs/heads/master
Commit: 0adcb40963392de6731311eb2423d2f9b425d52e
Parents: 69f5d70
Author: Michael Stack 
Authored: Tue Mar 27 17:40:31 2018 -0700
Committer: Michael Stack 
Committed: Tue Mar 27 17:40:35 2018 -0700

--
 src/main/asciidoc/_chapters/architecture.adoc | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0adcb409/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index 7be5f48..f35e118 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -1757,9 +1757,20 @@ These parameters will be explained in context, and then 
will be given in a table
 == Being Stuck
 
 When the MemStore gets too large, it needs to flush its contents to a 
StoreFile.
-However, a Store can only have `hbase.hstore.blockingStoreFiles` files, so the 
MemStore needs to wait for the number of StoreFiles to be reduced by one or 
more compactions.
-However, if the MemStore grows larger than 
`hbase.hregion.memstore.flush.size`, it is not able to flush its contents to a 
StoreFile.
-If the MemStore is too large and the number of StoreFiles is also too high, 
the algorithm is said to be "stuck". The compaction algorithm checks for this 
"stuck" situation and provides mechanisms to alleviate it.
+However, Stores are configured with a bound on the number of StoreFiles,
+`hbase.hstore.blockingStoreFiles`, and if it is exceeded, the MemStore flush
+must wait until the StoreFile count is reduced by one or more compactions. If
+the MemStore is too large and the number of StoreFiles is also too high, the
+algorithm is said to be "stuck". By default we'll wait on compactions up to
+`hbase.hstore.blockingWaitTime` milliseconds. If this period expires, we'll
+flush anyway even though we are in excess of the
+`hbase.hstore.blockingStoreFiles` count.
+
+Upping the `hbase.hstore.blockingStoreFiles` count will allow flushes to happen
+but a Store with many StoreFiles will likely have higher read latencies. Try to
+figure out why compactions are not keeping up. Is it a write spurt that is
+bringing about this situation, or is it a regular occurrence with the cluster
+under-provisioned for the volume of writes?
 
 [[exploringcompaction.policy]]
 == The ExploringCompactionPolicy Algorithm



[1/3] hbase git commit: HBASE-20199 Add a unit test to verify flush and snapshot permission requirements aren't excessive

2018-03-27 Thread elserj
Repository: hbase
Updated Branches:
  refs/heads/branch-2 6f1aa0edf -> 5712fd045
  refs/heads/branch-2.0 163c8beeb -> 24f19fd5f
  refs/heads/master 09ed7c7a1 -> 69f5d707b


HBASE-20199 Add a unit test to verify flush and snapshot permission 
requirements aren't excessive

Signed-off-by: Ted Yu 
Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/24f19fd5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/24f19fd5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/24f19fd5

Branch: refs/heads/branch-2.0
Commit: 24f19fd5f0da5b575093601abbfbabfd05f25039
Parents: 163c8be
Author: Josh Elser 
Authored: Mon Mar 26 17:52:37 2018 -0400
Committer: Josh Elser 
Committed: Tue Mar 27 20:04:16 2018 -0400

--
 .../access/TestAdminOnlyOperations.java | 268 --
 .../security/access/TestRpcAccessChecks.java| 362 +++
 2 files changed, 362 insertions(+), 268 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/24f19fd5/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
deleted file mode 100644
index 42d2f36..000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
+++ /dev/null
@@ -1,268 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.security.access;
-
-import static org.apache.hadoop.hbase.AuthUtil.toGroupEntry;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-import static org.mockito.Mockito.mock;
-
-import com.google.protobuf.Service;
-import com.google.protobuf.ServiceException;
-import java.io.IOException;
-import java.security.PrivilegedExceptionAction;
-import java.util.Collections;
-import java.util.HashMap;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseClassTestRule;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.ServerName;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
-import org.apache.hadoop.hbase.coprocessor.RegionServerCoprocessor;
-import org.apache.hadoop.hbase.ipc.protobuf.generated.TestProtos;
-import org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos;
-import org.apache.hadoop.hbase.security.AccessDeniedException;
-import org.apache.hadoop.hbase.security.User;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
-import org.apache.hadoop.hbase.testclassification.SecurityTests;
-import org.junit.BeforeClass;
-import org.junit.ClassRule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-/**
- * This class tests operations in MasterRpcServices which require ADMIN access.
- * It doesn't test all operations which require ADMIN access, only those which 
get vetted within
- * MasterRpcServices at the point of entry itself (unlike old approach of using
- * hooks in AccessController).
- *
- * Sidenote:
- * There is one big difference between how security tests for AccessController 
hooks work, and how
- * the tests in this class for security in MasterRpcServices work.
- * The difference arises because of the way AC & MasterRpcServices get the 
user.
- *
- * In AccessController, it first checks if there is an active rpc user in 
ObserverContext. If not,
- * it uses UserProvider for current user. This *might* make sense in the 
context 

[3/3] hbase git commit: HBASE-20199 Add a unit test to verify flush and snapshot permission requirements aren't excessive

2018-03-27 Thread elserj
HBASE-20199 Add a unit test to verify flush and snapshot permission 
requirements aren't excessive

Signed-off-by: Ted Yu 
Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/69f5d707
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/69f5d707
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/69f5d707

Branch: refs/heads/master
Commit: 69f5d707b64081f2c0d6dc7c097a0f5cc7418d05
Parents: 09ed7c7
Author: Josh Elser 
Authored: Mon Mar 26 17:52:37 2018 -0400
Committer: Josh Elser 
Committed: Tue Mar 27 20:17:08 2018 -0400

--
 .../access/TestAdminOnlyOperations.java | 268 --
 .../security/access/TestRpcAccessChecks.java| 362 +++
 2 files changed, 362 insertions(+), 268 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/69f5d707/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
deleted file mode 100644
index 42d2f36..000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
+++ /dev/null
@@ -1,268 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.security.access;
-
-import static org.apache.hadoop.hbase.AuthUtil.toGroupEntry;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-import static org.mockito.Mockito.mock;
-
-import com.google.protobuf.Service;
-import com.google.protobuf.ServiceException;
-import java.io.IOException;
-import java.security.PrivilegedExceptionAction;
-import java.util.Collections;
-import java.util.HashMap;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseClassTestRule;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.ServerName;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
-import org.apache.hadoop.hbase.coprocessor.RegionServerCoprocessor;
-import org.apache.hadoop.hbase.ipc.protobuf.generated.TestProtos;
-import org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos;
-import org.apache.hadoop.hbase.security.AccessDeniedException;
-import org.apache.hadoop.hbase.security.User;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
-import org.apache.hadoop.hbase.testclassification.SecurityTests;
-import org.junit.BeforeClass;
-import org.junit.ClassRule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-/**
- * This class tests operations in MasterRpcServices which require ADMIN access.
- * It doesn't test all operations which require ADMIN access, only those which 
get vetted within
- * MasterRpcServices at the point of entry itself (unlike old approach of using
- * hooks in AccessController).
- *
- * Sidenote:
- * There is one big difference between how security tests for AccessController 
hooks work, and how
- * the tests in this class for security in MasterRpcServices work.
- * The difference arises because of the way AC & MasterRpcServices get the 
user.
- *
- * In AccessController, it first checks if there is an active rpc user in 
ObserverContext. If not,
- * it uses UserProvider for current user. This *might* make sense in the 
context of coprocessors,
- * because they can be called outside the context of RPCs.
- * But in the context of MasterRpcServices, only one way makes sense - 
RPCServer.getRequestUser().

[2/3] hbase git commit: HBASE-20199 Add a unit test to verify flush and snapshot permission requirements aren't excessive

2018-03-27 Thread elserj
HBASE-20199 Add a unit test to verify flush and snapshot permission 
requirements aren't excessive

Signed-off-by: Ted Yu 
Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5712fd04
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5712fd04
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5712fd04

Branch: refs/heads/branch-2
Commit: 5712fd0451cbf8f40b780ff69a0e9994ff9af3a1
Parents: 6f1aa0e
Author: Josh Elser 
Authored: Mon Mar 26 17:52:37 2018 -0400
Committer: Josh Elser 
Committed: Tue Mar 27 20:10:28 2018 -0400

--
 .../access/TestAdminOnlyOperations.java | 268 --
 .../security/access/TestRpcAccessChecks.java| 362 +++
 2 files changed, 362 insertions(+), 268 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5712fd04/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
deleted file mode 100644
index 42d2f36..000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAdminOnlyOperations.java
+++ /dev/null
@@ -1,268 +0,0 @@
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.security.access;
-
-import static org.apache.hadoop.hbase.AuthUtil.toGroupEntry;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-import static org.mockito.Mockito.mock;
-
-import com.google.protobuf.Service;
-import com.google.protobuf.ServiceException;
-import java.io.IOException;
-import java.security.PrivilegedExceptionAction;
-import java.util.Collections;
-import java.util.HashMap;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseClassTestRule;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.ServerName;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
-import org.apache.hadoop.hbase.coprocessor.RegionServerCoprocessor;
-import org.apache.hadoop.hbase.ipc.protobuf.generated.TestProtos;
-import org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos;
-import org.apache.hadoop.hbase.security.AccessDeniedException;
-import org.apache.hadoop.hbase.security.User;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
-import org.apache.hadoop.hbase.testclassification.SecurityTests;
-import org.junit.BeforeClass;
-import org.junit.ClassRule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-/**
- * This class tests operations in MasterRpcServices which require ADMIN access.
- * It doesn't test all operations which require ADMIN access, only those which get vetted within
- * MasterRpcServices at the point of entry itself (unlike old approach of using
- * hooks in AccessController).
- *
- * Sidenote:
- * There is one big difference between how security tests for AccessController hooks work, and how
- * the tests in this class for security in MasterRpcServices work.
- * The difference arises because of the way AC & MasterRpcServices get the user.
- *
- * In AccessController, it first checks if there is an active rpc user in ObserverContext. If not,
- * it uses UserProvider for current user. This *might* make sense in the context of coprocessors,
- * because they can be called outside the context of RPCs.
- * But in the context of MasterRpcServices, only one way makes sense - 

[3/3] hbase git commit: HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

2018-03-27 Thread apurtell
HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

When prefetch on open is specified, there is a deadlocking case where,
if the prefetch is cancelled, the PrefetchExecutor interrupts the
threads as necessary. When that happens in FileIOEngine, it causes a
ClosedByInterruptException, which is a subclass of
ClosedChannelException. If we retry all ClosedChannelExceptions, this
will deadlock, as this access is expected to be interrupted. This
change stops calling refreshFileConnection for
ClosedByInterruptException.

Signed-off-by: Andrew Purtell 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/09ed7c7a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/09ed7c7a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/09ed7c7a

Branch: refs/heads/master
Commit: 09ed7c7a1004df9fb98e58dd17eb07a479e427f9
Parents: eb424ac
Author: Zach York 
Authored: Thu Mar 15 16:46:40 2018 -0700
Committer: Andrew Purtell 
Committed: Tue Mar 27 16:53:01 2018 -0700

--
 .../org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/09ed7c7a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
index 648d4bc..29b810f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
@@ -22,6 +22,7 @@ import java.io.File;
 import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.nio.ByteBuffer;
+import java.nio.channels.ClosedByInterruptException;
 import java.nio.channels.ClosedChannelException;
 import java.nio.channels.FileChannel;
 import java.util.Arrays;
@@ -229,6 +230,8 @@ public class FileIOEngine implements IOEngine {
   }
   try {
 accessLen = accessor.access(fileChannel, buffer, accessOffset);
+  } catch (ClosedByInterruptException e) {
+throw e;
   } catch (ClosedChannelException e) {
LOG.warn("Caught ClosedChannelException accessing BucketCache, reopening file. ", e);
 refreshFileConnection(accessFileNum);
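The ordering of the two catch clauses above is what makes the fix work: ClosedByInterruptException is a subclass of ClosedChannelException, so it must be caught first or the retry path would swallow the interrupt. A minimal standalone sketch of the same pattern (plain Java, not HBase code; the class and method names below are made up for illustration):

```java
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.ClosedChannelException;

public class CatchOrder {
    // Simulates a channel access that fails either because the thread was
    // interrupted (prefetch cancelled) or because the channel went stale.
    static String access(boolean interrupted) {
        try {
            if (interrupted) {
                throw new ClosedByInterruptException();
            }
            throw new ClosedChannelException();
        } catch (ClosedByInterruptException e) {
            // Interrupt was intentional: propagate instead of retrying,
            // otherwise we would loop forever on an interrupted access.
            return "rethrow";
        } catch (ClosedChannelException e) {
            // Genuinely stale channel: safe to reopen the file and retry.
            return "refresh";
        }
    }

    public static void main(String[] args) {
        System.out.println(access(true));   // interrupted access propagates
        System.out.println(access(false));  // stale channel triggers refresh
    }
}
```

Note that swapping the two clauses would not even compile: Java rejects a catch of a subclass that is unreachable behind its superclass, so the compiler enforces that the specific case is handled first.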



[1/3] hbase git commit: HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

2018-03-27 Thread apurtell
Repository: hbase
Updated Branches:
  refs/heads/branch-2 a601c57f9 -> 6f1aa0edf
  refs/heads/branch-2.0 7251ab6f9 -> 163c8beeb
  refs/heads/master eb424ac5f -> 09ed7c7a1


HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

When prefetch on open is specified, there is a deadlocking case where,
if the prefetch is cancelled, the PrefetchExecutor interrupts the
threads as necessary. When that happens in FileIOEngine, it causes a
ClosedByInterruptException, which is a subclass of
ClosedChannelException. If we retry all ClosedChannelExceptions, this
will deadlock, as this access is expected to be interrupted. This
change stops calling refreshFileConnection for
ClosedByInterruptException.

Signed-off-by: Andrew Purtell 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6f1aa0ed
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6f1aa0ed
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6f1aa0ed

Branch: refs/heads/branch-2
Commit: 6f1aa0edff50d8a0a71c5011f0736e11f8986f98
Parents: a601c57
Author: Zach York 
Authored: Thu Mar 15 16:46:40 2018 -0700
Committer: Andrew Purtell 
Committed: Tue Mar 27 16:52:59 2018 -0700

--
 .../org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6f1aa0ed/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
index 648d4bc..29b810f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
@@ -22,6 +22,7 @@ import java.io.File;
 import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.nio.ByteBuffer;
+import java.nio.channels.ClosedByInterruptException;
 import java.nio.channels.ClosedChannelException;
 import java.nio.channels.FileChannel;
 import java.util.Arrays;
@@ -229,6 +230,8 @@ public class FileIOEngine implements IOEngine {
   }
   try {
 accessLen = accessor.access(fileChannel, buffer, accessOffset);
+  } catch (ClosedByInterruptException e) {
+throw e;
   } catch (ClosedChannelException e) {
LOG.warn("Caught ClosedChannelException accessing BucketCache, reopening file. ", e);
 refreshFileConnection(accessFileNum);



[2/3] hbase git commit: HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

2018-03-27 Thread apurtell
HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

When prefetch on open is specified, there is a deadlocking case where,
if the prefetch is cancelled, the PrefetchExecutor interrupts the
threads as necessary. When that happens in FileIOEngine, it causes a
ClosedByInterruptException, which is a subclass of
ClosedChannelException. If we retry all ClosedChannelExceptions, this
will deadlock, as this access is expected to be interrupted. This
change stops calling refreshFileConnection for
ClosedByInterruptException.

Signed-off-by: Andrew Purtell 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/163c8bee
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/163c8bee
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/163c8bee

Branch: refs/heads/branch-2.0
Commit: 163c8beebae9d27be098b73ad30578cd1380b691
Parents: 7251ab6
Author: Zach York 
Authored: Thu Mar 15 16:46:40 2018 -0700
Committer: Andrew Purtell 
Committed: Tue Mar 27 16:52:59 2018 -0700

--
 .../org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/163c8bee/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
index 648d4bc..29b810f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
@@ -22,6 +22,7 @@ import java.io.File;
 import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.nio.ByteBuffer;
+import java.nio.channels.ClosedByInterruptException;
 import java.nio.channels.ClosedChannelException;
 import java.nio.channels.FileChannel;
 import java.util.Arrays;
@@ -229,6 +230,8 @@ public class FileIOEngine implements IOEngine {
   }
   try {
 accessLen = accessor.access(fileChannel, buffer, accessOffset);
+  } catch (ClosedByInterruptException e) {
+throw e;
   } catch (ClosedChannelException e) {
LOG.warn("Caught ClosedChannelException accessing BucketCache, reopening file. ", e);
 refreshFileConnection(accessFileNum);



[1/2] hbase git commit: HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

2018-03-27 Thread apurtell
Repository: hbase
Updated Branches:
  refs/heads/branch-1 8018c28c2 -> 773af3e0c
  refs/heads/branch-1.4 d6036447b -> 85457bd94


HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

When prefetch on open is specified, there is a deadlocking case where,
if the prefetch is cancelled, the PrefetchExecutor interrupts the
threads as necessary. When that happens in FileIOEngine, it causes a
ClosedByInterruptException, which is a subclass of
ClosedChannelException. If we retry all ClosedChannelExceptions, this
will deadlock, as this access is expected to be interrupted. This
change stops calling refreshFileConnection for
ClosedByInterruptException.

Signed-off-by: Andrew Purtell 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/773af3e0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/773af3e0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/773af3e0

Branch: refs/heads/branch-1
Commit: 773af3e0ca649fb361674becf10691b598cd6be8
Parents: 8018c28
Author: Zach York 
Authored: Thu Mar 15 16:46:40 2018 -0700
Committer: Andrew Purtell 
Committed: Tue Mar 27 22:59:58 2018 +

--
 .../org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/773af3e0/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
index 7b773bd..ddefa85 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
@@ -22,6 +22,7 @@ import java.io.File;
 import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.nio.ByteBuffer;
+import java.nio.channels.ClosedByInterruptException;
 import java.nio.channels.ClosedChannelException;
 import java.nio.channels.FileChannel;
 import java.util.Arrays;
@@ -189,6 +190,8 @@ public class FileIOEngine implements IOEngine {
   }
   try {
 accessLen = accessor.access(fileChannel, buffer, accessOffset);
+  } catch (ClosedByInterruptException e) {
+throw e;
   } catch (ClosedChannelException e) {
LOG.warn("Caught ClosedChannelException accessing BucketCache, reopening file. ", e);
 refreshFileConnection(accessFileNum);



[2/2] hbase git commit: HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

2018-03-27 Thread apurtell
HBASE-20280 Fix possibility of deadlocking in refreshFileConnections

When prefetch on open is specified, there is a deadlocking case where,
if the prefetch is cancelled, the PrefetchExecutor interrupts the
threads as necessary. When that happens in FileIOEngine, it causes a
ClosedByInterruptException, which is a subclass of
ClosedChannelException. If we retry all ClosedChannelExceptions, this
will deadlock, as this access is expected to be interrupted. This
change stops calling refreshFileConnection for
ClosedByInterruptException.

Signed-off-by: Andrew Purtell 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/85457bd9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/85457bd9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/85457bd9

Branch: refs/heads/branch-1.4
Commit: 85457bd94645a3bd655f0f8e1531d434f10e9df1
Parents: d603644
Author: Zach York 
Authored: Thu Mar 15 16:46:40 2018 -0700
Committer: Andrew Purtell 
Committed: Tue Mar 27 23:00:02 2018 +

--
 .../org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/85457bd9/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
index 7b773bd..ddefa85 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
@@ -22,6 +22,7 @@ import java.io.File;
 import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.nio.ByteBuffer;
+import java.nio.channels.ClosedByInterruptException;
 import java.nio.channels.ClosedChannelException;
 import java.nio.channels.FileChannel;
 import java.util.Arrays;
@@ -189,6 +190,8 @@ public class FileIOEngine implements IOEngine {
   }
   try {
 accessLen = accessor.access(fileChannel, buffer, accessOffset);
+  } catch (ClosedByInterruptException e) {
+throw e;
   } catch (ClosedChannelException e) {
LOG.warn("Caught ClosedChannelException accessing BucketCache, reopening file. ", e);
 refreshFileConnection(accessFileNum);



svn commit: r25999 - in /release/hbase: 1.3.1/ 1.4.1/

2018-03-27 Thread apurtell
Author: apurtell
Date: Tue Mar 27 23:39:48 2018
New Revision: 25999

Log:
Prune old releases 1.3.1 and 1.4.1

Removed:
release/hbase/1.3.1/
release/hbase/1.4.1/



hbase git commit: HBASE-16568 Remove Cygwin-oriented instructions (for installing HBase in Windows OS) from official reference materials

2018-03-27 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master d87139989 -> eb424ac5f


HBASE-16568 Remove Cygwin-oriented instructions (for installing HBase in Windows OS) from official reference materials


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/eb424ac5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/eb424ac5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/eb424ac5

Branch: refs/heads/master
Commit: eb424ac5f458c47a29a91f1d093baa9232c105ba
Parents: d871399
Author: Daniel Vimont 
Authored: Tue Oct 11 16:00:31 2016 +0900
Committer: Michael Stack 
Committed: Tue Mar 27 16:38:27 2018 -0700

--
 src/main/asciidoc/_chapters/unit_testing.adoc |   2 -
 src/site/asciidoc/cygwin.adoc | 197 -
 src/site/site.xml |   1 -
 src/site/xdoc/cygwin.xml  | 245 -
 4 files changed, 445 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/eb424ac5/src/main/asciidoc/_chapters/unit_testing.adoc
--
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc b/src/main/asciidoc/_chapters/unit_testing.adoc
index e503f81..3329a75 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -327,7 +327,5 @@ A record is inserted, a Get is performed from the same table, and the insertion
 
 NOTE: Starting the mini-cluster takes about 20-30 seconds, but that should be appropriate for integration testing.
 
-To use an HBase mini-cluster on Microsoft Windows, you need to use a Cygwin environment.
-
 See the paper at link:http://blog.sematext.com/2010/08/30/hbase-case-study-using-hbasetestingutility-for-local-testing-development/[HBase Case-Study: Using HBaseTestingUtility for Local Testing and Development] (2010) for more information about HBaseTestingUtility.

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb424ac5/src/site/asciidoc/cygwin.adoc
--
diff --git a/src/site/asciidoc/cygwin.adoc b/src/site/asciidoc/cygwin.adoc
deleted file mode 100644
index 1056dec..000
--- a/src/site/asciidoc/cygwin.adoc
+++ /dev/null
@@ -1,197 +0,0 @@
-
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-
-
-
-== Installing Apache HBase (TM) on Windows using Cygwin
-
-== Introduction
-
-link:https://hbase.apache.org[Apache HBase (TM)] is a distributed, column-oriented store, modeled after Google's link:http://research.google.com/archive/bigtable.html[BigTable]. Apache HBase is built on top of link:https://hadoop.apache.org[Hadoop] for its link:https://hadoop.apache.org/mapreduce[MapReduce] link:https://hadoop.apache.org/hdfs[distributed file system] implementations. All these projects are open-source and part of the link:https://www.apache.org[Apache Software Foundation].
-
-== Purpose
-
-This document explains the *intricacies* of running Apache HBase on Windows using Cygwin* as an all-in-one single-node installation for testing and development. The HBase link:https://hbase.apache.org/apidocs/overview-summary.html#overview_description[Overview] and link:book.html#getting_started[QuickStart] guides on the other hand go a long way in explaining how to setup link:https://hadoop.apache.org/hbase[HBase] in more complex deployment scenarios.
-
-== Installation
-
-For running Apache HBase on Windows, 3 technologies are required: 
-* Java
-* Cygwin
-* SSH 
-
-The following paragraphs detail the installation of each of the aforementioned technologies.
-
-=== Java
-
-HBase depends on the link:http://java.sun.com/javase/6/[Java Platform, Standard Edition, 6 Release]. So the target system has to be provided with at least the Java Runtime Environment (JRE); however if the system will also be used for development, the Java Development Kit (JDK) is preferred. You can download the latest versions for both from 

svn commit: r25998 - /dev/hbase/hbase-1.3.2RC0/

2018-03-27 Thread apurtell
Author: apurtell
Date: Tue Mar 27 23:38:06 2018
New Revision: 25998

Log:
Remove old hbase-1.3.2RC0

Removed:
dev/hbase/hbase-1.3.2RC0/



svn commit: r25997 - /dev/hbase/hbase-1.3.2RC1/ /release/hbase/1.3.2/

2018-03-27 Thread apurtell
Author: apurtell
Date: Tue Mar 27 23:33:10 2018
New Revision: 25997

Log:
Release HBase 1.3.2

Added:
release/hbase/1.3.2/
  - copied from r25996, dev/hbase/hbase-1.3.2RC1/
Removed:
dev/hbase/hbase-1.3.2RC1/



[1/3] hbase git commit: HBASE-20111 A region's splittable state now includes the configuration splitPolicy

2018-03-27 Thread elserj
Repository: hbase
Updated Branches:
  refs/heads/branch-2 c329a3438 -> a601c57f9
  refs/heads/branch-2.0 cbea942ef -> 7251ab6f9
  refs/heads/master 19d7bdcb4 -> d87139989


HBASE-20111 A region's splittable state now includes the configuration splitPolicy

The Master asks a RegionServer whether a Region can be split or not, primarily to
verify that the region is not closing, opening, etc. This change has the RegionServer
also consult the configured RegionSplitPolicy.

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7251ab6f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7251ab6f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7251ab6f

Branch: refs/heads/branch-2.0
Commit: 7251ab6f96426a2af0ed622ab2f4a25f0f3d
Parents: cbea942
Author: Rajeshbabu Chintaguntla 
Authored: Tue Mar 27 14:46:18 2018 -0400
Committer: Josh Elser 
Committed: Tue Mar 27 14:48:48 2018 -0400

--
 .../assignment/SplitTableRegionProcedure.java   |  2 +-
 .../hbase/regionserver/RSRpcServices.java   |  5 ++-
 .../apache/hadoop/hbase/client/TestAdmin1.java  | 32 +++-
 3 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7251ab6f/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
index ffd92d1..994983f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
@@ -179,7 +179,7 @@ public class SplitTableRegionProcedure
 }
 
 if (!splittable) {
-  IOException e = new IOException(regionToSplit.getShortNameToLog() + " NOT splittable");
+  IOException e = new DoNotRetryIOException(regionToSplit.getShortNameToLog() + " NOT splittable");
   if (splittableCheckIOE != null) e.initCause(splittableCheckIOE);
   throw e;
 }
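The one-line change in the hunk above matters because a DoNotRetryIOException tells the HBase client to give up immediately: a region whose split policy forbids splitting will never become splittable by retrying. A self-contained sketch of that retry-classification idea (plain Java with illustrative names, not HBase's actual client code):

```java
// Marker for permanent failures that a retry loop should not repeat.
class DoNotRetryException extends RuntimeException {
    DoNotRetryException(String msg) { super(msg); }
}

public class RetryDemo {
    static int attempts = 0;

    // Simulates an RPC that always fails; 'terminal' picks the failure kind.
    static void rpc(boolean terminal) {
        attempts++;
        if (terminal) throw new DoNotRetryException("region NOT splittable");
        throw new RuntimeException("transient failure");
    }

    // Returns how many attempts were made before giving up.
    static int callWithRetries(boolean terminal, int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            try {
                rpc(terminal);
            } catch (DoNotRetryException e) {
                return attempts;   // permanent failure: stop immediately
            } catch (RuntimeException e) {
                // transient failure: loop and retry
            }
        }
        return attempts;
    }

    public static void main(String[] args) {
        attempts = 0;
        System.out.println(callWithRetries(true, 5));   // stops after 1 attempt
        attempts = 0;
        System.out.println(callWithRetries(false, 5));  // exhausts all 5 attempts
    }
}
```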

http://git-wip-us.apache.org/repos/asf/hbase/blob/7251ab6f/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index 92e081b..3db7c08 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -1729,10 +1729,13 @@ public class RSRpcServices implements HBaseRPCErrorHandler,
   HRegion region = getRegion(request.getRegion());
   RegionInfo info = region.getRegionInfo();
   byte[] bestSplitRow = null;
+  boolean shouldSplit = true;
   if (request.hasBestSplitRow() && request.getBestSplitRow()) {
 HRegion r = region;
 region.startRegionOperation(Operation.SPLIT_REGION);
 r.forceSplit(null);
+// Even after setting force split if split policy says no to split then we should not split.
+shouldSplit = region.getSplitPolicy().shouldSplit() && !info.isMetaRegion();
 bestSplitRow = r.checkSplit();
 // when all table data are in memstore, bestSplitRow = null
 // try to flush region first
@@ -1747,7 +1750,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler,
   if (request.hasCompactionState() && request.getCompactionState()) {
 builder.setCompactionState(ProtobufUtil.createCompactionState(region.getCompactionState()));
   }
-  builder.setSplittable(region.isSplittable());
+  builder.setSplittable(region.isSplittable() && shouldSplit);
   builder.setMergeable(region.isMergeable());
   if (request.hasBestSplitRow() && request.getBestSplitRow() && bestSplitRow != null) {
 builder.setBestSplitRow(UnsafeByteOperations.unsafeWrap(bestSplitRow));
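The RSRpcServices hunk above ANDs three conditions into the splittable flag reported back to the Master: the region's lifecycle state, the configured split policy's decision, and a guard against splitting the meta region. A tiny sketch of that combined check (illustrative interface and names, not HBase's real ones):

```java
// Stand-in for a configurable split policy decision.
interface SplitPolicy {
    boolean shouldSplit();
}

public class SplittableCheck {
    // Before the patch only lifecycleOk was consulted; the patch also ANDs in
    // the policy decision and the meta-region guard.
    static boolean splittable(boolean lifecycleOk, SplitPolicy policy, boolean isMeta) {
        boolean shouldSplit = policy.shouldSplit() && !isMeta;
        return lifecycleOk && shouldSplit;
    }

    public static void main(String[] args) {
        SplitPolicy always = () -> true;
        SplitPolicy never = () -> false;
        System.out.println(splittable(true, always, false)); // splittable
        System.out.println(splittable(true, never, false));  // policy vetoes
        System.out.println(splittable(true, always, true));  // meta region vetoes
    }
}
```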

http://git-wip-us.apache.org/repos/asf/hbase/blob/7251ab6f/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
index c48d130..8ac2ddaf 100644
--- 

[2/3] hbase git commit: HBASE-20111 A region's splittable state now includes the configuration splitPolicy

2018-03-27 Thread elserj
HBASE-20111 A region's splittable state now includes the configuration splitPolicy

The Master asks a RegionServer whether a Region can be split or not, primarily to
verify that the region is not closing, opening, etc. This change has the RegionServer
also consult the configured RegionSplitPolicy.

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a601c57f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a601c57f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a601c57f

Branch: refs/heads/branch-2
Commit: a601c57f9765d81206efebc33c5fbb35fc3310ff
Parents: c329a34
Author: Rajeshbabu Chintaguntla 
Authored: Tue Mar 27 14:46:18 2018 -0400
Committer: Josh Elser 
Committed: Tue Mar 27 14:58:58 2018 -0400

--
 .../assignment/SplitTableRegionProcedure.java   |  2 +-
 .../hbase/regionserver/RSRpcServices.java   |  5 ++-
 .../apache/hadoop/hbase/client/TestAdmin1.java  | 32 +++-
 3 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a601c57f/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
index ffd92d1..994983f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
@@ -179,7 +179,7 @@ public class SplitTableRegionProcedure
 }
 
 if (!splittable) {
-  IOException e = new IOException(regionToSplit.getShortNameToLog() + " NOT splittable");
+  IOException e = new DoNotRetryIOException(regionToSplit.getShortNameToLog() + " NOT splittable");
   if (splittableCheckIOE != null) e.initCause(splittableCheckIOE);
   throw e;
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/a601c57f/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index 2793d2d..2c2d3cc 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -1736,10 +1736,13 @@ public class RSRpcServices implements HBaseRPCErrorHandler,
   HRegion region = getRegion(request.getRegion());
   RegionInfo info = region.getRegionInfo();
   byte[] bestSplitRow = null;
+  boolean shouldSplit = true;
   if (request.hasBestSplitRow() && request.getBestSplitRow()) {
 HRegion r = region;
 region.startRegionOperation(Operation.SPLIT_REGION);
 r.forceSplit(null);
+// Even after setting force split if split policy says no to split then we should not split.
+shouldSplit = region.getSplitPolicy().shouldSplit() && !info.isMetaRegion();
 bestSplitRow = r.checkSplit();
 // when all table data are in memstore, bestSplitRow = null
 // try to flush region first
@@ -1754,7 +1757,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler,
   if (request.hasCompactionState() && request.getCompactionState()) {
 builder.setCompactionState(ProtobufUtil.createCompactionState(region.getCompactionState()));
   }
-  builder.setSplittable(region.isSplittable());
+  builder.setSplittable(region.isSplittable() && shouldSplit);
   builder.setMergeable(region.isMergeable());
   if (request.hasBestSplitRow() && request.getBestSplitRow() && bestSplitRow != null) {
 builder.setBestSplitRow(UnsafeByteOperations.unsafeWrap(bestSplitRow));

http://git-wip-us.apache.org/repos/asf/hbase/blob/a601c57f/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
index c48d130..8ac2ddaf 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
@@ -31,6 +31,7 @@ import java.util.List;
 import java.util.Map;
 import 

[3/3] hbase git commit: HBASE-20111 A region's splittable state now includes the configuration splitPolicy

2018-03-27 Thread elserj
HBASE-20111 A region's splittable state now includes the configuration splitPolicy

The Master asks a RegionServer whether a Region can be split or not, primarily to
verify that the region is not closing, opening, etc. This change has the RegionServer
also consult the configured RegionSplitPolicy.

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d8713998
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d8713998
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d8713998

Branch: refs/heads/master
Commit: d87139989fae093d8d395f0b1963952749dd2386
Parents: 19d7bdc
Author: Rajeshbabu Chintaguntla 
Authored: Tue Mar 27 14:46:18 2018 -0400
Committer: Josh Elser 
Committed: Tue Mar 27 18:42:49 2018 -0400

--
 .../assignment/SplitTableRegionProcedure.java   |  2 +-
 .../hbase/regionserver/RSRpcServices.java   |  5 ++-
 .../apache/hadoop/hbase/client/TestAdmin1.java  | 32 +++-
 3 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d8713998/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
index 9f7ca17..341affb 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
@@ -179,7 +179,7 @@ public class SplitTableRegionProcedure
 }
 
 if (!splittable) {
-  IOException e = new IOException(regionToSplit.getShortNameToLog() + " NOT splittable");
+  IOException e = new DoNotRetryIOException(regionToSplit.getShortNameToLog() + " NOT splittable");
   if (splittableCheckIOE != null) e.initCause(splittableCheckIOE);
   throw e;
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/d8713998/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index 348c9b6..f3bb24d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -1736,10 +1736,13 @@ public class RSRpcServices implements HBaseRPCErrorHandler,
       HRegion region = getRegion(request.getRegion());
       RegionInfo info = region.getRegionInfo();
       byte[] bestSplitRow = null;
+      boolean shouldSplit = true;
       if (request.hasBestSplitRow() && request.getBestSplitRow()) {
         HRegion r = region;
         region.startRegionOperation(Operation.SPLIT_REGION);
         r.forceSplit(null);
+        // Even after setting force split if split policy says no to split then we should not split.
+        shouldSplit = region.getSplitPolicy().shouldSplit() && !info.isMetaRegion();
         bestSplitRow = r.checkSplit();
         // when all table data are in memstore, bestSplitRow = null
         // try to flush region first
@@ -1754,7 +1757,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler,
       if (request.hasCompactionState() && request.getCompactionState()) {
         builder.setCompactionState(ProtobufUtil.createCompactionState(region.getCompactionState()));
       }
-      builder.setSplittable(region.isSplittable());
+      builder.setSplittable(region.isSplittable() && shouldSplit);
       builder.setMergeable(region.isMergeable());
       if (request.hasBestSplitRow() && request.getBestSplitRow() && bestSplitRow != null) {
         builder.setBestSplitRow(UnsafeByteOperations.unsafeWrap(bestSplitRow));
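The net effect of this hunk is that a forced split request still consults the configured RegionSplitPolicy, and a meta region is never reported as splittable. Reduced to a stand-alone predicate, with hypothetical simplified types rather than the real HBase classes, the decision looks like this:

```java
// Simplified model of the "splittable" flag computed in RSRpcServices:
// the response flag is the AND of the region's own state, the split
// policy's verdict, and "not a meta region" (when a split was forced).
public class SplittableSketch {

    // Stand-in for the configured RegionSplitPolicy.
    interface SplitPolicy { boolean shouldSplit(); }

    static boolean splittable(boolean regionIsSplittable,
                              boolean forceSplitRequested,
                              SplitPolicy policy,
                              boolean isMetaRegion) {
        boolean shouldSplit = true;
        if (forceSplitRequested) {
            // Even after forcing a split, the policy can still veto it.
            shouldSplit = policy.shouldSplit() && !isMetaRegion;
        }
        return regionIsSplittable && shouldSplit;
    }

    public static void main(String[] args) {
        SplitPolicy vetoes = () -> false;
        SplitPolicy allows = () -> true;
        System.out.println(splittable(true, true, vetoes, false)); // policy veto
        System.out.println(splittable(true, true, allows, true));  // meta region
        System.out.println(splittable(true, true, allows, false)); // allowed
        System.out.println(splittable(true, false, vetoes, false)); // no force request
    }
}
```

The accompanying TestAdmin1 change exercises exactly this: a table whose split policy refuses to split should not be reported as splittable even when a split is forced.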

http://git-wip-us.apache.org/repos/asf/hbase/blob/d8713998/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
index c48d130..8ac2ddaf 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java
@@ -31,6 +31,7 @@ import java.util.List;
 import java.util.Map;
 import 

hbase git commit: HBASE-20224 Web UI is broken in standalone mode - addendum for hbase-archetypes module

2018-03-27 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/master a7609bb3c -> 19d7bdcb4


HBASE-20224 Web UI is broken in standalone mode - addendum for hbase-archetypes module


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/19d7bdcb
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/19d7bdcb
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/19d7bdcb

Branch: refs/heads/master
Commit: 19d7bdcb4ae282eab40bf7081c73ac72f7426d0a
Parents: a7609bb
Author: tedyu 
Authored: Tue Mar 27 13:51:44 2018 -0700
Committer: tedyu 
Committed: Tue Mar 27 13:51:44 2018 -0700

--
 .../src/test/resources/hbase-site.xml   | 39 
 .../src/test/resources/hbase-site.xml   | 39 
 2 files changed, 78 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/19d7bdcb/hbase-archetypes/hbase-client-project/src/test/resources/hbase-site.xml
--
diff --git a/hbase-archetypes/hbase-client-project/src/test/resources/hbase-site.xml b/hbase-archetypes/hbase-client-project/src/test/resources/hbase-site.xml
new file mode 100644
index 000..858d428
--- /dev/null
+++ b/hbase-archetypes/hbase-client-project/src/test/resources/hbase-site.xml
@@ -0,0 +1,39 @@
+<?xml version="1.0"?>
+<configuration>
+  <property>
+    <name>hbase.defaults.for.version.skip</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>hbase.hconnection.threads.keepalivetime</name>
+    <value>3</value>
+  </property>
+  <property>
+    <name>hbase.localcluster.assign.random.ports</name>
+    <value>true</value>
+    <description>
+      Assign random ports to master and RS info server (UI).
+    </description>
+  </property>
+</configuration>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d7bdcb/hbase-archetypes/hbase-shaded-client-project/src/test/resources/hbase-site.xml
--
diff --git a/hbase-archetypes/hbase-shaded-client-project/src/test/resources/hbase-site.xml b/hbase-archetypes/hbase-shaded-client-project/src/test/resources/hbase-site.xml
new file mode 100644
index 000..858d428
--- /dev/null
+++ b/hbase-archetypes/hbase-shaded-client-project/src/test/resources/hbase-site.xml
@@ -0,0 +1,39 @@
+<?xml version="1.0"?>
+<configuration>
+  <property>
+    <name>hbase.defaults.for.version.skip</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>hbase.hconnection.threads.keepalivetime</name>
+    <value>3</value>
+  </property>
+  <property>
+    <name>hbase.localcluster.assign.random.ports</name>
+    <value>true</value>
+    <description>
+      Assign random ports to master and RS info server (UI).
+    </description>
+  </property>
+</configuration>
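Both new test resource files use Hadoop's `*-site.xml` layout: `<property>` elements holding `<name>`/`<value>` (and an optional `<description>`) under a `<configuration>` root. In HBase these files are normally loaded through `HBaseConfiguration.create()`; purely as an illustration of the format, the sketch below parses such a document with only the JDK's DOM API:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Parse a Hadoop-style *-site.xml snippet and print each name=value pair.
// Real HBase code would instead call HBaseConfiguration.create(), which
// reads hbase-site.xml from the classpath.
public class SiteXmlSketch {
    public static void main(String[] args) throws Exception {
        String xml =
            "<configuration>"
          + "  <property>"
          + "    <name>hbase.localcluster.assign.random.ports</name>"
          + "    <value>true</value>"
          + "  </property>"
          + "</configuration>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String name = p.getElementsByTagName("name").item(0).getTextContent().trim();
            String value = p.getElementsByTagName("value").item(0).getTextContent().trim();
            System.out.println(name + "=" + value);
        }
    }
}
```

Setting `hbase.localcluster.assign.random.ports` to `true` in the archetype tests avoids info-server port clashes when a local cluster starts on a shared build host.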



hbase git commit: HBASE-20291 Fix The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT missing with hadoop 3 profile

2018-03-27 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/master 2a2258656 -> a7609bb3c


HBASE-20291 Fix The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT missing with hadoop 3 profile

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a7609bb3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a7609bb3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a7609bb3

Branch: refs/heads/master
Commit: a7609bb3cd1a47015a0b30ab056a98410d11c80e
Parents: 2a22586
Author: Artem Ervits 
Authored: Mon Mar 26 22:05:29 2018 -0400
Committer: tedyu 
Committed: Tue Mar 27 08:33:17 2018 -0700

--
 hbase-common/pom.xml | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a7609bb3/hbase-common/pom.xml
--
diff --git a/hbase-common/pom.xml b/hbase-common/pom.xml
index 5ae8e0b..6ac590f 100644
--- a/hbase-common/pom.xml
+++ b/hbase-common/pom.xml
@@ -371,6 +371,10 @@
          <groupId>org.apache.htrace</groupId>
          <artifactId>htrace-core</artifactId>
        </exclusion>
+       <exclusion>
+         <groupId>net.minidev</groupId>
+         <artifactId>json-smart</artifactId>
+       </exclusion>
      </exclusions>
    </dependency>



[03/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.RandomizedMatrix.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.RandomizedMatrix.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.RandomizedMatrix.html
index bcb65f1..a9d5986 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.RandomizedMatrix.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.RandomizedMatrix.html
@@ -39,1086 +39,1087 @@
 031import java.util.Scanner;
 032import java.util.Set;
 033import java.util.TreeMap;
-034import 
org.apache.commons.cli.CommandLine;
-035import 
org.apache.commons.cli.GnuParser;
-036import 
org.apache.commons.cli.HelpFormatter;
-037import org.apache.commons.cli.Options;
-038import 
org.apache.commons.cli.ParseException;
-039import 
org.apache.commons.lang3.StringUtils;
-040import 
org.apache.hadoop.conf.Configuration;
-041import org.apache.hadoop.fs.FileSystem;
-042import 
org.apache.hadoop.hbase.ClusterMetrics.Option;
-043import 
org.apache.hadoop.hbase.HBaseConfiguration;
-044import 
org.apache.hadoop.hbase.HConstants;
-045import 
org.apache.hadoop.hbase.ServerName;
-046import 
org.apache.hadoop.hbase.TableName;
-047import 
org.apache.hadoop.hbase.client.Admin;
-048import 
org.apache.hadoop.hbase.client.ClusterConnection;
-049import 
org.apache.hadoop.hbase.client.Connection;
-050import 
org.apache.hadoop.hbase.client.ConnectionFactory;
-051import 
org.apache.hadoop.hbase.client.RegionInfo;
-052import 
org.apache.hadoop.hbase.favored.FavoredNodeAssignmentHelper;
-053import 
org.apache.hadoop.hbase.favored.FavoredNodesPlan;
-054import 
org.apache.hadoop.hbase.util.FSUtils;
-055import 
org.apache.hadoop.hbase.util.MunkresAssignment;
-056import 
org.apache.hadoop.hbase.util.Pair;
-057import 
org.apache.yetus.audience.InterfaceAudience;
-058import org.slf4j.Logger;
-059import org.slf4j.LoggerFactory;
-060
-061import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-062import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-063import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService.BlockingInterface;
-064import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.UpdateFavoredNodesRequest;
-065import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.UpdateFavoredNodesResponse;
-066
-067/**
-068 * A tool that is used for manipulating 
and viewing favored nodes information
-069 * for regions. Run with -h to get a list 
of the options
-070 */
-071@InterfaceAudience.Private
-072// TODO: Remove? Unused. Partially 
implemented only.
-073public class RegionPlacementMaintainer 
{
-074  private static final Logger LOG = 
LoggerFactory.getLogger(RegionPlacementMaintainer.class
-075  .getName());
-076  //The cost of a placement that should 
never be assigned.
-077  private static final float MAX_COST = 
Float.POSITIVE_INFINITY;
-078
-079  // The cost of a placement that is 
undesirable but acceptable.
-080  private static final float AVOID_COST = 
10f;
-081
-082  // The amount by which the cost of a 
placement is increased if it is the
-083  // last slot of the server. This is 
done to more evenly distribute the slop
-084  // amongst servers.
-085  private static final float 
LAST_SLOT_COST_PENALTY = 0.5f;
-086
-087  // The amount by which the cost of a 
primary placement is penalized if it is
-088  // not the host currently serving the 
region. This is done to minimize moves.
-089  private static final float 
NOT_CURRENT_HOST_PENALTY = 0.1f;
-090
-091  private static boolean 
USE_MUNKRES_FOR_PLACING_SECONDARY_AND_TERTIARY = false;
-092
-093  private Configuration conf;
-094  private final boolean 
enforceLocality;
-095  private final boolean 
enforceMinAssignmentMove;
-096  private RackManager rackManager;
-097  private Set<TableName> targetTableSet;
-098  private final Connection connection;
-099
-100  public 
RegionPlacementMaintainer(Configuration conf) {
-101this(conf, true, true);
-102  }
-103
-104  public 
RegionPlacementMaintainer(Configuration conf, boolean enforceLocality,
-105  boolean enforceMinAssignmentMove) 
{
-106this.conf = conf;
-107this.enforceLocality = 
enforceLocality;
-108this.enforceMinAssignmentMove = 
enforceMinAssignmentMove;
-109    this.targetTableSet = new HashSet<>();
-110this.rackManager = new 
RackManager(conf);
-111try {
-112  this.connection = 
ConnectionFactory.createConnection(this.conf);
-113} catch (IOException e) {
-114  throw new RuntimeException(e);
-115}
-116  }
-117
-118  private static void printHelp(Options 
opt) {
-119new HelpFormatter().printHelp(
-120"RegionPlacement  -w | -u | 
-n | -v | -t | -h | -overwrite -r regionName -f favoredNodes " 

hbase-site git commit: INFRA-10751 Empty commit

2018-03-27 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site e0fb1fdea -> 53df288a2


INFRA-10751 Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/53df288a
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/53df288a
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/53df288a

Branch: refs/heads/asf-site
Commit: 53df288a219a6deaa05a4d2210a767a99d03261d
Parents: e0fb1fd
Author: jenkins 
Authored: Tue Mar 27 14:48:51 2018 +
Committer: jenkins 
Committed: Tue Mar 27 14:48:51 2018 +

--

--




[17/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
index d7aa8b1..98a45a0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
@@ -680,1330 +680,1333 @@
 672}
 673    List<HRegionLocation> locations = new ArrayList<>();
 674    for (RegionInfo regionInfo : regions) {
-675  RegionLocations list = 
locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
-676  if (list != null) {
-677for (HRegionLocation loc : 
list.getRegionLocations()) {
-678  if (loc != null) {
-679locations.add(loc);
-680  }
-681}
-682  }
-683}
-684return locations;
-685  }
-686
-687  @Override
-688  public HRegionLocation 
locateRegion(final TableName tableName, final byte[] row)
-689  throws IOException {
-690RegionLocations locations = 
locateRegion(tableName, row, true, true);
-691return locations == null ? null : 
locations.getRegionLocation();
-692  }
-693
-694  @Override
-695  public HRegionLocation 
relocateRegion(final TableName tableName, final byte[] row)
-696  throws IOException {
-697RegionLocations locations =
-698  relocateRegion(tableName, row, 
RegionReplicaUtil.DEFAULT_REPLICA_ID);
-699return locations == null ? null
-700  : 
locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
-701  }
-702
-703  @Override
-704  public RegionLocations 
relocateRegion(final TableName tableName,
-705  final byte [] row, int replicaId) 
throws IOException{
-706// Since this is an explicit request 
not to use any caching, finding
-707// disabled tables should not be 
desirable.  This will ensure that an exception is thrown when
-708// the first time a disabled table is 
interacted with.
-709    if (!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
-710  throw new 
TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
-711}
-712
-713return locateRegion(tableName, row, 
false, true, replicaId);
-714  }
+675  if 
(!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+676continue;
+677  }
+678  RegionLocations list = 
locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
+679  if (list != null) {
+680for (HRegionLocation loc : 
list.getRegionLocations()) {
+681  if (loc != null) {
+682locations.add(loc);
+683  }
+684}
+685  }
+686}
+687return locations;
+688  }
+689
+690  @Override
+691  public HRegionLocation 
locateRegion(final TableName tableName, final byte[] row)
+692  throws IOException {
+693RegionLocations locations = 
locateRegion(tableName, row, true, true);
+694return locations == null ? null : 
locations.getRegionLocation();
+695  }
+696
+697  @Override
+698  public HRegionLocation 
relocateRegion(final TableName tableName, final byte[] row)
+699  throws IOException {
+700RegionLocations locations =
+701  relocateRegion(tableName, row, 
RegionReplicaUtil.DEFAULT_REPLICA_ID);
+702return locations == null ? null
+703  : 
locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
+704  }
+705
+706  @Override
+707  public RegionLocations 
relocateRegion(final TableName tableName,
+708  final byte [] row, int replicaId) 
throws IOException{
+709// Since this is an explicit request 
not to use any caching, finding
+710// disabled tables should not be 
desirable.  This will ensure that an exception is thrown when
+711// the first time a disabled table is 
interacted with.
+712    if (!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
+713  throw new 
TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
+714}
 715
-716  @Override
-717  public RegionLocations 
locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-718  boolean retry) throws IOException 
{
-719return locateRegion(tableName, row, 
useCache, retry, RegionReplicaUtil.DEFAULT_REPLICA_ID);
-720  }
-721
-722  @Override
-723  public RegionLocations 
locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-724  boolean retry, int replicaId) 
throws IOException {
-725checkClosed();
-726if (tableName == null || 
tableName.getName().length == 0) {
-727  throw new 
IllegalArgumentException("table name cannot 

[40/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
 
b/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
index 0bd9e91..e95132e 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
@@ -113,7 +113,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-private final class ConnectionImplementation.MasterServiceStubMaker
+private final class ConnectionImplementation.MasterServiceStubMaker
 extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
 Class to make a MasterServiceStubMaker stub.
 
@@ -197,7 +197,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 MasterServiceStubMaker
-privateMasterServiceStubMaker()
+privateMasterServiceStubMaker()
 
 
 
@@ -214,7 +214,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 isMasterRunning
-privatevoidisMasterRunning(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService.BlockingInterfacestub)
+privatevoidisMasterRunning(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService.BlockingInterfacestub)
   throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 
 Throws:
@@ -228,7 +228,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 makeStubNoRetries
-privateorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService.BlockingInterfacemakeStubNoRetries()
+privateorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService.BlockingInterfacemakeStubNoRetries()

   throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException,

  
org.apache.zookeeper.KeeperException
 Create a stub. Try once only. It is not typed because there 
is no common type to protobuf
@@ -248,7 +248,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 makeStub
-org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService.BlockingInterfacemakeStub()
+org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService.BlockingInterfacemakeStub()

  throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 Create a stub against the master. Retry if necessary.
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
 
b/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
index 55cb0ff..11e4528 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.ServerErrors.html
@@ -113,7 +113,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-private static class ConnectionImplementation.ServerErrorTracker.ServerErrors
+private static class ConnectionImplementation.ServerErrorTracker.ServerErrors
 extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
 The record of errors for a server.
 
@@ -208,7 +208,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 retries
-private finalhttps://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicInteger.html?is-external=true;
 title="class or interface in java.util.concurrent.atomic">AtomicInteger retries
+private finalhttps://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicInteger.html?is-external=true;
 title="class or interface in java.util.concurrent.atomic">AtomicInteger retries
 
 
 
@@ -225,7 +225,7 @@ extends 

[22/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/backup/BackupDriver.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/BackupDriver.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/BackupDriver.html
index f71cea0..65c1a1f 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/backup/BackupDriver.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/backup/BackupDriver.html
@@ -45,180 +45,181 @@
 037import java.io.IOException;
 038import java.net.URI;
 039
-040import 
org.apache.commons.cli.CommandLine;
-041import 
org.apache.hadoop.conf.Configuration;
-042import org.apache.hadoop.fs.Path;
-043import 
org.apache.hadoop.hbase.HBaseConfiguration;
-044import 
org.apache.hadoop.hbase.backup.BackupRestoreConstants.BackupCommand;
-045import 
org.apache.hadoop.hbase.backup.impl.BackupCommands;
-046import 
org.apache.hadoop.hbase.backup.impl.BackupManager;
-047import 
org.apache.hadoop.hbase.util.AbstractHBaseTool;
-048import 
org.apache.hadoop.hbase.util.FSUtils;
-049import 
org.apache.hadoop.util.ToolRunner;
-050import org.apache.log4j.Level;
-051import org.apache.log4j.LogManager;
-052import 
org.apache.yetus.audience.InterfaceAudience;
-053import org.slf4j.Logger;
-054import org.slf4j.LoggerFactory;
-055
-056/**
-057 *
-058 * Command-line entry point for backup 
operation
-059 *
-060 */
-061@InterfaceAudience.Private
-062public class BackupDriver extends 
AbstractHBaseTool {
-063
-064  private static final Logger LOG = 
LoggerFactory.getLogger(BackupDriver.class);
-065  private CommandLine cmd;
-066
-067  public BackupDriver() throws 
IOException {
-068init();
-069  }
-070
-071  protected void init() throws 
IOException {
-072// disable irrelevant loggers to 
avoid it mess up command output
-073
LogUtils.disableZkAndClientLoggers();
-074  }
-075
-076  private int parseAndRun(String[] args) 
throws IOException {
-077
-078// Check if backup is enabled
-079if 
(!BackupManager.isBackupEnabled(getConf())) {
-080  
System.err.println(BackupRestoreConstants.ENABLE_BACKUP);
-081  return -1;
-082}
-083
-084
System.out.println(BackupRestoreConstants.VERIFY_BACKUP);
-085
-086String cmd = null;
-087String[] remainArgs = null;
-088if (args == null || args.length == 0) 
{
-089  printToolUsage();
-090  return -1;
-091} else {
-092  cmd = args[0];
-093  remainArgs = new String[args.length 
- 1];
-094      if (args.length > 1) {
-095System.arraycopy(args, 1, 
remainArgs, 0, args.length - 1);
-096  }
-097}
-098
-099BackupCommand type = 
BackupCommand.HELP;
-100if 
(BackupCommand.CREATE.name().equalsIgnoreCase(cmd)) {
-101  type = BackupCommand.CREATE;
-102} else if 
(BackupCommand.HELP.name().equalsIgnoreCase(cmd)) {
-103  type = BackupCommand.HELP;
-104} else if 
(BackupCommand.DELETE.name().equalsIgnoreCase(cmd)) {
-105  type = BackupCommand.DELETE;
-106} else if 
(BackupCommand.DESCRIBE.name().equalsIgnoreCase(cmd)) {
-107  type = BackupCommand.DESCRIBE;
-108} else if 
(BackupCommand.HISTORY.name().equalsIgnoreCase(cmd)) {
-109  type = BackupCommand.HISTORY;
-110} else if 
(BackupCommand.PROGRESS.name().equalsIgnoreCase(cmd)) {
-111  type = BackupCommand.PROGRESS;
-112} else if 
(BackupCommand.SET.name().equalsIgnoreCase(cmd)) {
-113  type = BackupCommand.SET;
-114} else if 
(BackupCommand.REPAIR.name().equalsIgnoreCase(cmd)) {
-115  type = BackupCommand.REPAIR;
-116} else if 
(BackupCommand.MERGE.name().equalsIgnoreCase(cmd)) {
-117  type = BackupCommand.MERGE;
-118} else {
-119  System.out.println("Unsupported 
command for backup: " + cmd);
-120  printToolUsage();
-121  return -1;
-122}
-123
-124// enable debug logging
-125if (this.cmd.hasOption(OPTION_DEBUG)) 
{
-126  
LogManager.getLogger("org.apache.hadoop.hbase.backup").setLevel(Level.DEBUG);
-127} else {
-128  
LogManager.getLogger("org.apache.hadoop.hbase.backup").setLevel(Level.INFO);
-129}
-130
-131BackupCommands.Command command = 
BackupCommands.createCommand(getConf(), type, this.cmd);
-132    if (type == BackupCommand.CREATE && conf != null) {
-133  ((BackupCommands.CreateCommand) 
command).setConf(conf);
-134}
-135try {
-136  command.execute();
-137} catch (IOException e) {
-138  if 
(e.getMessage().equals(BackupCommands.INCORRECT_USAGE)) {
-139return -1;
-140  }
-141  throw e;
-142} finally {
-143  command.finish();
-144}
-145return 0;
-146  }
-147
-148  @Override
-149  protected void addOptions() {
-150// define supported options
-151addOptNoArg(OPTION_DEBUG, 
OPTION_DEBUG_DESC);
-152addOptWithArg(OPTION_TABLE, 
OPTION_TABLE_DESC);
-153addOptWithArg(OPTION_BANDWIDTH, 
OPTION_BANDWIDTH_DESC);
-154

[51/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/e0fb1fde
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/e0fb1fde
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/e0fb1fde

Branch: refs/heads/asf-site
Commit: e0fb1fdea581b8795d4fb4fda0389a363211c9a9
Parents: c41a1fc
Author: jenkins 
Authored: Tue Mar 27 14:47:33 2018 +
Committer: jenkins 
Committed: Tue Mar 27 14:47:33 2018 +

--
 acid-semantics.html |  4 +-
 apache_hbase_reference_guide.pdf| 393235 
 apidocs/index-all.html  |  6 +-
 .../hadoop/hbase/snapshot/ExportSnapshot.html   |  6 +-
 .../hadoop/hbase/snapshot/SnapshotInfo.html |  6 +-
 .../apache/hadoop/hbase/util/RegionMover.html   | 30 +-
 .../hadoop/hbase/snapshot/ExportSnapshot.html   | 86 +-
 .../hadoop/hbase/snapshot/SnapshotInfo.html | 34 +-
 .../apache/hadoop/hbase/util/RegionMover.html   |   1923 +-
 book.html   |   1458 +-
 bulk-loads.html |  4 +-
 checkstyle-aggregate.html   |  17146 +-
 checkstyle.rss  |  8 +-
 coc.html|  4 +-
 cygwin.html |  4 +-
 dependencies.html   |  4 +-
 dependency-convergence.html |  8 +-
 dependency-info.html|  4 +-
 dependency-management.html  | 42 +-
 devapidocs/allclasses-frame.html|  2 +
 devapidocs/allclasses-noframe.html  |  2 +
 devapidocs/constant-values.html |  6 +-
 devapidocs/index-all.html   |134 +-
 ...poundConfiguration.ImmutableConfWrapper.html | 18 +-
 ...ompoundConfiguration.ImmutableConfigMap.html | 10 +-
 .../hadoop/hbase/CompoundConfiguration.html | 38 +-
 .../hadoop/hbase/backup/BackupDriver.html   | 36 +-
 .../hadoop/hbase/backup/RestoreDriver.html  | 42 +-
 .../BackupRestoreConstants.BackupCommand.html   |  4 +-
 .../impl/BackupCommands.BackupSetCommand.html   |  8 +-
 .../backup/impl/BackupCommands.Command.html |  4 +-
 .../impl/BackupCommands.CreateCommand.html  |  8 +-
 .../impl/BackupCommands.DeleteCommand.html  |  8 +-
 .../impl/BackupCommands.DescribeCommand.html|  8 +-
 .../backup/impl/BackupCommands.HelpCommand.html |  8 +-
 .../impl/BackupCommands.HistoryCommand.html |  8 +-
 .../impl/BackupCommands.MergeCommand.html   |  8 +-
 .../impl/BackupCommands.ProgressCommand.html|  8 +-
 .../impl/BackupCommands.RepairCommand.html  |  8 +-
 .../hbase/backup/impl/BackupCommands.html   |  8 +-
 .../hadoop/hbase/backup/impl/BackupManager.html | 76 +-
 .../impl/class-use/BackupCommands.Command.html  |  4 +-
 .../MapReduceBackupCopyJob.SnapshotCopy.html|  4 +-
 .../hadoop/hbase/backup/package-tree.html   |  4 +-
 .../hbase/class-use/DoNotRetryIOException.html  |  2 +-
 ...ectionImplementation.MasterServiceState.html | 18 +-
 ...onImplementation.MasterServiceStubMaker.html | 10 +-
 ...ntation.ServerErrorTracker.ServerErrors.html | 10 +-
 ...ectionImplementation.ServerErrorTracker.html | 20 +-
 .../hbase/client/ConnectionImplementation.html  | 96 +-
 .../hadoop/hbase/client/package-tree.html   | 26 +-
 .../hadoop/hbase/filter/package-tree.html   |  8 +-
 ...ilePrettyPrinter.KeyValueStatsCollector.html | 38 +-
 ...ilePrettyPrinter.SimpleReporter.Builder.html | 24 +-
 .../HFilePrettyPrinter.SimpleReporter.html  | 16 +-
 .../hbase/io/hfile/HFilePrettyPrinter.html  | 78 +-
 .../bucket/BucketAllocator.BucketSizeInfo.html  | 12 +-
 .../hadoop/hbase/io/hfile/package-tree.html |  6 +-
 .../apache/hadoop/hbase/ipc/package-tree.html   |  2 +-
 .../hadoop/hbase/mapreduce/package-tree.html|  2 +-
 .../master/HMaster.InitializationMonitor.html   | 20 +-
 .../master/HMaster.MasterStoppedException.html  |  4 +-
 .../hbase/master/HMaster.RedirectServlet.html   | 12 +-
 .../org/apache/hadoop/hbase/master/HMaster.html |472 +-
 .../master/HMasterCommandLine.LocalHMaster.html | 10 +-
 .../hadoop/hbase/master/HMasterCommandLine.html | 22 +-
 ...ionPlacementMaintainer.RandomizedMatrix.html | 22 +-
 .../hbase/master/RegionPlacementMaintainer.html | 66 +-
 .../hbase/master/balancer/package-tree.html |  2 +-
 .../master/cleaner/CleanerChore.Action.html |  4 +-
 .../cleaner/CleanerChore.CleanerTask.html   |  

[12/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.html
index 50caf18..61bf913 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.html
@@ -45,773 +45,774 @@
 037import java.util.TimeZone;
 038import java.util.concurrent.TimeUnit;
 039
-040import 
org.apache.commons.cli.CommandLine;
-041import 
org.apache.commons.cli.CommandLineParser;
-042import 
org.apache.commons.cli.HelpFormatter;
-043import org.apache.commons.cli.Option;
-044import 
org.apache.commons.cli.OptionGroup;
-045import org.apache.commons.cli.Options;
-046import 
org.apache.commons.cli.ParseException;
-047import 
org.apache.commons.cli.PosixParser;
-048import 
org.apache.commons.lang3.StringUtils;
-049import 
org.apache.hadoop.conf.Configuration;
-050import 
org.apache.hadoop.conf.Configured;
-051import org.apache.hadoop.fs.FileSystem;
-052import org.apache.hadoop.fs.Path;
-053import org.apache.hadoop.hbase.Cell;
-054import 
org.apache.hadoop.hbase.CellComparator;
-055import 
org.apache.hadoop.hbase.CellUtil;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.HBaseInterfaceAudience;
-058import 
org.apache.hadoop.hbase.HConstants;
-059import 
org.apache.hadoop.hbase.HRegionInfo;
-060import 
org.apache.hadoop.hbase.KeyValue;
-061import 
org.apache.hadoop.hbase.KeyValueUtil;
-062import 
org.apache.hadoop.hbase.PrivateCellUtil;
-063import 
org.apache.hadoop.hbase.TableName;
-064import org.apache.hadoop.hbase.Tag;
-065import 
org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
-066import 
org.apache.hadoop.hbase.io.hfile.HFile.FileInfo;
-067import 
org.apache.hadoop.hbase.mob.MobUtils;
-068import 
org.apache.hadoop.hbase.regionserver.HStoreFile;
-069import 
org.apache.hadoop.hbase.regionserver.TimeRangeTracker;
-070import 
org.apache.hadoop.hbase.util.BloomFilter;
-071import 
org.apache.hadoop.hbase.util.BloomFilterFactory;
-072import 
org.apache.hadoop.hbase.util.BloomFilterUtil;
-073import 
org.apache.hadoop.hbase.util.Bytes;
-074import 
org.apache.hadoop.hbase.util.FSUtils;
-075import 
org.apache.hadoop.hbase.util.HFileArchiveUtil;
-076import org.apache.hadoop.util.Tool;
-077import 
org.apache.hadoop.util.ToolRunner;
-078import 
org.apache.yetus.audience.InterfaceAudience;
-079import 
org.apache.yetus.audience.InterfaceStability;
-080import org.slf4j.Logger;
-081import org.slf4j.LoggerFactory;
-082
-083import 
com.codahale.metrics.ConsoleReporter;
-084import com.codahale.metrics.Counter;
-085import com.codahale.metrics.Gauge;
-086import com.codahale.metrics.Histogram;
-087import com.codahale.metrics.Meter;
-088import 
com.codahale.metrics.MetricFilter;
-089import 
com.codahale.metrics.MetricRegistry;
-090import 
com.codahale.metrics.ScheduledReporter;
-091import com.codahale.metrics.Snapshot;
-092import com.codahale.metrics.Timer;
-093
-094/**
-095 * Implements pretty-printing 
functionality for {@link HFile}s.
-096 */
-097@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
-098@InterfaceStability.Evolving
-099public class HFilePrettyPrinter extends 
Configured implements Tool {
-100
-101  private static final Logger LOG = 
LoggerFactory.getLogger(HFilePrettyPrinter.class);
-102
-103  private Options options = new 
Options();
-104
-105  private boolean verbose;
-106  private boolean printValue;
-107  private boolean printKey;
-108  private boolean shouldPrintMeta;
-109  private boolean printBlockIndex;
-110  private boolean printBlockHeaders;
-111  private boolean printStats;
-112  private boolean checkRow;
-113  private boolean checkFamily;
-114  private boolean isSeekToRow = false;
-115  private boolean checkMobIntegrity = 
false;
-116  private Map&lt;String, List&lt;Path&gt;&gt; mobFileLocations;
-117  private static final int 
FOUND_MOB_FILES_CACHE_CAPACITY = 50;
-118  private static final int 
MISSING_MOB_FILES_CACHE_CAPACITY = 20;
-119  private PrintStream out = System.out;
-120  private PrintStream err = System.err;
-121
-122  /**
-123   * The row which the user wants to 
specify and print all the KeyValues for.
-124   */
-125  private byte[] row = null;
-126
-127  private List&lt;Path&gt; files = new ArrayList&lt;&gt;();
-128  private int count;
-129
-130  private static final String FOUR_SPACES = "    ";
-131
-132  public HFilePrettyPrinter() {
-133super();
-134init();
-135  }
-136
-137  public HFilePrettyPrinter(Configuration 
conf) {
-138super(conf);
-139init();
-140  }
-141
-142  private void init() {
-143options.addOption("v", "verbose", 
false,
-144"Verbose output; 

[04/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.html
index 64b9ab5..30fe780 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.html
@@ -30,308 +30,309 @@
 022import java.io.IOException;
 023import java.util.List;
 024
-025import 
org.apache.commons.cli.CommandLine;
-026import 
org.apache.commons.cli.GnuParser;
-027import org.apache.commons.cli.Options;
-028import 
org.apache.commons.cli.ParseException;
-029import 
org.apache.hadoop.conf.Configuration;
-030import 
org.apache.hadoop.hbase.HConstants;
-031import 
org.apache.hadoop.hbase.LocalHBaseCluster;
-032import 
org.apache.hadoop.hbase.MasterNotRunningException;
-033import 
org.apache.hadoop.hbase.ZNodeClearer;
-034import 
org.apache.hadoop.hbase.ZooKeeperConnectionException;
-035import 
org.apache.hadoop.hbase.trace.TraceUtil;
-036import 
org.apache.yetus.audience.InterfaceAudience;
-037import 
org.apache.hadoop.hbase.client.Admin;
-038import 
org.apache.hadoop.hbase.client.Connection;
-039import 
org.apache.hadoop.hbase.client.ConnectionFactory;
-040import 
org.apache.hadoop.hbase.regionserver.HRegionServer;
-041import 
org.apache.hadoop.hbase.util.JVMClusterUtil;
-042import 
org.apache.hadoop.hbase.util.ServerCommandLine;
-043import 
org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
-044import 
org.apache.hadoop.hbase.zookeeper.ZKUtil;
-045import 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
-046import 
org.apache.zookeeper.KeeperException;
-047import org.slf4j.Logger;
-048import org.slf4j.LoggerFactory;
-049
-050@InterfaceAudience.Private
-051public class HMasterCommandLine extends 
ServerCommandLine {
-052  private static final Logger LOG = 
LoggerFactory.getLogger(HMasterCommandLine.class);
-053
-054  private static final String USAGE =
-055"Usage: Master [opts] 
start|stop|clear\n" +
-056" start  Start Master. If local mode, 
start Master and RegionServer in same JVM\n" +
-057" stop   Start cluster shutdown; 
Master signals RegionServer shutdown\n" +
-058" clear  Delete the master znode in 
ZooKeeper after a master crashes\n "+
-059" where [opts] are:\n" +
-060"   
--minRegionServers=servers   Minimum RegionServers needed to host user 
tables.\n" +
-061"   
--localRegionServers=servers " +
-062  "RegionServers to start in master 
process when in standalone mode.\n" +
-063"   --masters=servers 
   Masters to start in this process.\n" +
-064"   --backup   
Master should start in backup mode";
-065
-066  private final Class&lt;? extends HMaster&gt; masterClass;
-067
-068  public HMasterCommandLine(Class&lt;? extends HMaster&gt; masterClass) {
-069this.masterClass = masterClass;
-070  }
-071
-072  @Override
-073  protected String getUsage() {
-074return USAGE;
-075  }
-076
-077  @Override
-078  public int run(String args[]) throws 
Exception {
-079Options opt = new Options();
-080opt.addOption("localRegionServers", 
true,
-081  "RegionServers to start in master 
process when running standalone");
-082opt.addOption("masters", true, 
"Masters to start in this process");
-083opt.addOption("minRegionServers", 
true, "Minimum RegionServers needed to host user tables");
-084opt.addOption("backup", false, "Do 
not try to become HMaster until the primary fails");
-085
-086CommandLine cmd;
-087try {
-088  cmd = new GnuParser().parse(opt, 
args);
-089} catch (ParseException e) {
-090  LOG.error("Could not parse: ", 
e);
-091  usage(null);
-092  return 1;
-093}
-094
+025import 
org.apache.hadoop.conf.Configuration;
+026import 
org.apache.hadoop.hbase.HConstants;
+027import 
org.apache.hadoop.hbase.LocalHBaseCluster;
+028import 
org.apache.hadoop.hbase.MasterNotRunningException;
+029import 
org.apache.hadoop.hbase.ZNodeClearer;
+030import 
org.apache.hadoop.hbase.ZooKeeperConnectionException;
+031import 
org.apache.hadoop.hbase.trace.TraceUtil;
+032import 
org.apache.yetus.audience.InterfaceAudience;
+033import 
org.apache.hadoop.hbase.client.Admin;
+034import 
org.apache.hadoop.hbase.client.Connection;
+035import 
org.apache.hadoop.hbase.client.ConnectionFactory;
+036import 
org.apache.hadoop.hbase.regionserver.HRegionServer;
+037import 
org.apache.hadoop.hbase.util.JVMClusterUtil;
+038import 
org.apache.hadoop.hbase.util.ServerCommandLine;
+039import 
org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
+040import 
org.apache.hadoop.hbase.zookeeper.ZKUtil;
+041import 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+042import 
org.apache.zookeeper.KeeperException;
+043import org.slf4j.Logger;

[24/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.
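The run() method in the diff above registers its options and hands them to commons-cli's GnuParser, printing usage and returning 1 on a ParseException. A rough JDK-only sketch of that option shape follows; MasterArgsSketch and parseOpts are illustrative names, not part of HBase, and this is not the real commons-cli logic:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for the commons-cli parsing shown above. The real
// HMasterCommandLine uses org.apache.commons.cli.GnuParser; this JDK-only
// sketch just shows the "--key=value | --flag | command" shape of the CLI.
public class MasterArgsSketch {
    static Map<String, String> parseOpts(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        for (String a : args) {
            if (a.startsWith("--")) {
                int eq = a.indexOf('=');
                if (eq > 0) {
                    // valued option, e.g. --minRegionServers=2
                    opts.put(a.substring(2, eq), a.substring(eq + 1));
                } else {
                    // bare flag, e.g. --backup
                    opts.put(a.substring(2), "true");
                }
            } else {
                // positional command: start | stop | clear
                opts.put("command", a);
            }
        }
        return opts;
    }

    public static void main(String[] args) {
        System.out.println(parseOpts(new String[] {"--minRegionServers=2", "--backup", "start"}));
        // prints {minRegionServers=2, backup=true, command=start}
    }
}
```

The real parser additionally validates unknown options and option arity, which this sketch deliberately omits.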

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfigMap.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfigMap.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfigMap.html
index 1038256..89735e7 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfigMap.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfigMap.html
@@ -36,398 +36,399 @@
 028import java.util.List;
 029import java.util.Map;
 030
-031import 
org.apache.commons.collections4.iterators.UnmodifiableIterator;
-032import 
org.apache.hadoop.conf.Configuration;
-033import 
org.apache.hadoop.hbase.util.Bytes;
-034import 
org.apache.yetus.audience.InterfaceAudience;
-035
-036/**
-037 * Do a shallow merge of multiple KV 
configuration pools. This is a very useful
-038 * utility class to easily add per-object 
configurations in addition to wider
-039 * scope settings. This is different from 
Configuration.addResource()
-040 * functionality, which performs a deep 
merge and mutates the common data
-041 * structure.
-042 * &lt;p&gt;
-043 * The iterator on CompoundConfiguration 
is unmodifiable. Obtaining iterator is an expensive
-044 * operation.
-045 * &lt;p&gt;
-046 * For clarity: the shallow merge allows 
the user to mutate either of the
-047 * configuration objects and have changes 
reflected everywhere. In contrast to a
-048 * deep merge, that requires you to 
explicitly know all applicable copies to
-049 * propagate changes.
-050 * 
-051 * WARNING: The values set in the CompoundConfiguration do not handle Property variable
-052 * substitution.  However, if they are 
set in the underlying configuration substitutions are
-053 * done. 
-054 */
-055@InterfaceAudience.Private
-056public class CompoundConfiguration 
extends Configuration {
-057
-058  private Configuration mutableConf = 
null;
-059
-060  /**
-061   * Default Constructor. Initializes 
empty configuration
-062   */
-063  public CompoundConfiguration() {
-064  }
-065
-066  // Devs: these APIs are the same 
contract as their counterparts in
-067  // Configuration.java
-068  private interface ImmutableConfigMap extends Iterable&lt;Map.Entry&lt;String,String&gt;&gt; {
-069String get(String key);
-070String getRaw(String key);
-071Class&lt;?&gt; getClassByName(String name) throws ClassNotFoundException;
-072int size();
-073  }
-074
-075  private final List&lt;ImmutableConfigMap&gt; configs = new ArrayList&lt;&gt;();
-076
-077  static class ImmutableConfWrapper 
implements  ImmutableConfigMap {
-078   private final Configuration c;
-079
-080ImmutableConfWrapper(Configuration 
conf) {
-081  c = conf;
-082}
-083
-084@Override
-085public Iterator&lt;Map.Entry&lt;String,String&gt;&gt; iterator() {
-086  return c.iterator();
-087}
-088
-089@Override
-090public String get(String key) {
-091  return c.get(key);
-092}
-093
-094@Override
-095public String getRaw(String key) {
-096  return c.getRaw(key);
-097}
-098
-099@Override
-100public Class&lt;?&gt; getClassByName(String name)
-101throws ClassNotFoundException {
-102  return c.getClassByName(name);
-103}
-104
-105@Override
-106public int size() {
-107  return c.size();
-108}
-109
-110@Override
-111public String toString() {
-112  return c.toString();
-113}
-114  }
-115
-116  /**
-117   * If set has been called, it will 
create a mutableConf.  This converts the mutableConf to an
-118   * immutable one and resets it to allow 
a new mutable conf.  This is used when a new map or
-119   * conf is added to the compound 
configuration to preserve proper override semantics.
-120   */
-121  void freezeMutableConf() {
-122if (mutableConf == null) {
-123  // do nothing if there is no 
current mutableConf
-124  return;
-125}
-126
-127this.configs.add(0, new 
ImmutableConfWrapper(mutableConf));
-128mutableConf = null;
-129  }
-130
-131  /**
-132   * Add Hadoop Configuration object to 
config list.
-133   * The added configuration overrides 
the previous ones if there are name collisions.
-134   * @param conf configuration object
-135   * @return this, for builder pattern
-136   */
-137  public CompoundConfiguration add(final 
Configuration conf) {
-138freezeMutableConf();
-139
-140if (conf instanceof 
CompoundConfiguration) {
-141  this.configs.addAll(0, 
((CompoundConfiguration) conf).configs);
-142  return this;
-143}
-144// put new config at the front of the 
list (top priority)
-145this.configs.add(0, new 
ImmutableConfWrapper(conf));
-146return this;
-147  }
-148
-149  /**
-150   * Add Bytes map to config list. This 
map is generally
-151   * created by HTableDescriptor or 
HColumnDescriptor, but can be 

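The CompoundConfiguration javadoc quoted above distinguishes a shallow merge (later pools override earlier ones on collisions, and mutations to an underlying pool stay visible through the merged view) from Configuration.addResource()'s deep merge. A minimal sketch of that lookup order with plain JDK maps; ShallowMergeSketch is a made-up name, and the real class wraps Hadoop Configuration objects, not HashMaps:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: CompoundConfiguration wraps Hadoop Configuration
// objects, not plain maps. This mirrors just the shallow-merge lookup order.
public class ShallowMergeSketch {
    private final List<Map<String, String>> pools = new ArrayList<>();

    // Like CompoundConfiguration.add(): the newest pool goes to the front
    // of the list, so it wins on key collisions (builder-style return).
    public ShallowMergeSketch add(Map<String, String> pool) {
        pools.add(0, pool);
        return this;
    }

    // The first pool (highest priority) that knows the key answers.
    public String get(String key) {
        for (Map<String, String> pool : pools) {
            String v = pool.get(key);
            if (v != null) {
                return v;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>();
        base.put("a", "1");
        Map<String, String> override = new HashMap<>();
        override.put("a", "2");

        ShallowMergeSketch conf = new ShallowMergeSketch().add(base).add(override);
        System.out.println(conf.get("a")); // override wins: 2

        base.put("b", "3"); // mutate an underlying pool after merging
        System.out.println(conf.get("b")); // change visible through the view: 3
    }
}
```

That visibility of later mutations is exactly what a deep merge (copying entries at add time) would lose.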
[33/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.RegionEnvironment.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.RegionEnvironment.html
 
b/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.RegionEnvironment.html
index beb9075..5082701 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.RegionEnvironment.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.RegionEnvironment.html
@@ -126,7 +126,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-private static class RegionCoprocessorHost.RegionEnvironment
+private static class RegionCoprocessorHost.RegionEnvironment
 extends BaseEnvironment&lt;RegionCoprocessor&gt;
 implements RegionCoprocessorEnvironment
 Encapsulation of the environment of each coprocessor
@@ -303,7 +303,7 @@ implements 
 
 region
-private Region region
+private Region region
 
 
 
@@ -312,7 +312,7 @@ implements 
 
 sharedData
-https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in java.util.concurrent">ConcurrentMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object sharedData
+https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in java.util.concurrent">ConcurrentMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object sharedData
 
 
 
@@ -321,7 +321,7 @@ implements 
 
 metricRegistry
-private final MetricRegistry metricRegistry
+private final MetricRegistry metricRegistry
 
 
 
@@ -330,7 +330,7 @@ implements 
 
 services
-private final RegionServerServices services
+private final RegionServerServices services
 
 
 
@@ -347,7 +347,7 @@ implements 
 
 RegionEnvironment
-public RegionEnvironment(RegionCoprocessor impl,
+public RegionEnvironment(RegionCoprocessor impl,
  int priority,
  int seq,
  org.apache.hadoop.conf.Configuration conf,
@@ -376,7 +376,7 @@ implements 
 
 getRegion
-public Region getRegion()
+public Region getRegion()
 
 Specified by:
 getRegionin
 interfaceRegionCoprocessorEnvironment
@@ -391,7 +391,7 @@ implements 
 
 getOnlineRegions
-public OnlineRegions getOnlineRegions()
+public OnlineRegions getOnlineRegions()
 
 Specified by:
 getOnlineRegionsin
 interfaceRegionCoprocessorEnvironment
@@ -406,7 +406,7 @@ implements 
 
 getConnection
-public Connection getConnection()
+public Connection getConnection()
 Description copied from 
interface:RegionCoprocessorEnvironment
 Returns the hosts' Connection to the Cluster. Do not 
close! This is a shared connection
  with the hosting server. Throws https://docs.oracle.com/javase/8/docs/api/java/lang/UnsupportedOperationException.html?is-external=true;
 title="class or interface in 
java.lang">UnsupportedOperationException if you try to close
@@ -445,7 +445,7 @@ implements 
 
 createConnection
-public Connection createConnection(org.apache.hadoop.conf.Configuration conf)
+public Connection createConnection(org.apache.hadoop.conf.Configuration conf)
 throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 Description copied from 
interface:RegionCoprocessorEnvironment
 Creates a cluster connection using the passed Configuration.
@@ -481,7 +481,7 @@ implements 
 
 getServerName
-public ServerName getServerName()
+public ServerName getServerName()
 
 Specified by:
 getServerNamein
 interfaceRegionCoprocessorEnvironment
@@ -496,7 +496,7 @@ implements 
 
 shutdown
-public void shutdown()
+public void shutdown()
 Description copied from 
class:BaseEnvironment
 Clean up the environment
 
@@ -511,7 +511,7 @@ implements 
 
 getSharedData
-publichttps://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in java.util.concurrent">ConcurrentMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">ObjectgetSharedData()
+publichttps://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in 

[48/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html 
b/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html
index e50230e..701d135 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html
@@ -49,967 +49,968 @@
 041import java.util.concurrent.Future;
 042import java.util.concurrent.TimeUnit;
 043import 
java.util.concurrent.TimeoutException;
-044import 
org.apache.commons.cli.CommandLine;
-045import 
org.apache.hadoop.conf.Configuration;
-046import 
org.apache.hadoop.hbase.ClusterMetrics.Option;
-047import 
org.apache.hadoop.hbase.HBaseConfiguration;
-048import 
org.apache.hadoop.hbase.HConstants;
-049import 
org.apache.hadoop.hbase.ServerName;
-050import 
org.apache.hadoop.hbase.TableName;
-051import 
org.apache.hadoop.hbase.client.Admin;
-052import 
org.apache.hadoop.hbase.client.Connection;
-053import 
org.apache.hadoop.hbase.client.ConnectionFactory;
-054import 
org.apache.hadoop.hbase.client.Get;
-055import 
org.apache.hadoop.hbase.client.RegionInfo;
-056import 
org.apache.hadoop.hbase.client.Result;
-057import 
org.apache.hadoop.hbase.client.ResultScanner;
-058import 
org.apache.hadoop.hbase.client.Scan;
-059import 
org.apache.hadoop.hbase.client.Table;
-060import 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
-061import 
org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
-062import 
org.apache.hadoop.hbase.zookeeper.ZKWatcher;
-063import 
org.apache.yetus.audience.InterfaceAudience;
-064import org.slf4j.Logger;
-065import org.slf4j.LoggerFactory;
-066
-067/**
-068 * Tool for loading/unloading regions 
to/from given regionserver. This tool can be run from Command
-069 * line directly as a utility. Supports 
Ack/No Ack mode for loading/unloading operations. Ack mode
-070 * acknowledges if regions are online 
after movement while noAck mode is best effort mode that
-071 * improves performance but will still 
move on if region is stuck/not moved. Motivation behind noAck
-072 * mode being RS shutdown where even if a 
Region is stuck, upon shutdown master will move it
-073 * anyways. This can also be used by 
constructing an Object using the builder and then calling
-074 * {@link #load()} or {@link #unload()} 
methods for the desired operations.
-075 */
-076@InterfaceAudience.Public
-077public class RegionMover extends 
AbstractHBaseTool {
-078  public static final String 
MOVE_RETRIES_MAX_KEY = "hbase.move.retries.max";
-079  public static final String 
MOVE_WAIT_MAX_KEY = "hbase.move.wait.max";
-080  public static final String 
SERVERSTART_WAIT_MAX_KEY = "hbase.serverstart.wait.max";
-081  public static final int 
DEFAULT_MOVE_RETRIES_MAX = 5;
-082  public static final int 
DEFAULT_MOVE_WAIT_MAX = 60;
-083  public static final int 
DEFAULT_SERVERSTART_WAIT_MAX = 180;
-084  static final Logger LOG = 
LoggerFactory.getLogger(RegionMover.class);
-085  private RegionMoverBuilder rmbuilder;
-086  private boolean ack = true;
-087  private int maxthreads = 1;
-088  private int timeout;
-089  private String loadUnload;
-090  private String hostname;
-091  private String filename;
-092  private String excludeFile;
-093  private int port;
-094
-095  private RegionMover(RegionMoverBuilder 
builder) {
-096this.hostname = builder.hostname;
-097this.filename = builder.filename;
-098this.excludeFile = 
builder.excludeFile;
-099this.maxthreads = 
builder.maxthreads;
-100this.ack = builder.ack;
-101this.port = builder.port;
-102this.timeout = builder.timeout;
-103  }
-104
-105  private RegionMover() {
-106  }
-107
-108  /**
-109   * Builder for Region mover. Use the 
{@link #build()} method to create RegionMover object. Has
-110   * {@link #filename(String)}, {@link 
#excludeFile(String)}, {@link #maxthreads(int)},
-111   * {@link #ack(boolean)}, {@link 
#timeout(int)} methods to set the corresponding options
-112   */
-113  public static class RegionMoverBuilder 
{
-114private boolean ack = true;
-115private int maxthreads = 1;
-116private int timeout = 
Integer.MAX_VALUE;
-117private String hostname;
-118private String filename;
-119private String excludeFile = null;
-120String defaultDir = 
System.getProperty("java.io.tmpdir");
-121
-122private int port = 
HConstants.DEFAULT_REGIONSERVER_PORT;
-123
-124/**
-125 * @param hostname Hostname to unload 
regions from or load regions to. Can be either hostname
-126 * or hostname:port.
-127 */
-128public RegionMoverBuilder(String 
hostname) {
-129  String[] splitHostname = 
hostname.toLowerCase().split(":");
-130  this.hostname = splitHostname[0];
-131  if (splitHostname.length == 2) {
-132this.port = 
Integer.parseInt(splitHostname[1]);
-133  }
-134 

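The RegionMoverBuilder constructor quoted above accepts either "hostname" or "hostname:port", lower-casing the host and falling back to HConstants.DEFAULT_REGIONSERVER_PORT (16020) when no port is given. A self-contained sketch of just that parsing; HostPortSketch is an illustrative name, not an HBase class:

```java
// Mirrors the hostname handling in RegionMoverBuilder: split on ':',
// lower-case the host, and use the supplied default when no port is present.
public class HostPortSketch {
    static String[] parse(String hostname, int defaultPort) {
        String[] split = hostname.toLowerCase().split(":");
        String host = split[0];
        int port = (split.length == 2) ? Integer.parseInt(split[1]) : defaultPort;
        return new String[] { host, String.valueOf(port) };
    }

    public static void main(String[] args) {
        // 16020 is HBase's default RegionServer RPC port.
        String[] a = parse("RS1.example.org:16022", 16020);
        System.out.println(a[0] + " " + a[1]); // rs1.example.org 16022

        String[] b = parse("rs2.example.org", 16020);
        System.out.println(b[0] + " " + b[1]); // rs2.example.org 16020
    }
}
```

Note that, like the original, this sketch does not handle IPv6 literals, which contain colons of their own.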
[47/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/book.html
--
diff --git a/book.html b/book.html
index a35b472..2770027 100644
--- a/book.html
+++ b/book.html
@@ -138,7 +138,7 @@
 75. Storing Medium-sized Objects (MOB)
 
 
-Backup and Restore
+Backup and Restore
 
 76. Overview
 77. Terminology
@@ -227,78 +227,77 @@
 132. ZooKeeper
 133. Amazon EC2
 134. HBase and Hadoop version issues
-135. IPC Configuration 
Conflicts with Hadoop
-136. HBase and HDFS
-137. Running unit or integration tests
-138. Case Studies
-139. Cryptographic Features
-140. Operating System 
Specific Issues
-141. JDK Issues
+135. HBase and HDFS
+136. Running unit or integration tests
+137. Case Studies
+138. Cryptographic Features
+139. Operating System 
Specific Issues
+140. JDK Issues
 
 
 Apache HBase Case Studies
 
-142. Overview
-143. Schema Design
-144. Performance/Troubleshooting
+141. Overview
+142. Schema Design
+143. Performance/Troubleshooting
 
 
 Apache HBase Operational Management
 
-145. HBase Tools and Utilities
-146. Region Management
-147. Node Management
-148. HBase Metrics
-149. HBase Monitoring
-150. Cluster Replication
-151. Running 
Multiple Workloads On a Single Cluster
-152. HBase Backup
-153. HBase Snapshots
-154. Storing Snapshots in Microsoft Azure Blob 
Storage
-155. Capacity Planning and Region Sizing
-156. Table Rename
-157. RegionServer Grouping
-158. Region Normalizer
+144. HBase Tools and Utilities
+145. Region Management
+146. Node Management
+147. HBase Metrics
+148. HBase Monitoring
+149. Cluster Replication
+150. Running 
Multiple Workloads On a Single Cluster
+151. HBase Backup
+152. HBase Snapshots
+153. Storing Snapshots in Microsoft Azure Blob 
Storage
+154. Capacity Planning and Region Sizing
+155. Table Rename
+156. RegionServer Grouping
+157. Region Normalizer
 
 
 Building and Developing Apache HBase
 
-159. Getting Involved
-160. Apache HBase Repositories
-161. IDEs
-162. Building Apache HBase
-163. Releasing Apache HBase
-164. Voting on Release Candidates
-165. Generating the HBase Reference Guide
-166. Updating https://hbase.apache.org;>hbase.apache.org
-167. Tests
-168. Developer Guidelines
+158. Getting Involved
+159. Apache HBase Repositories
+160. IDEs
+161. Building Apache HBase
+162. Releasing Apache HBase
+163. Voting on Release Candidates
+164. Generating the HBase Reference Guide
+165. Updating https://hbase.apache.org;>hbase.apache.org
+166. Tests
+167. Developer Guidelines
 
 
 Unit Testing HBase Applications
 
-169. JUnit
-170. Mockito
-171. MRUnit
-172. 
Integration Testing with an HBase Mini-Cluster
+168. JUnit
+169. Mockito
+170. MRUnit
+171. 
Integration Testing with an HBase Mini-Cluster
 
 
 Protobuf in HBase
 
-173. Protobuf
+172. Protobuf
 
 
 ZooKeeper
 
-174. Using existing 
ZooKeeper ensemble
-175. SASL Authentication with ZooKeeper
+173. Using existing 
ZooKeeper ensemble
+174. SASL Authentication with ZooKeeper
 
 
 Community
 
-176. Decisions
-177. Community Roles
-178. Commit Message format
+175. Decisions
+176. Community Roles
+177. Commit Message format
 
 
 Appendix
@@ -308,7 +307,6 @@
 Appendix C: hbck In Depth
 Appendix D: Access Control Matrix
 Appendix E: Compression and Data Block Encoding In 
HBase
-179. Enable Data Block 
Encoding
 Appendix F: SQL over HBase
 Appendix G: YCSB
 Appendix H: HFile format
@@ -317,8 +315,8 @@
 Appendix K: HBase and the Apache Software 
Foundation
 Appendix L: Apache HBase Orca
 Appendix M: Enabling Dapper-like Tracing in 
HBase
-180. Client Modifications
-181. Tracing from HBase Shell
+178. Client Modifications
+179. Tracing from HBase Shell
 Appendix N: 0.95 RPC Specification
 
 
@@ -468,34 +466,6 @@ table, enable or disable the table, and start and stop 
HBase.
 
 Apart from downloading HBase, this procedure should take less than 10 
minutes.
 
-
-
-
-
-
-
-
-
-Prior to HBase 0.94.x, HBase expected the loopback IP address to be 
127.0.0.1.
-Ubuntu and some other distributions default to 127.0.1.1 and this will cause
-problems for you. See https://web-beta.archive.org/web/20140104070155/http://blog.devving.com/why-does-hbase-care-about-etchosts;>Why
 does HBase care about /etc/hosts? for detail
-
-
-The following /etc/hosts file works correctly for HBase 0.94.x and 
earlier, on Ubuntu. Use this as a template if you run into trouble.
-
-
-
-127.0.0.1 localhost
-127.0.0.1 ubuntu.ubuntu-domain ubuntu
-
-
-
-This issue has been fixed in hbase-0.96.0 and beyond.
-
-
-
-
-
 
 2.1. JDK Version Requirements
 
@@ -949,10 +919,10 @@ Running multiple HRegionServers on the same system can be 
useful for testing in
 The local-regionservers.sh command allows you to run multiple 
RegionServers.
 It works in a similar way to the local-master-backup.sh command, 
in that each parameter you provide represents the port offset for an instance.
 Each RegionServer requires two ports, and the default ports are 16020 and 
16030.
-However, the base ports for 

[01/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site c41a1fcb4 -> e0fb1fdea


http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/cleaner/CleanerChore.Action.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/cleaner/CleanerChore.Action.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/cleaner/CleanerChore.Action.html
index c878296..c14b16b 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/master/cleaner/CleanerChore.Action.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/master/cleaner/CleanerChore.Action.html
@@ -33,478 +33,540 @@
 025import java.util.Map;
 026import 
java.util.concurrent.ExecutionException;
 027import 
java.util.concurrent.ForkJoinPool;
-028import 
java.util.concurrent.RecursiveTask;
-029import 
java.util.concurrent.atomic.AtomicBoolean;
-030import 
org.apache.hadoop.conf.Configuration;
-031import org.apache.hadoop.fs.FileStatus;
-032import org.apache.hadoop.fs.FileSystem;
-033import org.apache.hadoop.fs.Path;
-034import 
org.apache.hadoop.hbase.ScheduledChore;
-035import 
org.apache.hadoop.hbase.Stoppable;
-036import 
org.apache.hadoop.hbase.conf.ConfigurationObserver;
-037import 
org.apache.hadoop.hbase.util.FSUtils;
-038import 
org.apache.hadoop.ipc.RemoteException;
-039import 
org.apache.yetus.audience.InterfaceAudience;
-040import org.slf4j.Logger;
-041import org.slf4j.LoggerFactory;
-042
-043import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-044import 
org.apache.hbase.thirdparty.com.google.common.base.Predicate;
-045import 
org.apache.hbase.thirdparty.com.google.common.collect.ImmutableSet;
-046import 
org.apache.hbase.thirdparty.com.google.common.collect.Iterables;
-047import 
org.apache.hbase.thirdparty.com.google.common.collect.Lists;
-048
-049/**
-050 * Abstract Cleaner that uses a chain of 
delegates to clean a directory of files
-051 * @param &lt;T&gt; Cleaner delegate class that is dynamically loaded from configuration
-052 */
-053@edu.umd.cs.findbugs.annotations.SuppressWarnings(value="ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD",
-054justification="TODO: Fix. It is wonky 
to have static pool initialized from instance")
-055@InterfaceAudience.Private
-056public abstract class CleanerChore&lt;T extends FileCleanerDelegate&gt; extends ScheduledChore
-057implements ConfigurationObserver {
-058
-059  private static final Logger LOG = 
LoggerFactory.getLogger(CleanerChore.class);
-060  private static final int 
AVAIL_PROCESSORS = Runtime.getRuntime().availableProcessors();
-061
-062  /**
-063   * If it is an integer and &gt;= 1, it would be the size;
-064   * if 0.0 &lt; size &lt;= 1.0, size would be available processors * size.
-065   * Pay attention that 1.0 is different 
from 1, former indicates it will use 100% of cores,
-066   * while latter will use only 1 thread 
for chore to scan dir.
-067   */
-068  public static final String 
CHORE_POOL_SIZE = "hbase.cleaner.scan.dir.concurrent.size";
-069  private static final String 
DEFAULT_CHORE_POOL_SIZE = "0.25";
-070
-071  // It may waste resources for each cleaner chore to own its pool,
-072  // so let's make pool for all cleaner 
chores.
-073  private static volatile ForkJoinPool 
CHOREPOOL;
-074  private static volatile int 
CHOREPOOLSIZE;
-075
-076  protected final FileSystem fs;
-077  private final Path oldFileDir;
-078  private final Configuration conf;
-079  protected final Map&lt;String, Object&gt; params;
-080  private final AtomicBoolean enabled = 
new AtomicBoolean(true);
-081  private final AtomicBoolean reconfig = 
new AtomicBoolean(false);
-082  protected List&lt;T&gt; cleanersChain;
-083
-084  public CleanerChore(String name, final 
int sleepPeriod, final Stoppable s, Configuration conf,
-085  FileSystem fs, Path 
oldFileDir, String confKey) {
-086this(name, sleepPeriod, s, conf, fs, 
oldFileDir, confKey, null);
-087  }
-088
-089  /**
-090   * @param name name of the chore being 
run
-091   * @param sleepPeriod the period of 
time to sleep between each run
-092   * @param s the stopper
-093   * @param conf configuration to use
-094   * @param fs handle to the FS
-095   * @param oldFileDir the path to the 
archived files
-096   * @param confKey configuration key for 
the classes to instantiate
-097   * @param params members could be used 
in cleaner
-098   */
-099  public CleanerChore(String name, final 
int sleepPeriod, final Stoppable s, Configuration conf,
-100  FileSystem fs, Path oldFileDir, 
String confKey, MapString, Object params) {
-101super(name, s, sleepPeriod);
-102this.fs = fs;
-103this.oldFileDir = oldFileDir;
-104this.conf = conf;
-105this.params = params;
-106initCleanerChain(confKey);
-107
-108if (CHOREPOOL == null) {
-109  String poolSize = 
conf.get(CHORE_POOL_SIZE, DEFAULT_CHORE_POOL_SIZE);
-110  CHOREPOOLSIZE = 
calculatePoolSize(poolSize);
-111 
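The removed javadoc above describes two ways to set `hbase.cleaner.scan.dir.concurrent.size`: an integer >= 1 is used as the pool size directly, while a fraction in (0.0, 1.0] scales the number of available processors. A minimal sketch of that sizing rule follows; the diff does not show `calculatePoolSize`'s real body, so the regex check and the fallback value here are assumptions:

```java
// Hypothetical sketch of the sizing rule described in the javadoc for
// "hbase.cleaner.scan.dir.concurrent.size". The regex and the fallback value
// are assumptions; the diff does not include calculatePoolSize's real body.
class PoolSizeCalc {

    static int calculatePoolSize(String poolSize, int availProcessors) {
        if (poolSize.matches("[1-9][0-9]*")) {
            // An integer >= 1 is used directly as the pool size.
            return Integer.parseInt(poolSize);
        }
        double frac = Double.parseDouble(poolSize);
        if (frac > 0.0 && frac <= 1.0) {
            // A fraction in (0.0, 1.0] scales the available processors;
            // keep at least one thread.
            return Math.max(1, (int) (availProcessors * frac));
        }
        return 1; // assumed fallback for out-of-range values
    }

    public static void main(String[] args) {
        System.out.println(calculatePoolSize("0.25", 8)); // 2
        System.out.println(calculatePoolSize("1.0", 8));  // 8 (100% of cores)
        System.out.println(calculatePoolSize("1", 8));    // 1 (a single thread)
    }
}
```

This also illustrates the javadoc's warning that "1.0" and "1" differ: the former takes the fraction branch (all cores), the latter the integer branch (one thread).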

[31/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html
index 85bb6c9..750a1f2 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html
@@ -704,20 +704,20 @@
 
 java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable)
 
+org.apache.hadoop.hbase.regionserver.ScanType
+org.apache.hadoop.hbase.regionserver.HRegion.FlushResult.Result
+org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.FactoryStorage
+org.apache.hadoop.hbase.regionserver.ChunkCreator.ChunkType
 org.apache.hadoop.hbase.regionserver.CompactingMemStore.IndexType
+org.apache.hadoop.hbase.regionserver.BloomType
 org.apache.hadoop.hbase.regionserver.FlushType
-org.apache.hadoop.hbase.regionserver.DefaultHeapMemoryTuner.StepDirection
-org.apache.hadoop.hbase.regionserver.ChunkCreator.ChunkType
-org.apache.hadoop.hbase.regionserver.ScannerContext.NextState
+org.apache.hadoop.hbase.regionserver.TimeRangeTracker.Type
 org.apache.hadoop.hbase.regionserver.Region.Operation
+org.apache.hadoop.hbase.regionserver.ScannerContext.NextState
 org.apache.hadoop.hbase.regionserver.MemStoreCompactionStrategy.Action
-org.apache.hadoop.hbase.regionserver.ScanType
-org.apache.hadoop.hbase.regionserver.BloomType
-org.apache.hadoop.hbase.regionserver.HRegion.FlushResult.Result
 org.apache.hadoop.hbase.regionserver.SplitLogWorker.TaskExecutor.Status
-org.apache.hadoop.hbase.regionserver.TimeRangeTracker.Type
 org.apache.hadoop.hbase.regionserver.ScannerContext.LimitScope
-org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.FactoryStorage
+org.apache.hadoop.hbase.regionserver.DefaultHeapMemoryTuner.StepDirection
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
 
b/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
index 858ccf6..3bd22b5 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
@@ -130,9 +130,9 @@
 
 java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable)
 
-org.apache.hadoop.hbase.regionserver.querymatcher.DeleteTracker.DeleteResult
-org.apache.hadoop.hbase.regionserver.querymatcher.StripeCompactionScanQueryMatcher.DropDeletesInOutput
 org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.MatchCode
+org.apache.hadoop.hbase.regionserver.querymatcher.StripeCompactionScanQueryMatcher.DropDeletesInOutput
+org.apache.hadoop.hbase.regionserver.querymatcher.DeleteTracker.DeleteResult
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.html 
b/devapidocs/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.html
index d7a6f7f..6b44a8e 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":6,"i1":6,"i2":6,"i3":6,"i4":6,"i5":6,"i6":6,"i7":6,"i8":6,"i9":6,"i10":6,"i11":6,"i12":6,"i13":6,"i14":6,"i15":6,"i16":6,"i17":6,"i18":6,"i19":6};
+var methods = 
{"i0":6,"i1":6,"i2":6,"i3":6,"i4":6,"i5":6,"i6":6,"i7":6,"i8":6,"i9":6,"i10":6,"i11":6,"i12":6,"i13":6,"i14":6,"i15":6,"i16":6,"i17":6,"i18":6,"i19":6,"i20":6};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],4:["t3","Abstract Methods"]};
 var altColor = 

[19/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceState.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceState.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceState.html
index d7aa8b1..98a45a0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceState.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceState.html
@@ -680,1330 +680,1333 @@
 672    }
 673    List<HRegionLocation> locations = new ArrayList<>();
 674    for (RegionInfo regionInfo : regions) {
-675      RegionLocations list = locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
-676      if (list != null) {
-677        for (HRegionLocation loc : list.getRegionLocations()) {
-678          if (loc != null) {
-679            locations.add(loc);
-680          }
-681        }
-682      }
-683    }
-684    return locations;
-685  }
-686
-687  @Override
-688  public HRegionLocation locateRegion(final TableName tableName, final byte[] row)
-689      throws IOException {
-690    RegionLocations locations = locateRegion(tableName, row, true, true);
-691    return locations == null ? null : locations.getRegionLocation();
-692  }
-693
-694  @Override
-695  public HRegionLocation relocateRegion(final TableName tableName, final byte[] row)
-696      throws IOException {
-697    RegionLocations locations =
-698      relocateRegion(tableName, row, RegionReplicaUtil.DEFAULT_REPLICA_ID);
-699    return locations == null ? null
-700      : locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
-701  }
-702
-703  @Override
-704  public RegionLocations relocateRegion(final TableName tableName,
-705      final byte [] row, int replicaId) throws IOException{
-706    // Since this is an explicit request not to use any caching, finding
-707    // disabled tables should not be desirable.  This will ensure that an exception is thrown when
-708    // the first time a disabled table is interacted with.
-709    if (!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
-710      throw new TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
-711    }
-712
-713    return locateRegion(tableName, row, false, true, replicaId);
-714  }
+675      if (!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+676        continue;
+677      }
+678      RegionLocations list = locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
+679      if (list != null) {
+680        for (HRegionLocation loc : list.getRegionLocations()) {
+681          if (loc != null) {
+682            locations.add(loc);
+683          }
+684        }
+685      }
+686    }
+687    return locations;
+688  }
+689
+690  @Override
+691  public HRegionLocation locateRegion(final TableName tableName, final byte[] row)
+692      throws IOException {
+693    RegionLocations locations = locateRegion(tableName, row, true, true);
+694    return locations == null ? null : locations.getRegionLocation();
+695  }
+696
+697  @Override
+698  public HRegionLocation relocateRegion(final TableName tableName, final byte[] row)
+699      throws IOException {
+700    RegionLocations locations =
+701      relocateRegion(tableName, row, RegionReplicaUtil.DEFAULT_REPLICA_ID);
+702    return locations == null ? null
+703      : locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
+704  }
+705
+706  @Override
+707  public RegionLocations relocateRegion(final TableName tableName,
+708      final byte [] row, int replicaId) throws IOException{
+709    // Since this is an explicit request not to use any caching, finding
+710    // disabled tables should not be desirable.  This will ensure that an exception is thrown when
+711    // the first time a disabled table is interacted with.
+712    if (!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
+713      throw new TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
+714    }
 715
-716  @Override
-717  public RegionLocations locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-718      boolean retry) throws IOException {
-719    return locateRegion(tableName, row, useCache, retry, RegionReplicaUtil.DEFAULT_REPLICA_ID);
-720  }
-721
-722  @Override
-723  public RegionLocations locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-724      boolean retry, int replicaId) throws IOException {
-725    checkClosed();
-726    if (tableName == null || tableName.getName().length == 0) {
-727      throw new IllegalArgumentException("table name cannot be null or zero length");
-728    }
-729    if
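The key behavioral change in the hunk above is the new `RegionReplicaUtil.isDefaultReplica` guard: secondary region replicas are now skipped when listing a table's locations. A self-contained sketch of that filtering step, using a hypothetical `RegionInfo` record in place of HBase's real classes:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the added guard: only default (replicaId == 0) regions are
// located. RegionInfo here is a hypothetical stand-in for HBase's class.
class ReplicaFilter {
    static final int DEFAULT_REPLICA_ID = 0;

    record RegionInfo(String startKey, int replicaId) {}

    static List<String> locateDefaultReplicas(List<RegionInfo> regions) {
        List<String> located = new ArrayList<>();
        for (RegionInfo ri : regions) {
            if (ri.replicaId() != DEFAULT_REPLICA_ID) {
                continue; // skip secondary replicas, as the added lines do
            }
            located.add(ri.startKey());
        }
        return located;
    }

    public static void main(String[] args) {
        List<RegionInfo> regions = List.of(
            new RegionInfo("a", 0), new RegionInfo("a", 1), new RegionInfo("m", 0));
        System.out.println(locateDefaultReplicas(regions)); // [a, m]
    }
}
```

Without the guard, each secondary replica would trigger its own location lookup for the same start key, duplicating work already covered by the default replica's `RegionLocations` entry.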

[23/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.html
index 1038256..89735e7 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.html
@@ -36,398 +36,399 @@
 028import java.util.List;
 029import java.util.Map;
 030
-031import org.apache.commons.collections4.iterators.UnmodifiableIterator;
-032import org.apache.hadoop.conf.Configuration;
-033import org.apache.hadoop.hbase.util.Bytes;
-034import org.apache.yetus.audience.InterfaceAudience;
-035
-036/**
-037 * Do a shallow merge of multiple KV configuration pools. This is a very useful
-038 * utility class to easily add per-object configurations in addition to wider
-039 * scope settings. This is different from Configuration.addResource()
-040 * functionality, which performs a deep merge and mutates the common data
-041 * structure.
-042 * <p>
-043 * The iterator on CompoundConfiguration is unmodifiable. Obtaining iterator is an expensive
-044 * operation.
-045 * <p>
-046 * For clarity: the shallow merge allows the user to mutate either of the
-047 * configuration objects and have changes reflected everywhere. In contrast to a
-048 * deep merge, that requires you to explicitly know all applicable copies to
-049 * propagate changes.
-050 *
-051 * WARNING: The values set in the CompoundConfiguration are do not handle Property variable
-052 * substitution.  However, if they are set in the underlying configuration substitutions are
-053 * done.
-054 */
-055@InterfaceAudience.Private
-056public class CompoundConfiguration extends Configuration {
-057
-058  private Configuration mutableConf = null;
-059
-060  /**
-061   * Default Constructor. Initializes empty configuration
-062   */
-063  public CompoundConfiguration() {
-064  }
-065
-066  // Devs: these APIs are the same contract as their counterparts in
-067  // Configuration.java
-068  private interface ImmutableConfigMap extends Iterable<Map.Entry<String,String>> {
-069    String get(String key);
-070    String getRaw(String key);
-071    Class<?> getClassByName(String name) throws ClassNotFoundException;
-072    int size();
-073  }
-074
-075  private final List<ImmutableConfigMap> configs = new ArrayList<>();
-076
-077  static class ImmutableConfWrapper implements ImmutableConfigMap {
-078    private final Configuration c;
-079
-080    ImmutableConfWrapper(Configuration conf) {
-081      c = conf;
-082    }
-083
-084    @Override
-085    public Iterator<Map.Entry<String,String>> iterator() {
-086      return c.iterator();
-087    }
-088
-089    @Override
-090    public String get(String key) {
-091      return c.get(key);
-092    }
-093
-094    @Override
-095    public String getRaw(String key) {
-096      return c.getRaw(key);
-097    }
-098
-099    @Override
-100    public Class<?> getClassByName(String name)
-101        throws ClassNotFoundException {
-102      return c.getClassByName(name);
-103    }
-104
-105    @Override
-106    public int size() {
-107      return c.size();
-108    }
-109
-110    @Override
-111    public String toString() {
-112      return c.toString();
-113    }
-114  }
-115
-116  /**
-117   * If set has been called, it will create a mutableConf.  This converts the mutableConf to an
-118   * immutable one and resets it to allow a new mutable conf.  This is used when a new map or
-119   * conf is added to the compound configuration to preserve proper override semantics.
-120   */
-121  void freezeMutableConf() {
-122    if (mutableConf == null) {
-123      // do nothing if there is no current mutableConf
-124      return;
-125    }
-126
-127    this.configs.add(0, new ImmutableConfWrapper(mutableConf));
-128    mutableConf = null;
-129  }
-130
-131  /**
-132   * Add Hadoop Configuration object to config list.
-133   * The added configuration overrides the previous ones if there are name collisions.
-134   * @param conf configuration object
-135   * @return this, for builder pattern
-136   */
-137  public CompoundConfiguration add(final Configuration conf) {
-138    freezeMutableConf();
-139
-140    if (conf instanceof CompoundConfiguration) {
-141      this.configs.addAll(0, ((CompoundConfiguration) conf).configs);
-142      return this;
-143    }
-144    // put new config at the front of the list (top priority)
-145    this.configs.add(0, new ImmutableConfWrapper(conf));
-146    return this;
-147  }
-148
-149  /**
-150   * Add Bytes map to config list. This map is generally
-151   * created by HTableDescriptor or HColumnDescriptor, but can be abstractly
-152   * used. The added configuration overrides the previous ones if there are
-153   * name
[41/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DeleteCommand.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DeleteCommand.html
 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DeleteCommand.html
index 57aa892..96d3860 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DeleteCommand.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DeleteCommand.html
@@ -162,8 +162,8 @@ extends Constructor and Description
 
 
-DeleteCommand(org.apache.hadoop.conf.Configuration conf,
-    org.apache.commons.cli.CommandLine cmdline)
+DeleteCommand(org.apache.hadoop.conf.Configuration conf,
+    org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmdline)
 
 
 
@@ -230,14 +230,14 @@ extends 
+
 
 
 
 
 DeleteCommand
DeleteCommand(org.apache.hadoop.conf.Configuration conf,
-    org.apache.commons.cli.CommandLine cmdline)
+    org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmdline)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DescribeCommand.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DescribeCommand.html
 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DescribeCommand.html
index 5de92de..dcfda4e 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DescribeCommand.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.DescribeCommand.html
@@ -162,8 +162,8 @@ extends Constructor and Description
 
 
-DescribeCommand(org.apache.hadoop.conf.Configuration conf,
-    org.apache.commons.cli.CommandLine cmdline)
+DescribeCommand(org.apache.hadoop.conf.Configuration conf,
+    org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmdline)
 
 
 
@@ -224,14 +224,14 @@ extends 
+
 
 
 
 
 DescribeCommand
DescribeCommand(org.apache.hadoop.conf.Configuration conf,
-    org.apache.commons.cli.CommandLine cmdline)
+    org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmdline)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HelpCommand.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HelpCommand.html
 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HelpCommand.html
index b9a6076..769fc54 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HelpCommand.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HelpCommand.html
@@ -162,8 +162,8 @@ extends Constructor and Description
 
 
-HelpCommand(org.apache.hadoop.conf.Configuration conf,
-    org.apache.commons.cli.CommandLine cmdline)
+HelpCommand(org.apache.hadoop.conf.Configuration conf,
+    org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmdline)
 
 
 
@@ -224,14 +224,14 @@ extends 
+
 
 
 
 
 HelpCommand
HelpCommand(org.apache.hadoop.conf.Configuration conf,
-    org.apache.commons.cli.CommandLine cmdline)
+    org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmdline)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HistoryCommand.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HistoryCommand.html
 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HistoryCommand.html
index d653bda..89762c6 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HistoryCommand.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.HistoryCommand.html
@@ -173,8 +173,8 @@ extends Constructor and Description
 
 
-HistoryCommand(org.apache.hadoop.conf.Configuration conf,
-    org.apache.commons.cli.CommandLine cmdline)
+HistoryCommand(org.apache.hadoop.conf.Configuration conf,
+    org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmdline)
 
 
 
@@ -272,14 +272,14 @@ extends 
+
 
 
 
 
 HistoryCommand
HistoryCommand(org.apache.hadoop.conf.Configuration conf,
-    org.apache.commons.cli.CommandLine cmdline)
+    org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmdline)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupCommands.MergeCommand.html
--
diff 

[08/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.MasterStoppedException.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.MasterStoppedException.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.MasterStoppedException.html
index 79bf967..c8b113b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.MasterStoppedException.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.MasterStoppedException.html
@@ -115,3514 +115,3517 @@
 107import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 108import org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
 109import org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory;
-110import org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
-111import org.apache.hadoop.hbase.master.cleaner.LogCleaner;
-112import org.apache.hadoop.hbase.master.cleaner.ReplicationBarrierCleaner;
-113import org.apache.hadoop.hbase.master.locking.LockManager;
-114import org.apache.hadoop.hbase.master.normalizer.NormalizationPlan;
-115import org.apache.hadoop.hbase.master.normalizer.NormalizationPlan.PlanType;
-116import org.apache.hadoop.hbase.master.normalizer.RegionNormalizer;
-117import org.apache.hadoop.hbase.master.normalizer.RegionNormalizerChore;
-118import org.apache.hadoop.hbase.master.normalizer.RegionNormalizerFactory;
-119import org.apache.hadoop.hbase.master.procedure.CreateTableProcedure;
-120import org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure;
-121import org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure;
-122import org.apache.hadoop.hbase.master.procedure.DisableTableProcedure;
-123import org.apache.hadoop.hbase.master.procedure.EnableTableProcedure;
-124import org.apache.hadoop.hbase.master.procedure.MasterProcedureConstants;
-125import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv;
-126import org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler;
-127import org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
-128import org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure;
-129import org.apache.hadoop.hbase.master.procedure.ProcedurePrepareLatch;
-130import org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure;
-131import org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure;
-132import org.apache.hadoop.hbase.master.replication.AddPeerProcedure;
-133import org.apache.hadoop.hbase.master.replication.DisablePeerProcedure;
-134import org.apache.hadoop.hbase.master.replication.EnablePeerProcedure;
-135import org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure;
-136import org.apache.hadoop.hbase.master.replication.RemovePeerProcedure;
-137import org.apache.hadoop.hbase.master.replication.ReplicationPeerManager;
-138import org.apache.hadoop.hbase.master.replication.UpdatePeerConfigProcedure;
-139import org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
-140import org.apache.hadoop.hbase.mob.MobConstants;
-141import org.apache.hadoop.hbase.monitoring.MemoryBoundedLogMessageBuffer;
-142import org.apache.hadoop.hbase.monitoring.MonitoredTask;
-143import org.apache.hadoop.hbase.monitoring.TaskMonitor;
-144import org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost;
-145import org.apache.hadoop.hbase.procedure.flush.MasterFlushTableProcedureManager;
-146import org.apache.hadoop.hbase.procedure2.LockedResource;
-147import org.apache.hadoop.hbase.procedure2.Procedure;
-148import org.apache.hadoop.hbase.procedure2.ProcedureEvent;
-149import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
-150import org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher.RemoteProcedure;
-151import org.apache.hadoop.hbase.procedure2.RemoteProcedureException;
-152import org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore;
-153import org.apache.hadoop.hbase.quotas.MasterQuotaManager;
-154import org.apache.hadoop.hbase.quotas.MasterSpaceQuotaObserver;
-155import org.apache.hadoop.hbase.quotas.QuotaObserverChore;
-156import org.apache.hadoop.hbase.quotas.QuotaUtil;
-157import org.apache.hadoop.hbase.quotas.SnapshotQuotaObserverChore;
-158import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotNotifier;
-159import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotNotifierFactory;
-160import org.apache.hadoop.hbase.regionserver.DefaultStoreEngine;
-161import org.apache.hadoop.hbase.regionserver.HRegionServer;
-162import org.apache.hadoop.hbase.regionserver.HStore;
-163import org.apache.hadoop.hbase.regionserver.RSRpcServices;
-164import org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost;
-165import org.apache.hadoop.hbase.regionserver.RegionSplitPolicy;
-166import org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy;
-167import
[44/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/checkstyle.rss
--
diff --git a/checkstyle.rss b/checkstyle.rss
index 9137e54..ba8ce97 100644
--- a/checkstyle.rss
+++ b/checkstyle.rss
@@ -26,7 +26,7 @@ under the License.
 2007 - 2018 The Apache Software Foundation
 
   File: 3595,
- Errors: 15919,
+ Errors: 15918,
  Warnings: 0,
  Infos: 0
   
@@ -1735,7 +1735,7 @@ under the License.
   0
 
 
-  8
+  7
 
   
   
@@ -20789,7 +20789,7 @@ under the License.
   0
 
 
-  2
+  3
 
   
   
@@ -26767,7 +26767,7 @@ under the License.
   0
 
 
-  34
+  33
 
   
   

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/coc.html
--
diff --git a/coc.html b/coc.html
index 26000ea..bbd2f25 100644
--- a/coc.html
+++ b/coc.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  
   Code of Conduct Policy
@@ -368,7 +368,7 @@ email to mailto:priv...@hbase.apache.org;>the priv
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-03-26
+  Last Published: 
2018-03-27
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/cygwin.html
--
diff --git a/cygwin.html b/cygwin.html
index a721a22..febe7df 100644
--- a/cygwin.html
+++ b/cygwin.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Installing Apache HBase (TM) on Windows using 
Cygwin
 
@@ -667,7 +667,7 @@ Now your HBase server is running, start 
coding and build that next
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-03-26
+  Last Published: 
2018-03-27
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/dependencies.html
--
diff --git a/dependencies.html b/dependencies.html
index fb2e529..982e932 100644
--- a/dependencies.html
+++ b/dependencies.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Project Dependencies
 
@@ -433,7 +433,7 @@
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-03-26
+  Last Published: 
2018-03-27
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/dependency-convergence.html
--
diff --git a/dependency-convergence.html b/dependency-convergence.html
index e1351ee..625156f 100644
--- a/dependency-convergence.html
+++ b/dependency-convergence.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Reactor Dependency Convergence
 
@@ -278,10 +278,10 @@
 42
 
 Number of dependencies (NOD):
-315
+314
 
 Number of unique artifacts (NOA):
-351
+350
 
 Number of version-conflicting artifacts (NOC):
 23
@@ -1098,7 +1098,7 @@
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-03-26
+  Last Published: 
2018-03-27
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/dependency-info.html
--
diff --git a/dependency-info.html b/dependency-info.html
index 02dd0de..15a9c7b 100644
--- a/dependency-info.html
+++ b/dependency-info.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Dependency Information
 
@@ -306,7 +306,7 @@
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-03-26
+  Last Published: 
2018-03-27
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/dependency-management.html
--
diff --git a/dependency-management.html 

[43/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/index-all.html
--
diff --git a/devapidocs/index-all.html b/devapidocs/index-all.html
index 3df5466..35c8a90 100644
--- a/devapidocs/index-all.html
+++ b/devapidocs/index-all.html
@@ -1985,7 +1985,7 @@
 
 Add a remote rpc.
 
-addOption(Option)
 - Method in class org.apache.hadoop.hbase.util.AbstractHBaseTool
+addOption(Option)
 - Method in class org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 addOptions()
 - Method in class org.apache.hadoop.hbase.backup.BackupDriver
 
@@ -2242,7 +2242,7 @@
 
 addReplicationPeerAsync(String,
 ReplicationPeerConfig, boolean) - Method in class 
org.apache.hadoop.hbase.client.HBaseAdmin
 
-addRequiredOption(Option)
 - Method in class org.apache.hadoop.hbase.util.AbstractHBaseTool
+addRequiredOption(Option)
 - Method in class org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 addRequiredOptWithArg(String,
 String) - Method in class org.apache.hadoop.hbase.util.AbstractHBaseTool
 
@@ -5046,7 +5046,7 @@
 
BackupSet(String, List<TableName>) - Constructor for class org.apache.hadoop.hbase.backup.util.BackupSet
 
-BackupSetCommand(Configuration,
 CommandLine) - Constructor for class 
org.apache.hadoop.hbase.backup.impl.BackupCommands.BackupSetCommand
+BackupSetCommand(Configuration,
 CommandLine) - Constructor for class 
org.apache.hadoop.hbase.backup.impl.BackupCommands.BackupSetCommand
 
 backupSetName
 - Variable in class org.apache.hadoop.hbase.backup.BackupRequest
 
@@ -9054,6 +9054,10 @@
 
 canUpdate(HRegionLocation,
 HRegionLocation) - Static method in class 
org.apache.hadoop.hbase.client.AsyncRegionLocator
 
+canUpdateImmediately(Configuration)
 - Method in class org.apache.hadoop.hbase.master.cleaner.CleanerChore.DirScanPool
+
+Checks if pool can be updated immediately.
+
 canUpdateTableDescriptor()
 - Method in class org.apache.hadoop.hbase.master.HMaster
 
 canUpdateTableDescriptor()
 - Method in class org.apache.hadoop.hbase.regionserver.HRegionServer
@@ -10240,7 +10244,7 @@
 
 checkAnyLimitReached(ScannerContext.LimitScope)
 - Method in class org.apache.hadoop.hbase.regionserver.ScannerContext
 
-checkArguments(CommandLine)
 - Method in class org.apache.hadoop.hbase.thrift2.ThriftServer
+checkArguments(CommandLine)
 - Method in class org.apache.hadoop.hbase.thrift2.ThriftServer
 
 checkArrayBounds(byte[],
 int, int) - Method in class org.apache.hadoop.hbase.IndividualBytesFieldCell
 
@@ -11135,10 +11139,6 @@
 
 choreForTesting()
 - Method in class org.apache.hadoop.hbase.ScheduledChore
 
-CHOREPOOL
 - Static variable in class org.apache.hadoop.hbase.master.cleaner.CleanerChore
-
-CHOREPOOLSIZE
 - Static variable in class org.apache.hadoop.hbase.master.cleaner.CleanerChore
-
 ChoreService - 
Class in org.apache.hadoop.hbase
 
 ChoreService is a service that can be used to schedule 
instances of ScheduledChore to run
@@ -11398,6 +11398,8 @@
 
 CleanerChore.CleanerTask - Class in org.apache.hadoop.hbase.master.cleaner
 
+CleanerChore.DirScanPool - Class in org.apache.hadoop.hbase.master.cleaner
+
 cleanerChoreSwitch(boolean)
 - Method in interface org.apache.hadoop.hbase.client.Admin
 
 Enable/Disable the cleaner chore.
@@ -11414,6 +11416,8 @@
 
 CleanerContext(FileStatus)
 - Constructor for class org.apache.hadoop.hbase.master.cleaner.LogCleaner.CleanerContext
 
+cleanerLatch
 - Variable in class org.apache.hadoop.hbase.master.cleaner.CleanerChore.DirScanPool
+
 cleanersChain
 - Variable in class org.apache.hadoop.hbase.master.cleaner.CleanerChore
 
 CleanerTask(FileStatus,
 boolean) - Constructor for class 
org.apache.hadoop.hbase.master.cleaner.CleanerChore.CleanerTask
@@ -14669,6 +14673,8 @@
 
 compare(RegionInfo,
 RegionInfo) - Method in class org.apache.hadoop.hbase.master.CatalogJanitor.SplitParentFirstComparator
 
+compare(NormalizationPlan,
 NormalizationPlan) - Method in class 
org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer.PlanComparator
+
 compare(RegionPlan,
 RegionPlan) - Method in class org.apache.hadoop.hbase.master.RegionPlan.RegionPlanComparator
 
 compare(PartitionedMobCompactionRequest.CompactionDelPartition,
 PartitionedMobCompactionRequest.CompactionDelPartition) - Method in 
class org.apache.hadoop.hbase.mob.compactions.PartitionedMobCompactor.DelPartitionComparator
@@ -14694,7 +14700,7 @@
 
 compare(Path,
 Path) - Method in class 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.LogsComparator
 
-compare(Option,
 Option) - Method in class org.apache.hadoop.hbase.util.AbstractHBaseTool.OptionsOrderComparator
+compare(Option,
 Option) - Method in class org.apache.hadoop.hbase.util.AbstractHBaseTool.OptionsOrderComparator
 
 compare(byte[],
 byte[]) - Method in class org.apache.hadoop.hbase.util.Bytes.ByteArrayComparator
 
@@ -18958,9 +18964,9 @@
 Create a 'smarter' Connection, one that is capable of 
by-passing RPC if the request is to
  the local server; i.e.
 

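The index above lists ChoreService, described as "a service that can be used to schedule instances of ScheduledChore to run". The idea can be sketched with plain java.util.concurrent; ChoreSketch, runChore, and the timing values below are illustrative stand-ins, not HBase's actual ChoreService implementation.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stand-in for the ChoreService idea: a scheduler that repeatedly
// runs a small maintenance task ("chore") at a fixed period.
public class ChoreSketch {
    // Runs a counting chore at the given period and reports how often it fired.
    static int runChore(long periodMs, long durationMs) throws InterruptedException {
        ScheduledExecutorService service = Executors.newScheduledThreadPool(1);
        AtomicInteger runs = new AtomicInteger();
        ScheduledFuture<?> chore = service.scheduleAtFixedRate(
            runs::incrementAndGet, 0, periodMs, TimeUnit.MILLISECONDS);
        Thread.sleep(durationMs);
        chore.cancel(false);          // analogous to cancelling a ScheduledChore
        service.shutdown();
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With a 50 ms period over 250 ms, the chore fires several times.
        System.out.println(runChore(50, 250) >= 2);
    }
}
```

The real ChoreService adds bookkeeping (per-chore state, pool resizing) on top of this basic scheduled-executor pattern.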
[29/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/snapshot/CreateSnapshot.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/snapshot/CreateSnapshot.html 
b/devapidocs/org/apache/hadoop/hbase/snapshot/CreateSnapshot.html
index 65ae57d..4039f9d 100644
--- a/devapidocs/org/apache/hadoop/hbase/snapshot/CreateSnapshot.html
+++ b/devapidocs/org/apache/hadoop/hbase/snapshot/CreateSnapshot.html
@@ -119,7 +119,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class CreateSnapshot
+public class CreateSnapshot
 extends AbstractHBaseTool
 This is a command line class that will snapshot a given 
table.
 
@@ -210,7 +210,7 @@ extends 
 protected void
-processOptions(org.apache.commons.cli.CommandLinecmd)
+processOptions(org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLinecmd)
 This method is called to process the options after they 
have been parsed.
 
 
@@ -220,7 +220,7 @@ extends AbstractHBaseTool
-addOption,
 addOptNoArg,
 addOptNoArg,
 addOptWithArg,
 addOptWithArg,
 addRequiredOption,
 addRequiredOptWithArg, addRequiredOptWithArg,
 doStaticMain,
 getConf,
 getOptionAsDouble,
 getOptionAsInt,
 getOptionAsLong,
 parseArgs,
 parseInt,
 parseLong,
 printUsage,
 printUsage,
 processOldArgs,
 run,
 setConf
 
+addOption,
 addOptNoArg,
 addOptNoArg,
 addOptWithArg,
 addOptWithArg,
 addRequiredOption,
 addRequiredOptWithArg,
 addRequiredOptWithArg,
 doStaticMain,
 getConf,
 getOptionAsDouble,
 getOptionAsInt,
 getOptionAsLong,
 parseArgs,
 parseInt,
 parseLong,
 printUsage,
 printUsage,
 processOldArgs,
 run
 , setConf
 
 
 
@@ -249,7 +249,7 @@ extends 
 
 snapshotType
-private SnapshotType snapshotType
+private SnapshotType snapshotType
 
 
 
@@ -258,7 +258,7 @@ extends 
 
 tableName
-private TableName tableName
+private TableName tableName
 
 
 
@@ -267,7 +267,7 @@ extends 
 
 snapshotName
-privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String snapshotName
+privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String snapshotName
 
 
 
@@ -284,7 +284,7 @@ extends 
 
 CreateSnapshot
-public CreateSnapshot()
+public CreateSnapshot()
 
 
 
@@ -301,7 +301,7 @@ extends 
 
 main
-public staticvoidmain(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String[]args)
+public staticvoidmain(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String[]args)
 
 
 
@@ -310,7 +310,7 @@ extends 
 
 addOptions
-protected void addOptions()
+protected void addOptions()
 Description copied from 
class:AbstractHBaseTool
 Override this to add command-line options using AbstractHBaseTool.addOptWithArg(java.lang.String,
 java.lang.String)
  and similar methods.
@@ -320,18 +320,18 @@ extends 
+
 
 
 
 
 processOptions
-protected void processOptions(org.apache.commons.cli.CommandLine cmd)
-Description copied from 
class:AbstractHBaseTool
+protected void processOptions(org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmd)
+Description copied from 
class:AbstractHBaseTool
 This method is called to process the options after they 
have been parsed.
 
 Specified by:
-processOptionsin
 classAbstractHBaseTool
+processOptionsin
 classAbstractHBaseTool
 
 
 
@@ -341,7 +341,7 @@ extends 
 
 doWork
-protected int doWork()
+protected int doWork()
   throws https://docs.oracle.com/javase/8/docs/api/java/lang/Exception.html?is-external=true;
 title="class or interface in java.lang">Exception
 Description copied from 
class:AbstractHBaseTool
 The "main function" of the tool

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.Options.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.Options.html 
b/devapidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.Options.html
index a7b6b6b..3ab7f5d 100644
--- a/devapidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.Options.html
+++ b/devapidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.Options.html
@@ -128,51 +128,51 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 Field and Description
 
 
-(package private) static 
org.apache.commons.cli.Option
+(package private) static 
org.apache.hbase.thirdparty.org.apache.commons.cli.Option
 BANDWIDTH
 
 
-(package private) static 
org.apache.commons.cli.Option
+(package private) static 
org.apache.hbase.thirdparty.org.apache.commons.cli.Option
 CHGROUP
 
 
-(package private) static 
org.apache.commons.cli.Option
+(package private) static 
org.apache.hbase.thirdparty.org.apache.commons.cli.Option
 CHMOD
 
 

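The CreateSnapshot page above shows the AbstractHBaseTool lifecycle: addOptions() declares the tool's flags, processOptions(cmd) reads values from the parsed command line, and doWork() is the "main function" of the tool. A rough sketch of that flow, using a plain-Java stand-in — ToolSketch and its `--key value` parsing are invented for illustration and do not use the (now shaded) commons-cli API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the AbstractHBaseTool lifecycle:
// addOptions() -> processOptions() -> doWork().
public class ToolSketch {
    final Map<String, String> options = new HashMap<>();

    // addOptions(): declare which flags the tool understands.
    void addOptions() {
        options.put("snapshot", null);
        options.put("table", null);
    }

    // processOptions(cmd): pull parsed values out of the command line.
    void processOptions(String[] args) {
        for (int i = 0; i + 1 < args.length; i += 2) {
            String key = args[i].replaceFirst("^--", "");
            if (options.containsKey(key)) {
                options.put(key, args[i + 1]);
            }
        }
    }

    // doWork(): the "main function" of the tool; returns an exit code.
    int doWork() {
        return options.get("snapshot") != null && options.get("table") != null ? 0 : 1;
    }

    public static void main(String[] args) {
        ToolSketch tool = new ToolSketch();
        tool.addOptions();
        tool.processOptions(new String[] {"--snapshot", "snap1", "--table", "t1"});
        System.out.println(tool.doWork()); // prints 0 (success)
    }
}
```

The diffs in this chunk only swap the commons-cli package prefix to the relocated `org.apache.hbase.thirdparty` namespace; the lifecycle itself is unchanged.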
[09/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.InitializationMonitor.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.InitializationMonitor.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.InitializationMonitor.html
index 79bf967..c8b113b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.InitializationMonitor.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.InitializationMonitor.html
@@ -115,3514 +115,3517 @@
 107import 
org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 108import 
org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
 109import 
org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory;
-110import 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
-111import 
org.apache.hadoop.hbase.master.cleaner.LogCleaner;
-112import 
org.apache.hadoop.hbase.master.cleaner.ReplicationBarrierCleaner;
-113import 
org.apache.hadoop.hbase.master.locking.LockManager;
-114import 
org.apache.hadoop.hbase.master.normalizer.NormalizationPlan;
-115import 
org.apache.hadoop.hbase.master.normalizer.NormalizationPlan.PlanType;
-116import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizer;
-117import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizerChore;
-118import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizerFactory;
-119import 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure;
-120import 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure;
-121import 
org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure;
-122import 
org.apache.hadoop.hbase.master.procedure.DisableTableProcedure;
-123import 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure;
-124import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureConstants;
-125import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv;
-126import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler;
-127import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
-128import 
org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure;
-129import 
org.apache.hadoop.hbase.master.procedure.ProcedurePrepareLatch;
-130import 
org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure;
-131import 
org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure;
-132import 
org.apache.hadoop.hbase.master.replication.AddPeerProcedure;
-133import 
org.apache.hadoop.hbase.master.replication.DisablePeerProcedure;
-134import 
org.apache.hadoop.hbase.master.replication.EnablePeerProcedure;
-135import 
org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure;
-136import 
org.apache.hadoop.hbase.master.replication.RemovePeerProcedure;
-137import 
org.apache.hadoop.hbase.master.replication.ReplicationPeerManager;
-138import 
org.apache.hadoop.hbase.master.replication.UpdatePeerConfigProcedure;
-139import 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
-140import 
org.apache.hadoop.hbase.mob.MobConstants;
-141import 
org.apache.hadoop.hbase.monitoring.MemoryBoundedLogMessageBuffer;
-142import 
org.apache.hadoop.hbase.monitoring.MonitoredTask;
-143import 
org.apache.hadoop.hbase.monitoring.TaskMonitor;
-144import 
org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost;
-145import 
org.apache.hadoop.hbase.procedure.flush.MasterFlushTableProcedureManager;
-146import 
org.apache.hadoop.hbase.procedure2.LockedResource;
-147import 
org.apache.hadoop.hbase.procedure2.Procedure;
-148import 
org.apache.hadoop.hbase.procedure2.ProcedureEvent;
-149import 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
-150import 
org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher.RemoteProcedure;
-151import 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException;
-152import 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore;
-153import 
org.apache.hadoop.hbase.quotas.MasterQuotaManager;
-154import 
org.apache.hadoop.hbase.quotas.MasterSpaceQuotaObserver;
-155import 
org.apache.hadoop.hbase.quotas.QuotaObserverChore;
-156import 
org.apache.hadoop.hbase.quotas.QuotaUtil;
-157import 
org.apache.hadoop.hbase.quotas.SnapshotQuotaObserverChore;
-158import 
org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotNotifier;
-159import 
org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotNotifierFactory;
-160import 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine;
-161import 
org.apache.hadoop.hbase.regionserver.HRegionServer;
-162import 
org.apache.hadoop.hbase.regionserver.HStore;
-163import 
org.apache.hadoop.hbase.regionserver.RSRpcServices;
-164import 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost;
-165import 
org.apache.hadoop.hbase.regionserver.RegionSplitPolicy;
-166import 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy;
-167import 

[34/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/master/replication/RemovePeerProcedure.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/replication/RemovePeerProcedure.html
 
b/devapidocs/org/apache/hadoop/hbase/master/replication/RemovePeerProcedure.html
index e03eef8..b9c7cbd 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/replication/RemovePeerProcedure.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/replication/RemovePeerProcedure.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = {"i0":10,"i1":10,"i2":10,"i3":10};
+var methods = {"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -134,7 +134,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class RemovePeerProcedure
+public class RemovePeerProcedure
 extends ModifyPeerProcedure
 The procedure for removing a replication peer.
 
@@ -188,6 +188,10 @@ extends private static org.slf4j.Logger
 LOG
 
+
+private ReplicationPeerConfig
+peerConfig
+
 
 
 
@@ -238,22 +242,36 @@ extends Method and Description
 
 
+protected void
+deserializeStateData(ProcedureStateSerializer serializer)
+Called on store load to allow the user to decode the 
previously serialized
+ state.
+
+
+
 PeerProcedureInterface.PeerOperationType
 getPeerOperationType()
 
-
+
 protected void
 postPeerModification(MasterProcedureEnvenv)
 Called before we finish the procedure.
 
 
-
+
 protected void
 prePeerModification(MasterProcedureEnvenv)
 Called before we start the actual processing.
 
 
-
+
+protected void
+serializeStateData(ProcedureStateSerializer serializer)
+The user-level code of the procedure may have some state to
+ persist (e.g.
+
+
+
 protected void
 updatePeerStorage(MasterProcedureEnvenv)
 
@@ -270,7 +288,7 @@ extends AbstractPeerProcedure
-acquireLock,
 deserializeStateData,
 getLatch,
 getPeerId,
 hasLock,
 h
 oldLock, releaseLock,
 serializeStateData
+acquireLock,
 getLatch,
 getPeerId,
 hasLock,
 holdLock,
 releaseLock
 
 
 
 
@@ -310,10 +328,19 @@ extends 
 
 
-
+
 
 LOG
-private static final org.slf4j.Logger LOG
+private static final org.slf4j.Logger LOG
+
+
+
+
+
+
+
+peerConfig
+private ReplicationPeerConfig peerConfig
 
 
 
@@ -330,7 +357,7 @@ extends 
 
 RemovePeerProcedure
-public RemovePeerProcedure()
+public RemovePeerProcedure()
 
 
 
@@ -339,7 +366,7 @@ extends 
 
 RemovePeerProcedure
-publicRemovePeerProcedure(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringpeerId)
+publicRemovePeerProcedure(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringpeerId)
 
 
 
@@ -356,7 +383,7 @@ extends 
 
 getPeerOperationType
-public PeerProcedureInterface.PeerOperationType getPeerOperationType()
+public PeerProcedureInterface.PeerOperationType getPeerOperationType()
 
 
 
@@ -365,7 +392,7 @@ extends 
 
 prePeerModification
-protected void prePeerModification(MasterProcedureEnv env)
+protected void prePeerModification(MasterProcedureEnv env)
 throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 Description copied from 
class:ModifyPeerProcedure
 Called before we start the actual processing. The 
implementation should call the pre CP hook,
@@ -387,7 +414,7 @@ extends 
 
 updatePeerStorage
-protected void updatePeerStorage(MasterProcedureEnv env)
+protected void updatePeerStorage(MasterProcedureEnv env)
   throws ReplicationException
 
 Specified by:
@@ -400,10 +427,10 @@ extends 
 
 
-
+
 
 postPeerModification
-protected void postPeerModification(MasterProcedureEnv env)
+protected void postPeerModification(MasterProcedureEnv env)
  throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException,
 ReplicationException
 Description copied from 
class:ModifyPeerProcedure
@@ -423,6 +450,49 @@ extends 
+
+
+
+
+serializeStateData
+protected void serializeStateData(ProcedureStateSerializer serializer)
+   throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
+Description copied from 
class:Procedure
+The user-level code of the procedure may have some state to
+ persist (e.g. input arguments or current position in the processing state) to
+ be able to resume on failure.
+
+Overrides:
+serializeStateDatain
 

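The chunk above adds serializeStateData/deserializeStateData to RemovePeerProcedure so that its state (including the new peerConfig field) can be persisted and restored on store load. The underlying pattern — write the procedure's resumable state to bytes, read it back into a fresh instance after a restart — can be sketched generically; PeerStateSketch and its byte-array methods are illustrative, not HBase's ProcedureStateSerializer API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Generic sketch of procedure state persistence: enough state is written
// out (here, just a peer id) that a fresh instance can resume after failure.
public class PeerStateSketch {
    String peerId;

    byte[] serializeStateData() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeUTF(peerId);   // persist the input argument
        }
        return buf.toByteArray();
    }

    void deserializeStateData(byte[] data) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            peerId = in.readUTF();  // restore on store load
        }
    }

    public static void main(String[] args) throws IOException {
        PeerStateSketch before = new PeerStateSketch();
        before.peerId = "peer-1";
        byte[] stored = before.serializeStateData();   // e.g. written to the procedure store

        PeerStateSketch after = new PeerStateSketch(); // fresh instance on restart
        after.deserializeStateData(stored);
        System.out.println(after.peerId);              // prints peer-1
    }
}
```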
[16/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.html
index d7aa8b1..98a45a0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.ServerErrorTracker.html
@@ -680,1330 +680,1333 @@
 672}
 673ListHRegionLocation locations 
= new ArrayList();
 674for (RegionInfo regionInfo : regions) 
{
-675  RegionLocations list = 
locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
-676  if (list != null) {
-677for (HRegionLocation loc : 
list.getRegionLocations()) {
-678  if (loc != null) {
-679locations.add(loc);
-680  }
-681}
-682  }
-683}
-684return locations;
-685  }
-686
-687  @Override
-688  public HRegionLocation 
locateRegion(final TableName tableName, final byte[] row)
-689  throws IOException {
-690RegionLocations locations = 
locateRegion(tableName, row, true, true);
-691return locations == null ? null : 
locations.getRegionLocation();
-692  }
-693
-694  @Override
-695  public HRegionLocation 
relocateRegion(final TableName tableName, final byte[] row)
-696  throws IOException {
-697RegionLocations locations =
-698  relocateRegion(tableName, row, 
RegionReplicaUtil.DEFAULT_REPLICA_ID);
-699return locations == null ? null
-700  : 
locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
-701  }
-702
-703  @Override
-704  public RegionLocations 
relocateRegion(final TableName tableName,
-705  final byte [] row, int replicaId) 
throws IOException{
-706// Since this is an explicit request 
not to use any caching, finding
-707// disabled tables should not be 
desirable.  This will ensure that an exception is thrown when
-708// the first time a disabled table is 
interacted with.
-709if (!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
-710  throw new 
TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
-711}
-712
-713return locateRegion(tableName, row, 
false, true, replicaId);
-714  }
+675  if 
(!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+676continue;
+677  }
+678  RegionLocations list = 
locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
+679  if (list != null) {
+680for (HRegionLocation loc : 
list.getRegionLocations()) {
+681  if (loc != null) {
+682locations.add(loc);
+683  }
+684}
+685  }
+686}
+687return locations;
+688  }
+689
+690  @Override
+691  public HRegionLocation 
locateRegion(final TableName tableName, final byte[] row)
+692  throws IOException {
+693RegionLocations locations = 
locateRegion(tableName, row, true, true);
+694return locations == null ? null : 
locations.getRegionLocation();
+695  }
+696
+697  @Override
+698  public HRegionLocation 
relocateRegion(final TableName tableName, final byte[] row)
+699  throws IOException {
+700RegionLocations locations =
+701  relocateRegion(tableName, row, 
RegionReplicaUtil.DEFAULT_REPLICA_ID);
+702return locations == null ? null
+703  : 
locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
+704  }
+705
+706  @Override
+707  public RegionLocations 
relocateRegion(final TableName tableName,
+708  final byte [] row, int replicaId) 
throws IOException{
+709// Since this is an explicit request 
not to use any caching, finding
+710// disabled tables should not be 
desirable.  This will ensure that an exception is thrown when
+711// the first time a disabled table is 
interacted with.
+712if (!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
+713  throw new 
TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
+714}
 715
-716  @Override
-717  public RegionLocations 
locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-718  boolean retry) throws IOException 
{
-719return locateRegion(tableName, row, 
useCache, retry, RegionReplicaUtil.DEFAULT_REPLICA_ID);
-720  }
-721
-722  @Override
-723  public RegionLocations 
locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-724  boolean retry, int replicaId) 
throws IOException {
-725checkClosed();
-726if (tableName == null || 
tableName.getName().length == 0) {
-727  throw new 
IllegalArgumentException("table name cannot be null or zero length");
-728}
-729if 

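The hunk above adds a check that skips non-default replicas while walking a table's regions, presumably because locating the default replica already returns the locations of every replica (as the loop over list.getRegionLocations() suggests), so visiting secondaries would duplicate work. The shape of that filter can be sketched generically; RegionStub and its fields are invented stand-ins for RegionInfo, not HBase classes.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "skip secondary replicas" filter added in the hunk above.
public class ReplicaFilterSketch {
    static final int DEFAULT_REPLICA_ID = 0;

    static class RegionStub {
        final String name;
        final int replicaId;
        RegionStub(String name, int replicaId) { this.name = name; this.replicaId = replicaId; }
    }

    static List<String> defaultReplicasOnly(List<RegionStub> regions) {
        List<String> out = new ArrayList<>();
        for (RegionStub r : regions) {
            if (r.replicaId != DEFAULT_REPLICA_ID) {
                continue;              // mirrors the added isDefaultReplica() check
            }
            out.add(r.name);
        }
        return out;
    }

    public static void main(String[] args) {
        List<RegionStub> regions = new ArrayList<>();
        regions.add(new RegionStub("r1", 0));
        regions.add(new RegionStub("r1", 1)); // secondary replica, skipped
        regions.add(new RegionStub("r2", 0));
        System.out.println(defaultReplicasOnly(regions)); // [r1, r2]
    }
}
```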
[21/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.Command.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.Command.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.Command.html
index a12ad81..f236300 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.Command.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.Command.html
@@ -49,31 +49,31 @@
 041import java.net.URI;
 042import java.util.List;
 043
-044import 
org.apache.commons.cli.CommandLine;
-045import 
org.apache.commons.cli.HelpFormatter;
-046import org.apache.commons.cli.Options;
-047import 
org.apache.commons.lang3.StringUtils;
-048import 
org.apache.hadoop.conf.Configuration;
-049import 
org.apache.hadoop.conf.Configured;
-050import org.apache.hadoop.fs.FileSystem;
-051import org.apache.hadoop.fs.Path;
-052import 
org.apache.hadoop.hbase.HBaseConfiguration;
-053import 
org.apache.hadoop.hbase.TableName;
-054import 
org.apache.hadoop.hbase.backup.BackupAdmin;
-055import 
org.apache.hadoop.hbase.backup.BackupInfo;
-056import 
org.apache.hadoop.hbase.backup.BackupInfo.BackupState;
-057import 
org.apache.hadoop.hbase.backup.BackupRequest;
-058import 
org.apache.hadoop.hbase.backup.BackupRestoreConstants;
-059import 
org.apache.hadoop.hbase.backup.BackupRestoreConstants.BackupCommand;
-060import 
org.apache.hadoop.hbase.backup.BackupType;
-061import 
org.apache.hadoop.hbase.backup.HBackupFileSystem;
-062import 
org.apache.hadoop.hbase.backup.util.BackupSet;
-063import 
org.apache.hadoop.hbase.backup.util.BackupUtils;
-064import 
org.apache.hadoop.hbase.client.Connection;
-065import 
org.apache.hadoop.hbase.client.ConnectionFactory;
-066import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-067import 
org.apache.yetus.audience.InterfaceAudience;
-068import 
org.apache.hbase.thirdparty.com.google.common.collect.Lists;
+044import 
org.apache.commons.lang3.StringUtils;
+045import 
org.apache.hadoop.conf.Configuration;
+046import 
org.apache.hadoop.conf.Configured;
+047import org.apache.hadoop.fs.FileSystem;
+048import org.apache.hadoop.fs.Path;
+049import 
org.apache.hadoop.hbase.HBaseConfiguration;
+050import 
org.apache.hadoop.hbase.TableName;
+051import 
org.apache.hadoop.hbase.backup.BackupAdmin;
+052import 
org.apache.hadoop.hbase.backup.BackupInfo;
+053import 
org.apache.hadoop.hbase.backup.BackupInfo.BackupState;
+054import 
org.apache.hadoop.hbase.backup.BackupRequest;
+055import 
org.apache.hadoop.hbase.backup.BackupRestoreConstants;
+056import 
org.apache.hadoop.hbase.backup.BackupRestoreConstants.BackupCommand;
+057import 
org.apache.hadoop.hbase.backup.BackupType;
+058import 
org.apache.hadoop.hbase.backup.HBackupFileSystem;
+059import 
org.apache.hadoop.hbase.backup.util.BackupSet;
+060import 
org.apache.hadoop.hbase.backup.util.BackupUtils;
+061import 
org.apache.hadoop.hbase.client.Connection;
+062import 
org.apache.hadoop.hbase.client.ConnectionFactory;
+063import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+064import 
org.apache.yetus.audience.InterfaceAudience;
+065import 
org.apache.hbase.thirdparty.com.google.common.collect.Lists;
+066import 
org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
+067import 
org.apache.hbase.thirdparty.org.apache.commons.cli.HelpFormatter;
+068import 
org.apache.hbase.thirdparty.org.apache.commons.cli.Options;
 069
 070/**
 071 * General backup commands, options and 
usage messages

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.CreateCommand.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.CreateCommand.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.CreateCommand.html
index a12ad81..f236300 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.CreateCommand.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupCommands.CreateCommand.html
@@ -49,31 +49,31 @@
 041import java.net.URI;
 042import java.util.List;
 043
-044import 
org.apache.commons.cli.CommandLine;
-045import 
org.apache.commons.cli.HelpFormatter;
-046import org.apache.commons.cli.Options;
-047import 
org.apache.commons.lang3.StringUtils;
-048import 
org.apache.hadoop.conf.Configuration;
-049import 
org.apache.hadoop.conf.Configured;
-050import org.apache.hadoop.fs.FileSystem;
-051import org.apache.hadoop.fs.Path;
-052import 
org.apache.hadoop.hbase.HBaseConfiguration;
-053import 
org.apache.hadoop.hbase.TableName;
-054import 
org.apache.hadoop.hbase.backup.BackupAdmin;
-055import 
org.apache.hadoop.hbase.backup.BackupInfo;
-056import 

[35/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.PlanComparator.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.PlanComparator.html
 
b/devapidocs/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.PlanComparator.html
new file mode 100644
index 000..906a0f1
--- /dev/null
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.PlanComparator.html
@@ -0,0 +1,295 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+
+SimpleRegionNormalizer.PlanComparator (Apache HBase 3.0.0-SNAPSHOT 
API)
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.hadoop.hbase.master.normalizer
+Class 
SimpleRegionNormalizer.PlanComparator
+
+
+
+https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">java.lang.Object
+
+
+org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer.PlanComparator
+
+
+
+
+
+
+
+All Implemented Interfaces:
+https://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html?is-external=true;
 title="class or interface in java.util">Comparator<NormalizationPlan>
+
+
+Enclosing class:
+SimpleRegionNormalizer
+
+
+
+static class SimpleRegionNormalizer.PlanComparator
+extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
+implements https://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html?is-external=true;
 title="class or interface in java.util">Comparator<NormalizationPlan>
+Comparator class that gives higher priority to region split plans.
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+PlanComparator()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+All MethodsInstance MethodsConcrete Methods
+
+Modifier and Type
+Method and Description
+
+
+int
+compare(NormalizationPlan plan1,
+   NormalizationPlan plan2)
+
+
+
+
+
+
+Methods inherited from class java.lang.Object
+clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
+
+
+
+
+
+Methods inherited from interface java.util.Comparator

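The new SimpleRegionNormalizer.PlanComparator page above documents a comparator that gives split plans higher priority when the normalizer orders its plans. Its ordering logic can be sketched like this; Plan and PlanType are simplified stand-ins for illustration, not HBase's NormalizationPlan classes.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of a comparator that sorts split plans ahead of other plan types.
public class PlanOrderSketch {
    enum PlanType { SPLIT, MERGE }

    static class Plan {
        final PlanType type;
        Plan(PlanType type) { this.type = type; }
    }

    // Split plans compare as "smaller", so they sort to the front.
    static final Comparator<Plan> PLAN_COMPARATOR = (p1, p2) -> {
        boolean s1 = p1.type == PlanType.SPLIT;
        boolean s2 = p2.type == PlanType.SPLIT;
        if (s1 == s2) return 0;
        return s1 ? -1 : 1;
    };

    static List<PlanType> sortedTypes(List<Plan> plans) {
        plans.sort(PLAN_COMPARATOR);
        List<PlanType> out = new ArrayList<>();
        for (Plan p : plans) out.add(p.type);
        return out;
    }

    public static void main(String[] args) {
        List<Plan> plans = new ArrayList<>();
        plans.add(new Plan(PlanType.MERGE));
        plans.add(new Plan(PlanType.SPLIT));
        plans.add(new Plan(PlanType.MERGE));
        System.out.println(sortedTypes(plans)); // [SPLIT, MERGE, MERGE]
    }
}
```

Returning 0 for equal-priority plans keeps the sort stable among plans of the same type.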
[10/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.Bucket.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.Bucket.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.Bucket.html
index b035b7c..29f1b92 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.Bucket.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.Bucket.html
@@ -37,19 +37,19 @@
 029import java.util.Set;
 030import 
java.util.concurrent.atomic.LongAdder;
 031
-032import 
org.apache.hbase.thirdparty.com.google.common.collect.MinMaxPriorityQueue;
-033import 
org.apache.commons.collections4.map.LinkedMap;
-034import 
org.apache.yetus.audience.InterfaceAudience;
-035import org.slf4j.Logger;
-036import org.slf4j.LoggerFactory;
-037import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-038import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
-039import 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry;
-040import 
com.fasterxml.jackson.annotation.JsonIgnoreProperties;
-041
-042import 
org.apache.hbase.thirdparty.com.google.common.base.MoreObjects;
-043import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-044import 
org.apache.hbase.thirdparty.com.google.common.primitives.Ints;
+032import 
org.apache.yetus.audience.InterfaceAudience;
+033import org.slf4j.Logger;
+034import org.slf4j.LoggerFactory;
+035import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+036import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
+037import 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry;
+038import 
com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+039
+040import 
org.apache.hbase.thirdparty.com.google.common.base.MoreObjects;
+041import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+042import 
org.apache.hbase.thirdparty.com.google.common.collect.MinMaxPriorityQueue;
+043import 
org.apache.hbase.thirdparty.com.google.common.primitives.Ints;
+044import 
org.apache.hbase.thirdparty.org.apache.commons.collections4.map.LinkedMap;
 045
 046/**
 047 * This class is used to allocate a block 
with specified size and free the block

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.BucketSizeInfo.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.BucketSizeInfo.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.BucketSizeInfo.html
index b035b7c..29f1b92 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.BucketSizeInfo.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.BucketSizeInfo.html
@@ -37,19 +37,19 @@
 029import java.util.Set;
 030import 
java.util.concurrent.atomic.LongAdder;
 031
-032import 
org.apache.hbase.thirdparty.com.google.common.collect.MinMaxPriorityQueue;
-033import 
org.apache.commons.collections4.map.LinkedMap;
-034import 
org.apache.yetus.audience.InterfaceAudience;
-035import org.slf4j.Logger;
-036import org.slf4j.LoggerFactory;
-037import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-038import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
-039import 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry;
-040import 
com.fasterxml.jackson.annotation.JsonIgnoreProperties;
-041
-042import 
org.apache.hbase.thirdparty.com.google.common.base.MoreObjects;
-043import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-044import 
org.apache.hbase.thirdparty.com.google.common.primitives.Ints;
+032import 
org.apache.yetus.audience.InterfaceAudience;
+033import org.slf4j.Logger;
+034import org.slf4j.LoggerFactory;
+035import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+036import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
+037import 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.BucketEntry;
+038import 
com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+039
+040import 
org.apache.hbase.thirdparty.com.google.common.base.MoreObjects;
+041import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+042import 
org.apache.hbase.thirdparty.com.google.common.collect.MinMaxPriorityQueue;
+043import 
org.apache.hbase.thirdparty.com.google.common.primitives.Ints;
+044import 
org.apache.hbase.thirdparty.org.apache.commons.collections4.map.LinkedMap;
 045
 046/**
 047 * This class is used to allocate a block 
with specified size and free the block

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.IndexStatistics.html

[36/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/master/cleaner/CleanerChore.DirScanPool.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/cleaner/CleanerChore.DirScanPool.html
 
b/devapidocs/org/apache/hadoop/hbase/master/cleaner/CleanerChore.DirScanPool.html
new file mode 100644
index 000..cb7d24d
--- /dev/null
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/cleaner/CleanerChore.DirScanPool.html
@@ -0,0 +1,415 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+
+CleanerChore.DirScanPool (Apache HBase 3.0.0-SNAPSHOT API)
+
+
+
+
+
+org.apache.hadoop.hbase.master.cleaner
+Class 
CleanerChore.DirScanPool
+
+
+
+https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">java.lang.Object
+
+
+org.apache.hadoop.hbase.master.cleaner.CleanerChore.DirScanPool
+
+
+
+
+
+
+
+Enclosing class:
+CleanerChoreT extends FileCleanerDelegate
+
+
+
+private static class CleanerChore.DirScanPool
+extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
+
+
+
+
+
+
+
+
+
+
+
+Field Summary
+
+Fields
+
+Modifier and Type
+Field and Description
+
+
+(package private) int
+cleanerLatch
+
+
+(package private) https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html?is-external=true;
 title="class or interface in java.util.concurrent">ForkJoinPool
+pool
+
+
+(package private) https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicBoolean.html?is-external=true;
 title="class or interface in 
java.util.concurrent.atomic">AtomicBoolean
+reconfigNotification
+
+
+(package private) int
+size
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+DirScanPool(org.apache.hadoop.conf.Configurationconf)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+All MethodsInstance MethodsConcrete Methods
+
+Modifier and Type
+Method and Description
+
+
+(package private) boolean
+canUpdateImmediately(org.apache.hadoop.conf.Configurationconf)
+Checks if pool can be updated immediately.
+
+
+
+(package private) void
+latchCountDown()
+
+
+(package private) void
+latchCountUp()
+
+
+(package private) void
+submit(https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinTask.html?is-external=true;
 title="class or interface in 
java.util.concurrent">ForkJoinTasktask)
+
+
+(package private) void
+updatePool(longtimeout)
+Update pool with new size.
+
+
+
+
+
+
+
+Methods inherited from classjava.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
+https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#clone--;
 title="class or interface in java.lang">clone, https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#equals-java.lang.Object-;
 title="class or interface in java.lang">equals, https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#finalize--;
 title="class or interface in java.lang">finalize, https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#getClass--;
 title="class or interface in java.lang">getClass, https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#hashCode--;
 title="class or interface in java.lang">hashCode, https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#notify--;
 title="class or interface in java.lang">notify, https://docs.oracle.com/javase/8/docs/api/ja
 va/lang/Object.html?is-external=true#notifyAll--" title="class or interface in 
java.lang">notifyAll, https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#toString--;
 title="class or interface in java.lang">toString, 
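The CleanerChore.DirScanPool summary above describes a small wrapper around a ForkJoinPool with a latch (`cleanerLatch`) counting in-flight scans, so that a pool resize (`updatePool`) waits until no scan is running. The HBase implementation is not shown in this page, so the following is only a hypothetical sketch of that pattern; the field names are taken from the summary, but the signatures and locking details are assumptions.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;

// Hypothetical sketch of the latch-guarded pool described above:
// resizing is deferred until no cleaner scan is in flight.
class DirScanPoolSketch {
    private ForkJoinPool pool;
    private int size;
    private int cleanerLatch;               // number of in-flight scans
    private final Object latchLock = new Object();

    DirScanPoolSketch(int initialSize) {
        this.size = initialSize;
        this.pool = new ForkJoinPool(initialSize);
    }

    void latchCountUp() {
        synchronized (latchLock) { cleanerLatch++; }
    }

    void latchCountDown() {
        synchronized (latchLock) {
            cleanerLatch--;
            latchLock.notifyAll();          // wake a pending updatePool()
        }
    }

    boolean canUpdateImmediately() {
        synchronized (latchLock) { return cleanerLatch == 0; }
    }

    void submit(ForkJoinTask<?> task) {
        pool.submit(task);
    }

    // Replace the pool with one of the new size, waiting up to `timeout`
    // milliseconds for in-flight scans to drain first.
    void updatePool(long timeout, int newSize) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeout;
        synchronized (latchLock) {
            while (cleanerLatch > 0) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    break;                  // give up waiting, resize anyway
                }
                latchLock.wait(remaining);
            }
        }
        pool.shutdownNow();
        pool = new ForkJoinPool(newSize);
        size = newSize;
    }
}
```

The design choice worth noting is that the latch avoids tearing down a pool while directory scans are still submitted to it, while the timeout bounds how long a configuration change can be delayed.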

[25/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
index 1038256..89735e7 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
@@ -36,398 +36,399 @@
 028import java.util.List;
 029import java.util.Map;
 030
-031import 
org.apache.commons.collections4.iterators.UnmodifiableIterator;
-032import 
org.apache.hadoop.conf.Configuration;
-033import 
org.apache.hadoop.hbase.util.Bytes;
-034import 
org.apache.yetus.audience.InterfaceAudience;
-035
-036/**
-037 * Do a shallow merge of multiple KV 
configuration pools. This is a very useful
-038 * utility class to easily add per-object 
configurations in addition to wider
-039 * scope settings. This is different from 
Configuration.addResource()
-040 * functionality, which performs a deep 
merge and mutates the common data
-041 * structure.
-042 * p
-043 * The iterator on CompoundConfiguration 
is unmodifiable. Obtaining iterator is an expensive
-044 * operation.
-045 * p
-046 * For clarity: the shallow merge allows 
the user to mutate either of the
-047 * configuration objects and have changes 
reflected everywhere. In contrast to a
-048 * deep merge, that requires you to 
explicitly know all applicable copies to
-049 * propagate changes.
-050 * 
-051 * WARNING: The values set in the 
CompoundConfiguration are do not handle Property variable
-052 * substitution.  However, if they are 
set in the underlying configuration substitutions are
-053 * done. 
-054 */
-055@InterfaceAudience.Private
-056public class CompoundConfiguration 
extends Configuration {
-057
-058  private Configuration mutableConf = 
null;
-059
-060  /**
-061   * Default Constructor. Initializes 
empty configuration
-062   */
-063  public CompoundConfiguration() {
-064  }
-065
-066  // Devs: these APIs are the same 
contract as their counterparts in
-067  // Configuration.java
-068  private interface ImmutableConfigMap 
extends IterableMap.EntryString,String {
-069String get(String key);
-070String getRaw(String key);
-071Class? getClassByName(String 
name) throws ClassNotFoundException;
-072int size();
-073  }
-074
-075  private final 
ListImmutableConfigMap configs = new ArrayList();
-076
-077  static class ImmutableConfWrapper 
implements  ImmutableConfigMap {
-078   private final Configuration c;
-079
-080ImmutableConfWrapper(Configuration 
conf) {
-081  c = conf;
-082}
-083
-084@Override
-085public 
IteratorMap.EntryString,String iterator() {
-086  return c.iterator();
-087}
-088
-089@Override
-090public String get(String key) {
-091  return c.get(key);
-092}
-093
-094@Override
-095public String getRaw(String key) {
-096  return c.getRaw(key);
-097}
-098
-099@Override
-100public Class? 
getClassByName(String name)
-101throws ClassNotFoundException {
-102  return c.getClassByName(name);
-103}
-104
-105@Override
-106public int size() {
-107  return c.size();
-108}
-109
-110@Override
-111public String toString() {
-112  return c.toString();
-113}
-114  }
-115
-116  /**
-117   * If set has been called, it will 
create a mutableConf.  This converts the mutableConf to an
-118   * immutable one and resets it to allow 
a new mutable conf.  This is used when a new map or
-119   * conf is added to the compound 
configuration to preserve proper override semantics.
-120   */
-121  void freezeMutableConf() {
-122if (mutableConf == null) {
-123  // do nothing if there is no 
current mutableConf
-124  return;
-125}
-126
-127this.configs.add(0, new 
ImmutableConfWrapper(mutableConf));
-128mutableConf = null;
-129  }
-130
-131  /**
-132   * Add Hadoop Configuration object to 
config list.
-133   * The added configuration overrides 
the previous ones if there are name collisions.
-134   * @param conf configuration object
-135   * @return this, for builder pattern
-136   */
-137  public CompoundConfiguration add(final 
Configuration conf) {
-138freezeMutableConf();
-139
-140if (conf instanceof 
CompoundConfiguration) {
-141  this.configs.addAll(0, 
((CompoundConfiguration) conf).configs);
-142  return this;
-143}
-144// put new config at the front of the 
list (top priority)
-145this.configs.add(0, new 
ImmutableConfWrapper(conf));
-146return this;
-147  }
-148
-149  /**
-150   * Add Bytes map to config list. This 
map is generally
-151   * created by HTableDescriptor or 
HColumnDescriptor, but can be 

[38/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/master/HMaster.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/master/HMaster.html 
b/devapidocs/org/apache/hadoop/hbase/master/HMaster.html
index afc44ef..c6298a1 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/HMaster.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/HMaster.html
@@ -128,7 +128,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.LimitedPrivate(value="Tools")
-public class HMaster
+public class HMaster
 extends HRegionServer
 implements MasterServices
 HMaster is the "master server" for HBase. An HBase cluster 
has one active
@@ -1485,7 +1485,7 @@ implements 
 
 LOG
-private staticorg.slf4j.Logger LOG
+private staticorg.slf4j.Logger LOG
 
 
 
@@ -1494,7 +1494,7 @@ implements 
 
 MASTER
-public static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String MASTER
+public static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String MASTER
 
 See Also:
 Constant
 Field Values
@@ -1507,7 +1507,7 @@ implements 
 
 activeMasterManager
-private finalActiveMasterManager activeMasterManager
+private finalActiveMasterManager activeMasterManager
 
 
 
@@ -1516,7 +1516,7 @@ implements 
 
 regionServerTracker
-RegionServerTracker regionServerTracker
+RegionServerTracker regionServerTracker
 
 
 
@@ -1525,7 +1525,7 @@ implements 
 
 drainingServerTracker
-privateDrainingServerTracker drainingServerTracker
+privateDrainingServerTracker drainingServerTracker
 
 
 
@@ -1534,7 +1534,7 @@ implements 
 
 loadBalancerTracker
-LoadBalancerTracker loadBalancerTracker
+LoadBalancerTracker loadBalancerTracker
 
 
 
@@ -1543,7 +1543,7 @@ implements 
 
 splitOrMergeTracker
-privateSplitOrMergeTracker splitOrMergeTracker
+privateSplitOrMergeTracker splitOrMergeTracker
 
 
 
@@ -1552,7 +1552,7 @@ implements 
 
 regionNormalizerTracker
-privateRegionNormalizerTracker 
regionNormalizerTracker
+privateRegionNormalizerTracker 
regionNormalizerTracker
 
 
 
@@ -1561,7 +1561,7 @@ implements 
 
 maintenanceModeTracker
-privateMasterMaintenanceModeTracker maintenanceModeTracker
+privateMasterMaintenanceModeTracker maintenanceModeTracker
 
 
 
@@ -1570,7 +1570,7 @@ implements 
 
 clusterSchemaService
-privateClusterSchemaService clusterSchemaService
+privateClusterSchemaService clusterSchemaService
 
 
 
@@ -1579,7 +1579,7 @@ implements 
 
 HBASE_MASTER_WAIT_ON_SERVICE_IN_SECONDS
-public static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String HBASE_MASTER_WAIT_ON_SERVICE_IN_SECONDS
+public static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String HBASE_MASTER_WAIT_ON_SERVICE_IN_SECONDS
 
 See Also:
 Constant
 Field Values
@@ -1592,7 +1592,7 @@ implements 
 
 DEFAULT_HBASE_MASTER_WAIT_ON_SERVICE_IN_SECONDS
-public static finalint DEFAULT_HBASE_MASTER_WAIT_ON_SERVICE_IN_SECONDS
+public static finalint DEFAULT_HBASE_MASTER_WAIT_ON_SERVICE_IN_SECONDS
 
 See Also:
 Constant
 Field Values
@@ -1605,7 +1605,7 @@ implements 
 
 metricsMaster
-finalMetricsMaster metricsMaster
+finalMetricsMaster metricsMaster
 
 
 
@@ -1614,7 +1614,7 @@ implements 
 
 fileSystemManager
-privateMasterFileSystem fileSystemManager
+privateMasterFileSystem fileSystemManager
 
 
 
@@ -1623,7 +1623,7 @@ implements 
 
 walManager
-privateMasterWalManager walManager
+privateMasterWalManager walManager
 
 
 
@@ -1632,7 +1632,7 @@ implements 
 
 serverManager
-private volatileServerManager serverManager
+private volatileServerManager serverManager
 
 
 
@@ -1641,7 +1641,7 @@ implements 
 
 assignmentManager
-privateAssignmentManager assignmentManager
+privateAssignmentManager assignmentManager
 
 
 
@@ -1650,7 +1650,7 @@ implements 
 
 replicationPeerManager
-privateReplicationPeerManager replicationPeerManager
+privateReplicationPeerManager replicationPeerManager
 
 
 
@@ -1659,7 +1659,7 @@ implements 
 
 rsFatals
-MemoryBoundedLogMessageBuffer rsFatals
+MemoryBoundedLogMessageBuffer rsFatals
 
 
 
@@ -1668,7 +1668,7 @@ implements 
 
 activeMaster
-private volatileboolean activeMaster
+private volatileboolean activeMaster
 
 
 
@@ -1677,7 +1677,7 @@ implements 
 
 initialized
-private finalProcedureEvent? 
initialized
+private finalProcedureEvent? 
initialized
 
 
 
@@ -1686,7 +1686,7 @@ implements 
 
 serviceStarted
-volatileboolean serviceStarted
+volatileboolean serviceStarted
 
 
 
@@ -1695,7 +1695,7 @@ implements 
 
 serverCrashProcessingEnabled
-private finalProcedureEvent? 
serverCrashProcessingEnabled
+private finalProcedureEvent? 
serverCrashProcessingEnabled
 
 
 
@@ -1704,7 +1704,7 @@ implements 
 
 maxBlancingTime
-private 

[06/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.html
index 79bf967..c8b113b 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.html
@@ -115,3514 +115,3517 @@
 107import 
org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 108import 
org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
 109import 
org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory;
-110import 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
-111import 
org.apache.hadoop.hbase.master.cleaner.LogCleaner;
-112import 
org.apache.hadoop.hbase.master.cleaner.ReplicationBarrierCleaner;
-113import 
org.apache.hadoop.hbase.master.locking.LockManager;
-114import 
org.apache.hadoop.hbase.master.normalizer.NormalizationPlan;
-115import 
org.apache.hadoop.hbase.master.normalizer.NormalizationPlan.PlanType;
-116import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizer;
-117import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizerChore;
-118import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizerFactory;
-119import 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure;
-120import 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure;
-121import 
org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure;
-122import 
org.apache.hadoop.hbase.master.procedure.DisableTableProcedure;
-123import 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure;
-124import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureConstants;
-125import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv;
-126import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler;
-127import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
-128import 
org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure;
-129import 
org.apache.hadoop.hbase.master.procedure.ProcedurePrepareLatch;
-130import 
org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure;
-131import 
org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure;
-132import 
org.apache.hadoop.hbase.master.replication.AddPeerProcedure;
-133import 
org.apache.hadoop.hbase.master.replication.DisablePeerProcedure;
-134import 
org.apache.hadoop.hbase.master.replication.EnablePeerProcedure;
-135import 
org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure;
-136import 
org.apache.hadoop.hbase.master.replication.RemovePeerProcedure;
-137import 
org.apache.hadoop.hbase.master.replication.ReplicationPeerManager;
-138import 
org.apache.hadoop.hbase.master.replication.UpdatePeerConfigProcedure;
-139import 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
-140import 
org.apache.hadoop.hbase.mob.MobConstants;
-141import 
org.apache.hadoop.hbase.monitoring.MemoryBoundedLogMessageBuffer;
-142import 
org.apache.hadoop.hbase.monitoring.MonitoredTask;
-143import 
org.apache.hadoop.hbase.monitoring.TaskMonitor;
-144import 
org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost;
-145import 
org.apache.hadoop.hbase.procedure.flush.MasterFlushTableProcedureManager;
-146import 
org.apache.hadoop.hbase.procedure2.LockedResource;
-147import 
org.apache.hadoop.hbase.procedure2.Procedure;
-148import 
org.apache.hadoop.hbase.procedure2.ProcedureEvent;
-149import 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
-150import 
org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher.RemoteProcedure;
-151import 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException;
-152import 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore;
-153import 
org.apache.hadoop.hbase.quotas.MasterQuotaManager;
-154import 
org.apache.hadoop.hbase.quotas.MasterSpaceQuotaObserver;
-155import 
org.apache.hadoop.hbase.quotas.QuotaObserverChore;
-156import 
org.apache.hadoop.hbase.quotas.QuotaUtil;
-157import 
org.apache.hadoop.hbase.quotas.SnapshotQuotaObserverChore;
-158import 
org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotNotifier;
-159import 
org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotNotifierFactory;
-160import 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine;
-161import 
org.apache.hadoop.hbase.regionserver.HRegionServer;
-162import 
org.apache.hadoop.hbase.regionserver.HStore;
-163import 
org.apache.hadoop.hbase.regionserver.RSRpcServices;
-164import 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost;
-165import 
org.apache.hadoop.hbase.regionserver.RegionSplitPolicy;
-166import 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy;
-167import 
org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy;
-168import 

[42/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
 
b/devapidocs/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
index e584222..f3b180f 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/CompoundConfiguration.ImmutableConfWrapper.html
@@ -117,7 +117,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-static class CompoundConfiguration.ImmutableConfWrapper
+static class CompoundConfiguration.ImmutableConfWrapper
 extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
 implements CompoundConfiguration.ImmutableConfigMap
 
@@ -233,7 +233,7 @@ implements 
 
 c
-private finalorg.apache.hadoop.conf.Configuration c
+private finalorg.apache.hadoop.conf.Configuration c
 
 
 
@@ -250,7 +250,7 @@ implements 
 
 ImmutableConfWrapper
-ImmutableConfWrapper(org.apache.hadoop.conf.Configurationconf)
+ImmutableConfWrapper(org.apache.hadoop.conf.Configurationconf)
 
 
 
@@ -267,7 +267,7 @@ implements 
 
 iterator
-publichttps://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">Iteratorhttps://docs.oracle.com/javase/8/docs/api/java/util/Map.Entry.html?is-external=true;
 title="class or interface in java.util">Map.Entryhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringiterator()
+publichttps://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">Iteratorhttps://docs.oracle.com/javase/8/docs/api/java/util/Map.Entry.html?is-external=true;
 title="class or interface in java.util">Map.Entryhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringiterator()
 
 Specified by:
 https://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html?is-external=true#iterator--;
 title="class or interface in java.lang">iteratorin 
interfacehttps://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html?is-external=true;
 title="class or interface in java.lang">Iterablehttps://docs.oracle.com/javase/8/docs/api/java/util/Map.Entry.html?is-external=true;
 title="class or interface in java.util">Map.Entryhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
@@ -280,7 +280,7 @@ implements 
 
 get
-publichttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringget(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringkey)
+publichttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringget(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringkey)
 
 Specified by:
 getin
 interfaceCompoundConfiguration.ImmutableConfigMap
@@ -293,7 +293,7 @@ implements 
 
 getRaw
-publichttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringgetRaw(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringkey)
+publichttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringgetRaw(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringkey)
 
 Specified by:
 getRawin
 interfaceCompoundConfiguration.ImmutableConfigMap
@@ -306,7 +306,7 @@ implements 
 
 getClassByName
-publichttps://docs.oracle.com/javase/8/docs/api/java/lang/Class.html?is-external=true;
 title="class or interface in java.lang">Class?getClassByName(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
+publichttps://docs.oracle.com/javase/8/docs/api/java/lang/Class.html?is-external=true;
 title="class or interface in 

[15/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.html
index d7aa8b1..98a45a0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.html
@@ -680,1330 +680,1333 @@
 672}
 673ListHRegionLocation locations 
= new ArrayList();
 674for (RegionInfo regionInfo : regions) 
{
-675  RegionLocations list = 
locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
-676  if (list != null) {
-677for (HRegionLocation loc : 
list.getRegionLocations()) {
-678  if (loc != null) {
-679locations.add(loc);
-680  }
-681}
-682  }
-683}
-684return locations;
-685  }
-686
-687  @Override
-688  public HRegionLocation 
locateRegion(final TableName tableName, final byte[] row)
-689  throws IOException {
-690RegionLocations locations = 
locateRegion(tableName, row, true, true);
-691return locations == null ? null : 
locations.getRegionLocation();
-692  }
-693
-694  @Override
-695  public HRegionLocation 
relocateRegion(final TableName tableName, final byte[] row)
-696  throws IOException {
-697RegionLocations locations =
-698  relocateRegion(tableName, row, 
RegionReplicaUtil.DEFAULT_REPLICA_ID);
-699return locations == null ? null
-700  : 
locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
-701  }
-702
-703  @Override
-704  public RegionLocations 
relocateRegion(final TableName tableName,
-705  final byte [] row, int replicaId) 
throws IOException{
-706// Since this is an explicit request 
not to use any caching, finding
-707// disabled tables should not be 
desirable.  This will ensure that an exception is thrown when
-708// the first time a disabled table is 
interacted with.
-709if 
(!tableName.equals(TableName.META_TABLE_NAME)  
isTableDisabled(tableName)) {
-710  throw new 
TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
-711        }
-712
-713    return locateRegion(tableName, row, false, true, replicaId);
-714  }
+675      if (!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+676        continue;
+677      }
+678      RegionLocations list = locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
+679      if (list != null) {
+680        for (HRegionLocation loc : list.getRegionLocations()) {
+681          if (loc != null) {
+682            locations.add(loc);
+683          }
+684        }
+685      }
+686    }
+687    return locations;
+688  }
+689
+690  @Override
+691  public HRegionLocation locateRegion(final TableName tableName, final byte[] row)
+692      throws IOException {
+693    RegionLocations locations = locateRegion(tableName, row, true, true);
+694    return locations == null ? null : locations.getRegionLocation();
+695  }
+696
+697  @Override
+698  public HRegionLocation relocateRegion(final TableName tableName, final byte[] row)
+699      throws IOException {
+700    RegionLocations locations =
+701      relocateRegion(tableName, row, RegionReplicaUtil.DEFAULT_REPLICA_ID);
+702    return locations == null ? null
+703      : locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
+704  }
+705
+706  @Override
+707  public RegionLocations relocateRegion(final TableName tableName,
+708      final byte [] row, int replicaId) throws IOException{
+709    // Since this is an explicit request not to use any caching, finding
+710    // disabled tables should not be desirable.  This will ensure that an exception is thrown when
+711    // the first time a disabled table is interacted with.
+712    if (!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
+713      throw new TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
+714    }
 715
-716  @Override
-717  public RegionLocations locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-718      boolean retry) throws IOException {
-719    return locateRegion(tableName, row, useCache, retry, RegionReplicaUtil.DEFAULT_REPLICA_ID);
-720  }
-721
-722  @Override
-723  public RegionLocations locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-724      boolean retry, int replicaId) throws IOException {
-725    checkClosed();
-726    if (tableName == null || tableName.getName().length == 0) {
-727      throw new IllegalArgumentException("table name cannot be null or zero length");
-728    }
-729    if (tableName.equals(TableName.META_TABLE_NAME)) {
-730      return locateMeta(tableName, useCache, replicaId);
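The diff above shows every public `locateRegion`/`relocateRegion` overload funneling into a single replica-aware lookup that defaults to `RegionReplicaUtil.DEFAULT_REPLICA_ID`. A minimal self-contained sketch of that delegation-plus-cache pattern (hypothetical `locate`/`relocate` names and cache key, not the real HBase client API):

```java
// Sketch of the overload-funnel pattern: public entry points delegate to one
// replica-aware method; relocate() is a cache-bypassing variant.
import java.util.HashMap;
import java.util.Map;

public class RegionLocatorSketch {
    static final int DEFAULT_REPLICA_ID = 0;
    // Hypothetical cache key "table/row/replicaId"; the real client caches per region.
    private final Map<String, String> cache = new HashMap<>();

    public String locate(String table, String row) {
        return locate(table, row, true, DEFAULT_REPLICA_ID);
    }

    public String relocate(String table, String row) {
        // Explicit request not to use caching, like relocateRegion(...)
        return locate(table, row, false, DEFAULT_REPLICA_ID);
    }

    String locate(String table, String row, boolean useCache, int replicaId) {
        String key = table + "/" + row + "/" + replicaId;
        if (useCache && cache.containsKey(key)) {
            return cache.get(key);
        }
        String loc = "server-for-" + key;  // stand-in for a meta-table lookup
        cache.put(key, loc);               // refresh the cache either way
        return loc;
    }

    public static void main(String[] args) {
        RegionLocatorSketch l = new RegionLocatorSketch();
        System.out.println(l.locate("t1", "rowA"));   // fills the cache
        System.out.println(l.relocate("t1", "rowA")); // bypasses, then refreshes it
    }
}
```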

[49/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/apidocs/index-all.html
--
diff --git a/apidocs/index-all.html b/apidocs/index-all.html
index d06338b..3fb0348 100644
--- a/apidocs/index-all.html
+++ b/apidocs/index-all.html
@@ -13761,11 +13761,11 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
 QOS attributes: these attributes are used to demarcate RPC call processing
  by different set of handlers.
 
-processOptions(CommandLine) - Method in class org.apache.hadoop.hbase.snapshot.ExportSnapshot
+processOptions(CommandLine) - Method in class org.apache.hadoop.hbase.snapshot.ExportSnapshot
 
-processOptions(CommandLine) - Method in class org.apache.hadoop.hbase.snapshot.SnapshotInfo
+processOptions(CommandLine) - Method in class org.apache.hadoop.hbase.snapshot.SnapshotInfo
 
-processOptions(CommandLine) - Method in class org.apache.hadoop.hbase.util.RegionMover
+processOptions(CommandLine) - Method in class org.apache.hadoop.hbase.util.RegionMover
 
 processParameter(String, String) - Method in class org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/apidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html b/apidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
index 2017e2f..555a256 100644
--- a/apidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
+++ b/apidocs/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
@@ -227,7 +227,7 @@ implements org.apache.hadoop.util.Tool
 
 
 protected void
-processOptions(org.apache.commons.cli.CommandLine cmd)
+processOptions(org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmd)
 This method is called to process the options after they have been parsed.
 
 
@@ -353,13 +353,13 @@ implements org.apache.hadoop.util.Tool
 
 
 Method Detail
 
 
 
 
 processOptions
-protected void processOptions(org.apache.commons.cli.CommandLine cmd)
+protected void processOptions(org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmd)
 Description copied from class: org.apache.hadoop.hbase.util.AbstractHBaseTool
 This method is called to process the options after they have been parsed.
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/apidocs/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html b/apidocs/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
index b3d4ad7..33b3f8a 100644
--- a/apidocs/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
+++ b/apidocs/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
@@ -231,7 +231,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 protected void
-processOptions(org.apache.commons.cli.CommandLine cmd)
+processOptions(org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmd)
 This method is called to process the options after they have been parsed.
 
 
@@ -317,13 +317,13 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 
 
 
 
 processOptions
-protected void processOptions(org.apache.commons.cli.CommandLine cmd)
+protected void processOptions(org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmd)
 Description copied from class: org.apache.hadoop.hbase.util.AbstractHBaseTool
 This method is called to process the options after they have been parsed.
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/apidocs/org/apache/hadoop/hbase/util/RegionMover.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/util/RegionMover.html b/apidocs/org/apache/hadoop/hbase/util/RegionMover.html
index f13e256..691d5cb 100644
--- a/apidocs/org/apache/hadoop/hbase/util/RegionMover.html
+++ b/apidocs/org/apache/hadoop/hbase/util/RegionMover.html
@@ -119,7 +119,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public class RegionMover
+public class RegionMover
 extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 Tool for loading/unloading regions to/from given regionserver This tool can be run from Command
  line directly as a utility. Supports Ack/No Ack mode for loading/unloading operations. Ack mode
@@ -218,7 +218,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 protected void
-processOptions(org.apache.commons.cli.CommandLine cmd)
+processOptions(org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine cmd)
 This method is called to process the options after they have been parsed.
 
 
@@ -266,7 +266,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 MOVE_RETRIES_MAX_KEY
-public static 

[13/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.Builder.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.Builder.html b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.Builder.html
index 50caf18..61bf913 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.Builder.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.SimpleReporter.Builder.html
@@ -45,773 +45,774 @@
 037import java.util.TimeZone;
 038import java.util.concurrent.TimeUnit;
 039
-040import org.apache.commons.cli.CommandLine;
-041import org.apache.commons.cli.CommandLineParser;
-042import org.apache.commons.cli.HelpFormatter;
-043import org.apache.commons.cli.Option;
-044import org.apache.commons.cli.OptionGroup;
-045import org.apache.commons.cli.Options;
-046import org.apache.commons.cli.ParseException;
-047import org.apache.commons.cli.PosixParser;
-048import org.apache.commons.lang3.StringUtils;
-049import org.apache.hadoop.conf.Configuration;
-050import org.apache.hadoop.conf.Configured;
-051import org.apache.hadoop.fs.FileSystem;
-052import org.apache.hadoop.fs.Path;
-053import org.apache.hadoop.hbase.Cell;
-054import org.apache.hadoop.hbase.CellComparator;
-055import org.apache.hadoop.hbase.CellUtil;
-056import org.apache.hadoop.hbase.HBaseConfiguration;
-057import org.apache.hadoop.hbase.HBaseInterfaceAudience;
-058import org.apache.hadoop.hbase.HConstants;
-059import org.apache.hadoop.hbase.HRegionInfo;
-060import org.apache.hadoop.hbase.KeyValue;
-061import org.apache.hadoop.hbase.KeyValueUtil;
-062import org.apache.hadoop.hbase.PrivateCellUtil;
-063import org.apache.hadoop.hbase.TableName;
-064import org.apache.hadoop.hbase.Tag;
-065import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
-066import org.apache.hadoop.hbase.io.hfile.HFile.FileInfo;
-067import org.apache.hadoop.hbase.mob.MobUtils;
-068import org.apache.hadoop.hbase.regionserver.HStoreFile;
-069import org.apache.hadoop.hbase.regionserver.TimeRangeTracker;
-070import org.apache.hadoop.hbase.util.BloomFilter;
-071import org.apache.hadoop.hbase.util.BloomFilterFactory;
-072import org.apache.hadoop.hbase.util.BloomFilterUtil;
-073import org.apache.hadoop.hbase.util.Bytes;
-074import org.apache.hadoop.hbase.util.FSUtils;
-075import org.apache.hadoop.hbase.util.HFileArchiveUtil;
-076import org.apache.hadoop.util.Tool;
-077import org.apache.hadoop.util.ToolRunner;
-078import org.apache.yetus.audience.InterfaceAudience;
-079import org.apache.yetus.audience.InterfaceStability;
-080import org.slf4j.Logger;
-081import org.slf4j.LoggerFactory;
-082
-083import com.codahale.metrics.ConsoleReporter;
-084import com.codahale.metrics.Counter;
-085import com.codahale.metrics.Gauge;
-086import com.codahale.metrics.Histogram;
-087import com.codahale.metrics.Meter;
-088import com.codahale.metrics.MetricFilter;
-089import com.codahale.metrics.MetricRegistry;
-090import com.codahale.metrics.ScheduledReporter;
-091import com.codahale.metrics.Snapshot;
-092import com.codahale.metrics.Timer;
-093
-094/**
-095 * Implements pretty-printing functionality for {@link HFile}s.
-096 */
-097@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
-098@InterfaceStability.Evolving
-099public class HFilePrettyPrinter extends Configured implements Tool {
-100
-101  private static final Logger LOG = LoggerFactory.getLogger(HFilePrettyPrinter.class);
-102
-103  private Options options = new Options();
-104
-105  private boolean verbose;
-106  private boolean printValue;
-107  private boolean printKey;
-108  private boolean shouldPrintMeta;
-109  private boolean printBlockIndex;
-110  private boolean printBlockHeaders;
-111  private boolean printStats;
-112  private boolean checkRow;
-113  private boolean checkFamily;
-114  private boolean isSeekToRow = false;
-115  private boolean checkMobIntegrity = false;
-116  private Map<String, List<Path>> mobFileLocations;
-117  private static final int FOUND_MOB_FILES_CACHE_CAPACITY = 50;
-118  private static final int MISSING_MOB_FILES_CACHE_CAPACITY = 20;
-119  private PrintStream out = System.out;
-120  private PrintStream err = System.err;
-121
-122  /**
-123   * The row which the user wants to specify and print all the KeyValues for.
-124   */
-125  private byte[] row = null;
-126
-127  private List<Path> files = new ArrayList<>();
-128  private int count;
-129
-130  private static final String FOUR_SPACES = "    ";
-131
-132  public HFilePrettyPrinter() {
-133    super();
-134    init();
-135  }
-136
-137  public HFilePrettyPrinter(Configuration conf) {
-138    super(conf);
-139    init();
-140  }
-141
-142  private void init() {
-143    options.addOption("v", 

[39/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html
index a11a57c..00052ea 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html
@@ -113,7 +113,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-private static class HFilePrettyPrinter.KeyValueStatsCollector
+private static class HFilePrettyPrinter.KeyValueStatsCollector
 extends Object
 
 
@@ -263,7 +263,7 @@ extends Object
 
 
 metricsRegistry
-private final com.codahale.metrics.MetricRegistry metricsRegistry
+private final com.codahale.metrics.MetricRegistry metricsRegistry
 
@@ -272,7 +272,7 @@ extends Object
 
 
 metricsOutput
-private final ByteArrayOutputStream metricsOutput
+private final ByteArrayOutputStream metricsOutput
 
@@ -281,7 +281,7 @@ extends Object
 
 
 simpleReporter
-private final HFilePrettyPrinter.SimpleReporter simpleReporter
+private final HFilePrettyPrinter.SimpleReporter simpleReporter
 
@@ -290,7 +290,7 @@ extends Object
 
 
 keyLen
-com.codahale.metrics.Histogram keyLen
+com.codahale.metrics.Histogram keyLen
 
@@ -299,7 +299,7 @@ extends Object
 
 
 valLen
-com.codahale.metrics.Histogram valLen
+com.codahale.metrics.Histogram valLen
 
@@ -308,7 +308,7 @@ extends Object
 
 
 rowSizeBytes
-com.codahale.metrics.Histogram rowSizeBytes
+com.codahale.metrics.Histogram rowSizeBytes
 
@@ -317,7 +317,7 @@ extends Object
 
 
 rowSizeCols
-com.codahale.metrics.Histogram rowSizeCols
+com.codahale.metrics.Histogram rowSizeCols
 
@@ -326,7 +326,7 @@ extends Object
 
 
 curRowBytes
-long curRowBytes
+long curRowBytes
 
@@ -335,7 +335,7 @@ extends Object
 
 
 curRowCols
-long curRowCols
+long curRowCols
 
@@ -344,7 +344,7 @@ extends Object
 
 
 biggestRow
-byte[] biggestRow
+byte[] biggestRow
 
@@ -353,7 +353,7 @@ extends Object
 
 
 prevCell
-private Cell prevCell
+private Cell prevCell
 
@@ -362,7 +362,7 @@ extends Object
 
 
 maxRowBytes
-private long maxRowBytes
+private long maxRowBytes
 
@@ -371,7 +371,7 @@ extends Object
 
 
 curRowKeyLength
-private long curRowKeyLength
+private long curRowKeyLength
 
@@ -388,7 +388,7 @@ extends Object
 
 
 KeyValueStatsCollector
-private KeyValueStatsCollector()
+private KeyValueStatsCollector()
 
@@ -405,7 +405,7 @@ extends Object
 
 
 collect
-public void collect(Cell cell)
+public void collect(Cell cell)
 
@@ -414,7 +414,7 @@ extends Object
 
 
 collectRow
-private void collectRow()
+private void collectRow()
 
@@ -423,7 +423,7 @@ extends Object
 
 
 finish
-public void finish()
+public void finish()
 
@@ -432,7 +432,7 @@ extends Object
 
 
 toString
-public String toString()
+public String toString()
 
 Overrides:
 toString in 
[27/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/util/RegionMover.MoveWithoutAck.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/util/RegionMover.MoveWithoutAck.html b/devapidocs/org/apache/hadoop/hbase/util/RegionMover.MoveWithoutAck.html
index efc1791..6e81af2 100644
--- a/devapidocs/org/apache/hadoop/hbase/util/RegionMover.MoveWithoutAck.html
+++ b/devapidocs/org/apache/hadoop/hbase/util/RegionMover.MoveWithoutAck.html
@@ -117,7 +117,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-private static class RegionMover.MoveWithoutAck
+private static class RegionMover.MoveWithoutAck
 extends Object
 implements Callable<Boolean>
 Move Regions without Acknowledging. Useful in case of RS shutdown as we might want to shut the
@@ -228,7 +228,7 @@ implements Callable<Boolean>
 
 
 admin
-private Admin admin
+private Admin admin
 
@@ -237,7 +237,7 @@ implements Callable<Boolean>
 
 
 region
-private RegionInfo region
+private RegionInfo region
 
@@ -246,7 +246,7 @@ implements Callable<Boolean>
 
 
 targetServer
-private String targetServer
+private String targetServer
 
@@ -255,7 +255,7 @@ implements Callable<Boolean>
 
 
 movedRegions
-private List<RegionInfo> movedRegions
+private List<RegionInfo> movedRegions
 
@@ -264,7 +264,7 @@ implements Callable<Boolean>
 
 
 sourceServer
-private String sourceServer
+private String sourceServer
 
@@ -281,7 +281,7 @@ implements Callable<Boolean>
 
 
 MoveWithoutAck
-public MoveWithoutAck(Admin admin,
+public MoveWithoutAck(Admin admin,
   RegionInfo regionInfo,
   String sourceServer,
   String targetServer,
@@ -302,7 +302,7 @@ implements Callable<Boolean>
 
 
 call
-public Boolean call()
+public Boolean call()
 
 Specified by:
 call in interface Callable<Boolean>

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/util/RegionMover.RegionMoverBuilder.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/util/RegionMover.RegionMoverBuilder.html b/devapidocs/org/apache/hadoop/hbase/util/RegionMover.RegionMoverBuilder.html
index c451701..d4d8db5 100644
--- a/devapidocs/org/apache/hadoop/hbase/util/RegionMover.RegionMoverBuilder.html
+++ b/devapidocs/org/apache/hadoop/hbase/util/RegionMover.RegionMoverBuilder.html
@@ -113,7 +113,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-public static class RegionMover.RegionMoverBuilder
+public static class 
[14/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html
index 50caf18..61bf913 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.KeyValueStatsCollector.html
@@ -45,773 +45,774 @@
 037import java.util.TimeZone;
 038import java.util.concurrent.TimeUnit;
 039

[30/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.html b/devapidocs/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.html
index aec17e1..ee6838f 100644
--- a/devapidocs/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.html
+++ b/devapidocs/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = {"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10};
+var methods = {"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -180,7 +180,7 @@ implements 
-private String
+(package private) String
 regionsZNode
 
 
@@ -415,24 +415,30 @@ implements 
 void
+removeLastSequenceIds(String peerId)
+Remove all the max sequence id record for the given peer.
+
+
+
+void
 removePeerFromHFileRefs(String peerId)
 Remove a peer from hfile reference queue.
 
-
+
 void
 removeQueue(ServerName serverName, String queueId)
 Remove a replication queue for a given regionserver.
 
-
+
 void
 removeReplicatorIfQueueIsEmpty(ServerName serverName)
 Remove the record of region server if the queue is empty.
 
-
+
 void
 removeWAL(ServerName serverName, String queueId,
@@ -440,14 +446,14 @@ implements 
 Remove an WAL file from the given queue for a given regionserver.
 
-
+
 void
 setLastSequenceIds(String peerId, Map<String, Long> lastSeqIds)
 Set the max sequence id of a bunch of regions for a given peer.
 
-
+
 void
 setWALPosition(ServerName serverName, String queueId,
@@ -573,7 +579,7 @@ implements 
 
 regionsZNode
-private final String regionsZNode
+final String regionsZNode
 
@@ -590,7 +596,7 @@ implements 
 
 ZKReplicationQueueStorage
-public ZKReplicationQueueStorage(ZKWatcher zookeeper,
+public ZKReplicationQueueStorage(ZKWatcher zookeeper,
  org.apache.hadoop.conf.Configuration conf)
 
@@ -608,7 +614,7 @@ implements 
 
 getRsNode
-private String getRsNode(ServerName serverName)
+private String getRsNode(ServerName serverName)
 
@@ -617,7 +623,7 @@ implements 
 
 getQueueNode
-private String getQueueNode(
[45/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/checkstyle-aggregate.html
--
diff --git a/checkstyle-aggregate.html b/checkstyle-aggregate.html
index a0c497f..77b9919 100644
--- a/checkstyle-aggregate.html
+++ b/checkstyle-aggregate.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Checkstyle Results
 
@@ -277,7 +277,7 @@
 3595
 0
 0
-15919
+15918
 
 Files
 
@@ -480,7 +480,7 @@
 org/apache/hadoop/hbase/IntegrationTestBackupRestore.java
 0
 0
-2
+3
 
 org/apache/hadoop/hbase/IntegrationTestBase.java
 0
@@ -3625,7 +3625,7 @@
 org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
 0
 0
-34
+33
 
 org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
 0
@@ -9110,7 +9110,7 @@
 org/apache/hadoop/hbase/test/IntegrationTestReplication.java
 0
 0
-8
+7
 
 org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedMultiGetRequestsWithRegionReplicas.java
 0
@@ -10290,12 +10290,12 @@
 sortStaticImportsAlphabetically: true
 groups: 
*,org.apache.hbase.thirdparty,org.apache.hadoop.hbase.shaded
 option: top
-1227
+1225
 Error
 
 
 http://checkstyle.sourceforge.net/config_imports.html#RedundantImport;>RedundantImport
-27
+28
 Error
 
 
@@ -10320,12 +10320,12 @@
 http://checkstyle.sourceforge.net/config_javadoc.html#JavadocTagContinuationIndentation;>JavadocTagContinuationIndentation
 
 offset: 2
-798
+784
 Error
 
 
 http://checkstyle.sourceforge.net/config_javadoc.html#NonEmptyAtclauseDescription;>NonEmptyAtclauseDescription
-3835
+3849
 Error
 
 misc
@@ -11329,19 +11329,19 @@
 indentation
 Indentation
 'member def modifier' have incorrect indentation level 3, expected level should be 4.
-78
+79
 
 Error
 blocks
 NeedBraces
 'if' construct must use '{}'s.
-182
+183
 
 Error
 blocks
 NeedBraces
 'if' construct must use '{}'s.
-185
+186
 
 org/apache/hadoop/hbase/CoordinatedStateManager.java
 
@@ -14509,350 +14509,356 @@
 imports
 ImportOrder
 Wrong order for 'org.junit.After' import.
-48
+58
 
 Error
+imports
+RedundantImport
+Duplicate import to line 56 - 
org.apache.hbase.thirdparty.com.google.common.collect.Lists.
+66
+
+Error
 blocks
 NeedBraces
 'if' construct must use '{}'s.
-238
+338
 
 org/apache/hadoop/hbase/IntegrationTestBase.java
 
-
+
 Severity
 Category
 Rule
 Message
 Line
-
+
 Error
 javadoc
-JavadocTagContinuationIndentation
+NonEmptyAtclauseDescription
 Javadoc comment at column 26 has parse error. Missed HTML close tag 'arg'. 
Sometimes it means that close tag missed for one of previous tags.
-43
+44
 
 org/apache/hadoop/hbase/IntegrationTestDDLMasterFailover.java
 
-
+
 Severity
 Category
 Rule
 Message
 Line
-
+
 Error
 blocks
 LeftCurly
 '{' at column 5 should be on the previous line.
 393
-
+
 Error
 whitespace
 ParenPad
 '(' is followed by whitespace.
 416
-
+
 Error
 whitespace
 ParenPad
 ')' is preceded with whitespace.
 503
-
+
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 102).
 664
-
+
 Error
 indentation
 Indentation
 'array initialization' child have incorrect indentation level 16, expected 
level should be one of the following: 10, 43, 44.
 702
-
+
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 101).
 702
-
+
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 102).
 717
-
+
 Error
 whitespace
 ParenPad
 ')' is preceded with whitespace.
 777
-
+
 Error
 coding
 MissingSwitchDefault
 switch without default clause.
 850
-
+
 Error
 indentation
 Indentation
 'case' child have incorrect indentation level 10, expected level should be 
12.
 851
-
+
 Error
 indentation
 Indentation
 'block' child have incorrect indentation level 12, expected level should 
be 14.
 852
-
+
 Error
 indentation
 Indentation
 'block' child have incorrect indentation level 12, expected level should 
be 14.
 853
-
+
 Error
 indentation
 Indentation
 'case' child have incorrect indentation level 10, expected level should be 
12.
 854
-
+
 Error
 indentation
 Indentation
 'block' child have incorrect indentation level 12, expected level should 
be 14.
 855
-
+
 Error
 indentation
 Indentation
 'block' child have incorrect indentation level 12, expected level should 
be 14.
 856
-
+
 Error
 indentation
 Indentation
 'case' child have incorrect indentation level 10, expected level should be 
12.
 857
-
+
 Error
 indentation
 Indentation
 'block' child have incorrect indentation level 12, expected level should 
be 14.
 858
-
+
 Error
 indentation
 Indentation
 'block' child have incorrect indentation level 12, expected level should 
be 14.
 859
-
+
 Error
 indentation
 Indentation
 'case' child have incorrect indentation level 10, expected level should be 
12.
 860
-
+
 Error
 indentation
 Indentation
 'if' have incorrect indentation level 12, expected level should be 14.
 863
-
+
 Error
 indentation
 Indentation
 'if' child have incorrect indentation level 14, expected level should be 
16.
 864
-
+
 Error
 indentation
 Indentation
 'if rcurly' 

[18/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
index d7aa8b1..98a45a0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/ConnectionImplementation.MasterServiceStubMaker.html
@@ -680,1330 +680,1333 @@
 672}
673List<HRegionLocation> locations = new ArrayList<>();
 674for (RegionInfo regionInfo : regions) 
{
-675  RegionLocations list = 
locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
-676  if (list != null) {
-677for (HRegionLocation loc : 
list.getRegionLocations()) {
-678  if (loc != null) {
-679locations.add(loc);
-680  }
-681}
-682  }
-683}
-684return locations;
-685  }
-686
-687  @Override
-688  public HRegionLocation 
locateRegion(final TableName tableName, final byte[] row)
-689  throws IOException {
-690RegionLocations locations = 
locateRegion(tableName, row, true, true);
-691return locations == null ? null : 
locations.getRegionLocation();
-692  }
-693
-694  @Override
-695  public HRegionLocation 
relocateRegion(final TableName tableName, final byte[] row)
-696  throws IOException {
-697RegionLocations locations =
-698  relocateRegion(tableName, row, 
RegionReplicaUtil.DEFAULT_REPLICA_ID);
-699return locations == null ? null
-700  : 
locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
-701  }
-702
-703  @Override
-704  public RegionLocations 
relocateRegion(final TableName tableName,
-705  final byte [] row, int replicaId) 
throws IOException{
-706// Since this is an explicit request 
not to use any caching, finding
-707// disabled tables should not be 
desirable.  This will ensure that an exception is thrown when
-708// the first time a disabled table is 
interacted with.
-709if 
(!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
-710  throw new 
TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
-711}
-712
-713return locateRegion(tableName, row, 
false, true, replicaId);
-714  }
+675  if 
(!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+676continue;
+677  }
+678  RegionLocations list = 
locateRegion(tableName, regionInfo.getStartKey(), useCache, true);
+679  if (list != null) {
+680for (HRegionLocation loc : 
list.getRegionLocations()) {
+681  if (loc != null) {
+682locations.add(loc);
+683  }
+684}
+685  }
+686}
+687return locations;
+688  }
+689
+690  @Override
+691  public HRegionLocation 
locateRegion(final TableName tableName, final byte[] row)
+692  throws IOException {
+693RegionLocations locations = 
locateRegion(tableName, row, true, true);
+694return locations == null ? null : 
locations.getRegionLocation();
+695  }
+696
+697  @Override
+698  public HRegionLocation 
relocateRegion(final TableName tableName, final byte[] row)
+699  throws IOException {
+700RegionLocations locations =
+701  relocateRegion(tableName, row, 
RegionReplicaUtil.DEFAULT_REPLICA_ID);
+702return locations == null ? null
+703  : 
locations.getRegionLocation(RegionReplicaUtil.DEFAULT_REPLICA_ID);
+704  }
+705
+706  @Override
+707  public RegionLocations 
relocateRegion(final TableName tableName,
+708  final byte [] row, int replicaId) 
throws IOException{
+709// Since this is an explicit request 
not to use any caching, finding
+710// disabled tables should not be 
desirable.  This will ensure that an exception is thrown when
+711// the first time a disabled table is 
interacted with.
+712if 
(!tableName.equals(TableName.META_TABLE_NAME) && isTableDisabled(tableName)) {
+713  throw new 
TableNotEnabledException(tableName.getNameAsString() + " is disabled.");
+714}
 715
-716  @Override
-717  public RegionLocations 
locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-718  boolean retry) throws IOException 
{
-719return locateRegion(tableName, row, 
useCache, retry, RegionReplicaUtil.DEFAULT_REPLICA_ID);
-720  }
-721
-722  @Override
-723  public RegionLocations 
locateRegion(final TableName tableName, final byte[] row, boolean useCache,
-724  boolean retry, int replicaId) 
throws IOException {
-725checkClosed();
-726if (tableName == null || 
tableName.getName().length == 0) {
-727  throw new 
IllegalArgumentException("table name cannot be null or zero length");
-728}
-729

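The replacement hunk above adds a guard that skips non-default region replicas before calling locateRegion for each RegionInfo, since the RegionLocations returned for a default replica already cover its secondaries. A minimal, self-contained sketch of that filtering step — using toy stand-in types, not the real org.apache.hadoop.hbase.client classes:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-ins for RegionInfo; the real HBase types carry much more state.
public class ReplicaFilterSketch {
    static final int DEFAULT_REPLICA_ID = 0;

    record RegionInfo(String startKey, int replicaId) {}

    // Mirrors the added guard: only default replicas drive location lookups;
    // secondary replicas are skipped with `continue`, as in the patch.
    static List<RegionInfo> defaultReplicasOnly(List<RegionInfo> regions) {
        List<RegionInfo> out = new ArrayList<>();
        for (RegionInfo ri : regions) {
            if (ri.replicaId() != DEFAULT_REPLICA_ID) {
                continue; // non-default replica: do not look it up separately
            }
            out.add(ri);
        }
        return out;
    }

    public static void main(String[] args) {
        List<RegionInfo> regions = List.of(
            new RegionInfo("a", 0), new RegionInfo("a", 1), new RegionInfo("m", 0));
        List<RegionInfo> kept = defaultReplicasOnly(regions);
        if (kept.size() != 2) throw new AssertionError("expected 2, got " + kept.size());
        System.out.println("kept=" + kept.size());
    }
}
```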
[46/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/bulk-loads.html
--
diff --git a/bulk-loads.html b/bulk-loads.html
index 9b97ee8..f4df371 100644
--- a/bulk-loads.html
+++ b/bulk-loads.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase   
   Bulk Loads in Apache HBase (TM)
@@ -299,7 +299,7 @@ under the License. -->
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-03-26
+  Last Published: 
2018-03-27
 
 
 



[32/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html 
b/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html
index ad31b71..8ffb7dd 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.html
@@ -115,7 +115,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class RegionCoprocessorHost
+public class RegionCoprocessorHost
extends CoprocessorHost<RegionCoprocessor, RegionCoprocessorEnvironment>
 Implements the coprocessor environment and runtime support 
for coprocessors
  loaded within a Region.
@@ -213,7 +213,7 @@ extends 
-private static 
org.apache.commons.collections4.map.ReferenceMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in java.util.concurrent">ConcurrentMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
+private static 
org.apache.hbase.thirdparty.org.apache.commons.collections4.map.ReferenceMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in java.util.concurrent">ConcurrentMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
 SHARED_DATA_MAP
 
 
@@ -796,7 +796,7 @@ extends 
 
 LOG
-private static finalorg.slf4j.Logger LOG
+private static finalorg.slf4j.Logger LOG
 
 
 
@@ -805,7 +805,7 @@ extends 
 
 SHARED_DATA_MAP
-private static 
finalorg.apache.commons.collections4.map.ReferenceMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in java.util.concurrent">ConcurrentMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object SHARED_DATA_MAP
+private static 
finalorg.apache.hbase.thirdparty.org.apache.commons.collections4.map.ReferenceMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in java.util.concurrent">ConcurrentMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object SHARED_DATA_MAP
 
 
 
@@ -814,7 +814,7 @@ extends 
 
 hasCustomPostScannerFilterRow
-private finalboolean hasCustomPostScannerFilterRow
+private finalboolean hasCustomPostScannerFilterRow
 
 
 
@@ -823,7 +823,7 @@ extends 
 
 rsServices
-RegionServerServices rsServices
+RegionServerServices rsServices
 The region server services
 
 
@@ -833,7 +833,7 @@ extends 
 
 region
-HRegion region
+HRegion region
 The region
 
 
@@ -843,7 +843,7 @@ extends 
 
 regionObserverGetter
-private CoprocessorHost.ObserverGetter<RegionCoprocessor, RegionObserver> regionObserverGetter
+private CoprocessorHost.ObserverGetter<RegionCoprocessor, RegionObserver> regionObserverGetter
 
 
 
@@ -852,7 +852,7 @@ extends 
 
 endpointObserverGetter
-private CoprocessorHost.ObserverGetter<RegionCoprocessor, EndpointObserver> endpointObserverGetter
+private CoprocessorHost.ObserverGetter<RegionCoprocessor, EndpointObserver> endpointObserverGetter
 
 
 
@@ -869,7 +869,7 @@ extends 
 
 RegionCoprocessorHost
-publicRegionCoprocessorHost(HRegionregion,
+publicRegionCoprocessorHost(HRegionregion,
  RegionServerServicesrsServices,
  
org.apache.hadoop.conf.Configurationconf)
 Constructor
@@ -895,7 +895,7 @@ extends 
 
 getTableCoprocessorAttrsFromSchema

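The RegionCoprocessorHost section above documents a static SHARED_DATA_MAP — a ReferenceMap from coprocessor class name to a ConcurrentMap — so that every region-level instance of the same coprocessor class shares one state map; the diff only relocates the ReferenceMap type to the shaded org.apache.hbase.thirdparty package. A rough sketch of that sharing pattern, with a plain ConcurrentHashMap standing in for the weak-reference ReferenceMap:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the SHARED_DATA_MAP idea: one shared ConcurrentMap per coprocessor
// class name, handed to every instance regardless of which region loaded it.
// The real host uses a commons-collections4 ReferenceMap (now shaded under
// org.apache.hbase.thirdparty) so unused entries can be reclaimed.
public class SharedDataSketch {
    private static final ConcurrentMap<String, ConcurrentMap<String, Object>> SHARED =
        new ConcurrentHashMap<>();

    static ConcurrentMap<String, Object> sharedDataFor(String coprocClassName) {
        // Same key -> same map instance, created on first request.
        return SHARED.computeIfAbsent(coprocClassName, k -> new ConcurrentHashMap<>());
    }

    public static void main(String[] args) {
        // "com.example.MyObserver" is a hypothetical coprocessor class name.
        Map<String, Object> a = sharedDataFor("com.example.MyObserver");
        Map<String, Object> b = sharedDataFor("com.example.MyObserver");
        a.put("counter", 42);
        if (a != b || !b.get("counter").equals(42)) throw new AssertionError();
        System.out.println("shared=" + (a == b));
    }
}
```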
[11/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.html
index 50caf18..61bf913 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.html
@@ -45,773 +45,774 @@
 037import java.util.TimeZone;
 038import java.util.concurrent.TimeUnit;
 039
-040import 
org.apache.commons.cli.CommandLine;
-041import 
org.apache.commons.cli.CommandLineParser;
-042import 
org.apache.commons.cli.HelpFormatter;
-043import org.apache.commons.cli.Option;
-044import 
org.apache.commons.cli.OptionGroup;
-045import org.apache.commons.cli.Options;
-046import 
org.apache.commons.cli.ParseException;
-047import 
org.apache.commons.cli.PosixParser;
-048import 
org.apache.commons.lang3.StringUtils;
-049import 
org.apache.hadoop.conf.Configuration;
-050import 
org.apache.hadoop.conf.Configured;
-051import org.apache.hadoop.fs.FileSystem;
-052import org.apache.hadoop.fs.Path;
-053import org.apache.hadoop.hbase.Cell;
-054import 
org.apache.hadoop.hbase.CellComparator;
-055import 
org.apache.hadoop.hbase.CellUtil;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.HBaseInterfaceAudience;
-058import 
org.apache.hadoop.hbase.HConstants;
-059import 
org.apache.hadoop.hbase.HRegionInfo;
-060import 
org.apache.hadoop.hbase.KeyValue;
-061import 
org.apache.hadoop.hbase.KeyValueUtil;
-062import 
org.apache.hadoop.hbase.PrivateCellUtil;
-063import 
org.apache.hadoop.hbase.TableName;
-064import org.apache.hadoop.hbase.Tag;
-065import 
org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
-066import 
org.apache.hadoop.hbase.io.hfile.HFile.FileInfo;
-067import 
org.apache.hadoop.hbase.mob.MobUtils;
-068import 
org.apache.hadoop.hbase.regionserver.HStoreFile;
-069import 
org.apache.hadoop.hbase.regionserver.TimeRangeTracker;
-070import 
org.apache.hadoop.hbase.util.BloomFilter;
-071import 
org.apache.hadoop.hbase.util.BloomFilterFactory;
-072import 
org.apache.hadoop.hbase.util.BloomFilterUtil;
-073import 
org.apache.hadoop.hbase.util.Bytes;
-074import 
org.apache.hadoop.hbase.util.FSUtils;
-075import 
org.apache.hadoop.hbase.util.HFileArchiveUtil;
-076import org.apache.hadoop.util.Tool;
-077import 
org.apache.hadoop.util.ToolRunner;
-078import 
org.apache.yetus.audience.InterfaceAudience;
-079import 
org.apache.yetus.audience.InterfaceStability;
-080import org.slf4j.Logger;
-081import org.slf4j.LoggerFactory;
-082
-083import 
com.codahale.metrics.ConsoleReporter;
-084import com.codahale.metrics.Counter;
-085import com.codahale.metrics.Gauge;
-086import com.codahale.metrics.Histogram;
-087import com.codahale.metrics.Meter;
-088import 
com.codahale.metrics.MetricFilter;
-089import 
com.codahale.metrics.MetricRegistry;
-090import 
com.codahale.metrics.ScheduledReporter;
-091import com.codahale.metrics.Snapshot;
-092import com.codahale.metrics.Timer;
-093
-094/**
-095 * Implements pretty-printing 
functionality for {@link HFile}s.
-096 */
-097@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
-098@InterfaceStability.Evolving
-099public class HFilePrettyPrinter extends 
Configured implements Tool {
-100
-101  private static final Logger LOG = 
LoggerFactory.getLogger(HFilePrettyPrinter.class);
-102
-103  private Options options = new 
Options();
-104
-105  private boolean verbose;
-106  private boolean printValue;
-107  private boolean printKey;
-108  private boolean shouldPrintMeta;
-109  private boolean printBlockIndex;
-110  private boolean printBlockHeaders;
-111  private boolean printStats;
-112  private boolean checkRow;
-113  private boolean checkFamily;
-114  private boolean isSeekToRow = false;
-115  private boolean checkMobIntegrity = 
false;
-116  private Map<String, List<Path>> mobFileLocations;
-117  private static final int 
FOUND_MOB_FILES_CACHE_CAPACITY = 50;
-118  private static final int 
MISSING_MOB_FILES_CACHE_CAPACITY = 20;
-119  private PrintStream out = System.out;
-120  private PrintStream err = System.err;
-121
-122  /**
-123   * The row which the user wants to 
specify and print all the KeyValues for.
-124   */
-125  private byte[] row = null;
-126
-127  private List<Path> files = new ArrayList<>();
-128  private int count;
-129
-130  private static final String FOUR_SPACES = "    ";
-131
-132  public HFilePrettyPrinter() {
-133super();
-134init();
-135  }
-136
-137  public HFilePrettyPrinter(Configuration 
conf) {
-138super(conf);
-139init();
-140  }
-141
-142  private void init() {
-143options.addOption("v", "verbose", 
false,
-144"Verbose output; emits file and 
meta data delimiters");
-145options.addOption("p", 

[07/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.RedirectServlet.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.RedirectServlet.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.RedirectServlet.html
index 79bf967..c8b113b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.RedirectServlet.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMaster.RedirectServlet.html
@@ -115,3514 +115,3517 @@
 107import 
org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 108import 
org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
 109import 
org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory;
-110import 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
-111import 
org.apache.hadoop.hbase.master.cleaner.LogCleaner;
-112import 
org.apache.hadoop.hbase.master.cleaner.ReplicationBarrierCleaner;
-113import 
org.apache.hadoop.hbase.master.locking.LockManager;
-114import 
org.apache.hadoop.hbase.master.normalizer.NormalizationPlan;
-115import 
org.apache.hadoop.hbase.master.normalizer.NormalizationPlan.PlanType;
-116import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizer;
-117import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizerChore;
-118import 
org.apache.hadoop.hbase.master.normalizer.RegionNormalizerFactory;
-119import 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure;
-120import 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure;
-121import 
org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure;
-122import 
org.apache.hadoop.hbase.master.procedure.DisableTableProcedure;
-123import 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure;
-124import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureConstants;
-125import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv;
-126import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler;
-127import 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
-128import 
org.apache.hadoop.hbase.master.procedure.ModifyTableProcedure;
-129import 
org.apache.hadoop.hbase.master.procedure.ProcedurePrepareLatch;
-130import 
org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure;
-131import 
org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure;
-132import 
org.apache.hadoop.hbase.master.replication.AddPeerProcedure;
-133import 
org.apache.hadoop.hbase.master.replication.DisablePeerProcedure;
-134import 
org.apache.hadoop.hbase.master.replication.EnablePeerProcedure;
-135import 
org.apache.hadoop.hbase.master.replication.ModifyPeerProcedure;
-136import 
org.apache.hadoop.hbase.master.replication.RemovePeerProcedure;
-137import 
org.apache.hadoop.hbase.master.replication.ReplicationPeerManager;
-138import 
org.apache.hadoop.hbase.master.replication.UpdatePeerConfigProcedure;
-139import 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
-140import 
org.apache.hadoop.hbase.mob.MobConstants;
-141import 
org.apache.hadoop.hbase.monitoring.MemoryBoundedLogMessageBuffer;
-142import 
org.apache.hadoop.hbase.monitoring.MonitoredTask;
-143import 
org.apache.hadoop.hbase.monitoring.TaskMonitor;
-144import 
org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost;
-145import 
org.apache.hadoop.hbase.procedure.flush.MasterFlushTableProcedureManager;
-146import 
org.apache.hadoop.hbase.procedure2.LockedResource;
-147import 
org.apache.hadoop.hbase.procedure2.Procedure;
-148import 
org.apache.hadoop.hbase.procedure2.ProcedureEvent;
-149import 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
-150import 
org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher.RemoteProcedure;
-151import 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException;
-152import 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore;
-153import 
org.apache.hadoop.hbase.quotas.MasterQuotaManager;
-154import 
org.apache.hadoop.hbase.quotas.MasterSpaceQuotaObserver;
-155import 
org.apache.hadoop.hbase.quotas.QuotaObserverChore;
-156import 
org.apache.hadoop.hbase.quotas.QuotaUtil;
-157import 
org.apache.hadoop.hbase.quotas.SnapshotQuotaObserverChore;
-158import 
org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotNotifier;
-159import 
org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotNotifierFactory;
-160import 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine;
-161import 
org.apache.hadoop.hbase.regionserver.HRegionServer;
-162import 
org.apache.hadoop.hbase.regionserver.HStore;
-163import 
org.apache.hadoop.hbase.regionserver.RSRpcServices;
-164import 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost;
-165import 
org.apache.hadoop.hbase.regionserver.RegionSplitPolicy;
-166import 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy;
-167import 

[26/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/wal/WALPrettyPrinter.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/wal/WALPrettyPrinter.html 
b/devapidocs/org/apache/hadoop/hbase/wal/WALPrettyPrinter.html
index cc33120..3cc25e3 100644
--- a/devapidocs/org/apache/hadoop/hbase/wal/WALPrettyPrinter.html
+++ b/devapidocs/org/apache/hadoop/hbase/wal/WALPrettyPrinter.html
@@ -111,7 +111,7 @@ var activeTableTab = "activeTableTab";
 
 @InterfaceAudience.LimitedPrivate(value="Tools")
  @InterfaceStability.Evolving
-public class WALPrettyPrinter
+public class WALPrettyPrinter
 extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
 WALPrettyPrinter prints the contents of a given WAL with a 
variety of
  options affecting formatting and extent of content.
@@ -327,7 +327,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 outputValues
-privateboolean outputValues
+privateboolean outputValues
 
 
 
@@ -336,7 +336,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 outputJSON
-privateboolean outputJSON
+privateboolean outputJSON
 
 
 
@@ -345,7 +345,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 sequence
-privatelong sequence
+privatelong sequence
 
 
 
@@ -354,7 +354,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 region
-privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String region
+privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String region
 
 
 
@@ -363,7 +363,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 row
-privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String row
+privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String row
 
 
 
@@ -372,7 +372,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 persistentOutput
-privateboolean persistentOutput
+privateboolean persistentOutput
 
 
 
@@ -381,7 +381,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 firstTxn
-privateboolean firstTxn
+privateboolean firstTxn
 
 
 
@@ -390,7 +390,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 out
-privatehttps://docs.oracle.com/javase/8/docs/api/java/io/PrintStream.html?is-external=true;
 title="class or interface in java.io">PrintStream out
+privatehttps://docs.oracle.com/javase/8/docs/api/java/io/PrintStream.html?is-external=true;
 title="class or interface in java.io">PrintStream out
 
 
 
@@ -399,7 +399,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 MAPPER
-private static finalcom.fasterxml.jackson.databind.ObjectMapper MAPPER
+private static finalcom.fasterxml.jackson.databind.ObjectMapper MAPPER
 
 
 
@@ -416,7 +416,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 WALPrettyPrinter
-publicWALPrettyPrinter()
+publicWALPrettyPrinter()
 Basic constructor that simply initializes values to 
reasonable defaults.
 
 
@@ -426,7 +426,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 WALPrettyPrinter
-publicWALPrettyPrinter(booleanoutputValues,
+publicWALPrettyPrinter(booleanoutputValues,
 booleanoutputJSON,
 longsequence,
 https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringregion,
@@ -467,7 +467,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 enableValues
-publicvoidenableValues()
+publicvoidenableValues()
 turns value output on
 
 
@@ -477,7 +477,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 disableValues
-publicvoiddisableValues()
+publicvoiddisableValues()
 turns value output off
 
 
@@ -487,7 +487,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 enableJSON
-publicvoidenableJSON()
+publicvoidenableJSON()
 turns JSON output on
 
 
@@ -497,7 +497,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 disableJSON
-publicvoiddisableJSON()
+publicvoiddisableJSON()
 turns JSON output off, and turns on "pretty strings" for 
human consumption
 
 
@@ -507,7 +507,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 setSequenceFilter
-publicvoidsetSequenceFilter(longsequence)
+publicvoidsetSequenceFilter(longsequence)
 sets the region by which output will 

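The WALPrettyPrinter javadoc above shows a flag-driven design: enableValues()/disableValues() control the extent of content, while enableJSON()/disableJSON() switch between JSON output and "pretty strings" for human consumption. A toy illustration of that toggle style — not the real org.apache.hadoop.hbase.wal.WALPrettyPrinter output format:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy stand-in for the enableJSON()/enableValues() toggles documented above.
public class WalPrintSketch {
    private boolean outputJSON = false;   // formatting
    private boolean outputValues = false; // extent of content

    public void enableJSON() { outputJSON = true; }
    public void disableJSON() { outputJSON = false; }
    public void enableValues() { outputValues = true; }
    public void disableValues() { outputValues = false; }

    String format(Map<String, String> entry) {
        Map<String, String> view = new LinkedHashMap<>(entry);
        if (!outputValues) view.remove("value"); // drop cell values unless asked
        StringBuilder sb = new StringBuilder();
        if (outputJSON) {
            sb.append('{');
            boolean first = true;
            for (Map.Entry<String, String> e : view.entrySet()) {
                if (!first) sb.append(',');
                sb.append('"').append(e.getKey()).append("\":\"")
                  .append(e.getValue()).append('"');
                first = false;
            }
            sb.append('}');
        } else {
            for (Map.Entry<String, String> e : view.entrySet()) {
                sb.append(e.getKey()).append('=').append(e.getValue()).append(' ');
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        WalPrintSketch p = new WalPrintSketch();
        Map<String, String> entry = new LinkedHashMap<>();
        entry.put("region", "r1");
        entry.put("value", "v");
        if (!p.format(entry).equals("region=r1")) throw new AssertionError();
        p.enableJSON();
        p.enableValues();
        if (!p.format(entry).equals("{\"region\":\"r1\",\"value\":\"v\"}"))
            throw new AssertionError();
        System.out.println("ok");
    }
}
```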
[37/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
 
b/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
index 0b237ab..170f400 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
@@ -132,7 +132,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-public static class HMasterCommandLine.LocalHMaster
+public static class HMasterCommandLine.LocalHMaster
 extends HMaster
 
 
@@ -318,7 +318,7 @@ extends 
 
 zkcluster
-privateMiniZooKeeperCluster zkcluster
+privateMiniZooKeeperCluster zkcluster
 
 
 
@@ -335,7 +335,7 @@ extends 
 
 LocalHMaster
-publicLocalHMaster(org.apache.hadoop.conf.Configurationconf)
+publicLocalHMaster(org.apache.hadoop.conf.Configurationconf)
  throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException,
 org.apache.zookeeper.KeeperException,
 https://docs.oracle.com/javase/8/docs/api/java/lang/InterruptedException.html?is-external=true;
 title="class or interface in java.lang">InterruptedException
@@ -361,7 +361,7 @@ extends 
 
 run
-publicvoidrun()
+publicvoidrun()
 Description copied from 
class:HRegionServer
 The HRegionServer sticks in this loop until closed.
 
@@ -378,7 +378,7 @@ extends 
 
 setZKCluster
-voidsetZKCluster(MiniZooKeeperClusterzkcluster)
+voidsetZKCluster(MiniZooKeeperClusterzkcluster)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.html 
b/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.html
index a5042b8..38f04e5 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/HMasterCommandLine.html
@@ -124,7 +124,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class HMasterCommandLine
+public class HMasterCommandLine
 extends ServerCommandLine
 
 
@@ -282,7 +282,7 @@ extends 
 
 LOG
-private static finalorg.slf4j.Logger LOG
+private static finalorg.slf4j.Logger LOG
 
 
 
@@ -291,7 +291,7 @@ extends 
 
 USAGE
-private static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String USAGE
+private static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String USAGE
 
 See Also:
 Constant
 Field Values
@@ -304,7 +304,7 @@ extends 
 
 masterClass
-private finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/Class.html?is-external=true;
 title="class or interface in java.lang">Class<? extends HMaster> masterClass
+private finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/Class.html?is-external=true;
 title="class or interface in java.lang">Class<? extends HMaster> masterClass
 
 
 
@@ -321,7 +321,7 @@ extends 
 
 HMasterCommandLine
-public HMasterCommandLine(Class<? extends HMaster> masterClass)
+public HMasterCommandLine(Class<? extends HMaster> masterClass)
 
 
 
@@ -338,7 +338,7 @@ extends 
 
 getUsage
-protected String getUsage()
+protected String getUsage()
 Description copied from class: ServerCommandLine
 Implementing subclasses should return a usage string to print out.
 
@@ -353,7 +353,7 @@ extends 
 
 run
-public int run(String[] args)
+public int run(String[] args)
 throws Exception
 
 Throws:
@@ -367,7 +367,7 @@ extends 
 
 startMaster
-private int startMaster()
+private int startMaster()
 
 
 
@@ -376,7 +376,7 @@ extends 
 
 stopMaster
-private int stopMaster()
+private int stopMaster()
 
 
 

[28/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/org/apache/hadoop/hbase/thrift2/ThriftUtilities.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/thrift2/ThriftUtilities.html 
b/devapidocs/org/apache/hadoop/hbase/thrift2/ThriftUtilities.html
index 4f57e7c..338e9a6 100644
--- a/devapidocs/org/apache/hadoop/hbase/thrift2/ThriftUtilities.html
+++ b/devapidocs/org/apache/hadoop/hbase/thrift2/ThriftUtilities.html
@@ -110,7 +110,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public final class ThriftUtilities
+public final class ThriftUtilities
 extends Object
 
 
@@ -274,7 +274,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 ThriftUtilities
-private ThriftUtilities()
+private ThriftUtilities()
 
 
 
@@ -291,7 +291,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 getFromThrift
-public static Get getFromThrift(org.apache.hadoop.hbase.thrift2.generated.TGet in)
+public static Get getFromThrift(org.apache.hadoop.hbase.thrift2.generated.TGet in)
  throws IOException
 Creates a Get (HBase) from a TGet (Thrift).
 
@@ -312,7 +312,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 getsFromThrift
-public static List<Get> getsFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TGet> in)
+public static List<Get> getsFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TGet> in)
 throws IOException
 Converts multiple TGets (Thrift) into a list of Gets (HBase).
 
@@ -333,7 +333,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 resultFromHBase
-public static org.apache.hadoop.hbase.thrift2.generated.TResult resultFromHBase(Result in)
+public static org.apache.hadoop.hbase.thrift2.generated.TResult resultFromHBase(Result in)
 Creates a TResult (Thrift) from a Result (HBase).
 
 Parameters:
@@ -349,7 +349,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 resultsFromHBase
-public static List<org.apache.hadoop.hbase.thrift2.generated.TResult> resultsFromHBase(Result[] in)
+public static List<org.apache.hadoop.hbase.thrift2.generated.TResult> resultsFromHBase(Result[] in)
 Converts multiple Results (HBase) into a list of TResults (Thrift).
 
 Parameters:
@@ -367,7 +367,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 putFromThrift
-public static Put putFromThrift(org.apache.hadoop.hbase.thrift2.generated.TPut in)
+public static Put putFromThrift(org.apache.hadoop.hbase.thrift2.generated.TPut in)
 Creates a Put (HBase) from a TPut (Thrift)
 
 Parameters:
@@ -383,7 +383,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 putsFromThrift
-public static List<Put> putsFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TPut> in)
+public static List<Put> putsFromThrift(List<org.apache.hadoop.hbase.thrift2.generated.TPut> in)
 Converts multiple TPuts (Thrift) into a list of Puts (HBase).
 
 Parameters:
@@ -401,7 +401,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 deleteFromThrift
-public static Delete deleteFromThrift(org.apache.hadoop.hbase.thrift2.generated.TDelete in)
+public static Delete deleteFromThrift(org.apache.hadoop.hbase.thrift2.generated.TDelete in)
 Creates a Delete 
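The conversion helpers above all share one shape: a single-element converter (getFromThrift, putFromThrift, resultFromHBase, ...) plus a list variant that maps it over the input. A minimal, self-contained sketch of that pattern, assuming nothing from HBase or Thrift — TGetStub, GetStub, and ThriftConvert below are hypothetical stand-ins, not the real classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical stand-ins for the Thrift and HBase request types.
final class TGetStub { final byte[] row; TGetStub(byte[] row) { this.row = row; } }
final class GetStub  { final byte[] row; GetStub(byte[] row)  { this.row = row; } }

final class ThriftConvert {
  private ThriftConvert() {}   // non-instantiable utility class, like ThriftUtilities

  // Single-element conversion, analogous to getFromThrift(TGet).
  static GetStub getFromThrift(TGetStub in) {
    return new GetStub(in.row);
  }

  // List conversion, analogous to getsFromThrift(List<TGet>):
  // apply the single-element converter to each element, preserving order.
  static <I, O> List<O> convertAll(List<I> in, Function<I, O> f) {
    List<O> out = new ArrayList<>(in.size());
    for (I e : in) {
      out.add(f.apply(e));
    }
    return out;
  }
}
```

Usage follows directly, e.g. `ThriftConvert.convertAll(tgets, ThriftConvert::getFromThrift)`; each list helper in ThriftUtilities is this loop specialized to one converter.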

[50/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/apache_hbase_reference_guide.pdf
--
diff --git a/apache_hbase_reference_guide.pdf b/apache_hbase_reference_guide.pdf
index 03d1b36..2c0540e 100644
--- a/apache_hbase_reference_guide.pdf
+++ b/apache_hbase_reference_guide.pdf
@@ -5,16 +5,16 @@
 /Author (Apache HBase Team)
 /Creator (Asciidoctor PDF 1.5.0.alpha.15, based on Prawn 2.2.2)
 /Producer (Apache HBase Team)
-/ModDate (D:20180326144553+00'00')
-/CreationDate (D:20180326144553+00'00')
+/ModDate (D:20180327144547+00'00')
+/CreationDate (D:20180327144547+00'00')
 >>
 endobj
 2 0 obj
 << /Type /Catalog
 /Pages 3 0 R
 /Names 26 0 R
-/Outlines 4625 0 R
-/PageLabels 4850 0 R
+/Outlines 4515 0 R
+/PageLabels 4738 0 R
 /PageMode /UseOutlines
 /OpenAction [7 0 R /FitH 842.89]
 /ViewerPreferences << /DisplayDocTitle true
@@ -23,8 +23,8 @@ endobj
 endobj
 3 0 obj
 << /Type /Pages
-/Count 716
-/Kids [7 0 R 12 0 R 14 0 R 16 0 R 18 0 R 20 0 R 22 0 R 24 0 R 44 0 R 47 0 R 50 
0 R 54 0 R 63 0 R 66 0 R 69 0 R 71 0 R 76 0 R 80 0 R 83 0 R 89 0 R 91 0 R 94 0 
R 96 0 R 103 0 R 109 0 R 114 0 R 116 0 R 133 0 R 139 0 R 148 0 R 151 0 R 160 0 
R 169 0 R 178 0 R 189 0 R 193 0 R 195 0 R 199 0 R 208 0 R 217 0 R 226 0 R 234 0 
R 239 0 R 248 0 R 256 0 R 265 0 R 278 0 R 285 0 R 295 0 R 303 0 R 311 0 R 318 0 
R 326 0 R 332 0 R 338 0 R 345 0 R 353 0 R 363 0 R 372 0 R 384 0 R 393 0 R 401 0 
R 408 0 R 416 0 R 424 0 R 435 0 R 443 0 R 450 0 R 458 0 R 469 0 R 479 0 R 486 0 
R 494 0 R 502 0 R 511 0 R 519 0 R 524 0 R 528 0 R 533 0 R 537 0 R 553 0 R 564 0 
R 568 0 R 582 0 R 588 0 R 593 0 R 595 0 R 597 0 R 600 0 R 602 0 R 604 0 R 612 0 
R 618 0 R 623 0 R 628 0 R 639 0 R 650 0 R 655 0 R 663 0 R 667 0 R 671 0 R 673 0 
R 684 0 R 693 0 R 705 0 R 714 0 R 725 0 R 732 0 R 744 0 R 762 0 R 768 0 R 770 0 
R 772 0 R 781 0 R 793 0 R 803 0 R 811 0 R 817 0 R 820 0 R 824 0 R 828 0 R 831 0 
R 834 0 R 836 0 R 839 0 R 843 0 R 845 0 
 R 850 0 R 854 0 R 860 0 R 864 0 R 867 0 R 874 0 R 876 0 R 880 0 R 889 0 R 891 
0 R 894 0 R 898 0 R 901 0 R 904 0 R 918 0 R 925 0 R 934 0 R 945 0 R 951 0 R 961 
0 R 972 0 R 975 0 R 979 0 R 982 0 R 987 0 R 996 0 R 1004 0 R 1008 0 R 1012 0 R 
1017 0 R 1021 0 R 1023 0 R 1039 0 R 1050 0 R 1055 0 R 1061 0 R 1064 0 R 1073 0 
R 1081 0 R 1085 0 R 1090 0 R 1095 0 R 1097 0 R 1099 0 R 1101 0 R  0 R 1119 
0 R 1123 0 R 1130 0 R 1137 0 R 1145 0 R 1150 0 R 1155 0 R 1160 0 R 1168 0 R 
1172 0 R 1177 0 R 1179 0 R 1187 0 R 1193 0 R 1195 0 R 1202 0 R 1212 0 R 1216 0 
R 1218 0 R 1220 0 R 1224 0 R 1227 0 R 1232 0 R 1235 0 R 1247 0 R 1251 0 R 1257 
0 R 1265 0 R 1270 0 R 1274 0 R 1278 0 R 1280 0 R 1283 0 R 1286 0 R 1289 0 R 
1293 0 R 1297 0 R 1301 0 R 1306 0 R 1310 0 R 1313 0 R 1315 0 R 1326 0 R 1329 0 
R 1337 0 R 1346 0 R 1352 0 R 1356 0 R 1358 0 R 1369 0 R 1372 0 R 1378 0 R 1387 
0 R 1390 0 R 1397 0 R 1405 0 R 1407 0 R 1409 0 R 1418 0 R 1420 0 R 1422 0 R 
1425 0 R 1427 0 R 1429 0 R 1431 0 R 1433 0 R 1436 0 R 1440
  0 R 1445 0 R 1447 0 R 1449 0 R 1451 0 R 1456 0 R 1464 0 R 1469 0 R 1472 0 R 
1474 0 R 1477 0 R 1481 0 R 1485 0 R 1488 0 R 1490 0 R 1492 0 R 1495 0 R 1501 0 
R 1506 0 R 1514 0 R 1528 0 R 1543 0 R 1547 0 R 1551 0 R 1564 0 R 1569 0 R 1584 
0 R 1592 0 R 1596 0 R 1604 0 R 1619 0 R 1633 0 R 1645 0 R 1650 0 R 1657 0 R 
1666 0 R 1672 0 R 1677 0 R 1685 0 R 1688 0 R 1697 0 R 1703 0 R 1706 0 R 1719 0 
R 1721 0 R 1727 0 R 1732 0 R 1734 0 R 1742 0 R 1750 0 R 1754 0 R 1756 0 R 1758 
0 R 1770 0 R 1776 0 R 1784 0 R 1790 0 R 1804 0 R 1809 0 R 1818 0 R 1826 0 R 
1832 0 R 1839 0 R 1844 0 R 1847 0 R 1849 0 R 1855 0 R 1859 0 R 1865 0 R 1869 0 
R 1877 0 R 1883 0 R 1888 0 R 1893 0 R 1895 0 R 1903 0 R 1910 0 R 1916 0 R 1921 
0 R 1925 0 R 1928 0 R 1934 0 R 1939 0 R 1946 0 R 1948 0 R 1950 0 R 1953 0 R 
1961 0 R 1964 0 R 1971 0 R 1980 0 R 1983 0 R 1988 0 R 1990 0 R 1993 0 R 1996 0 
R 2000 0 R 2010 0 R 2015 0 R 2022 0 R 2024 0 R 2031 0 R 2039 0 R 2046 0 R 2052 
0 R 2058 0 R 2060 0 R 2068 0 R 2077 0 R 2088 0 R 2094 0 R 21
 01 0 R 2103 0 R 2108 0 R 2110 0 R 2112 0 R 2115 0 R 2118 0 R 2121 0 R 2126 0 R 
2130 0 R 2141 0 R 2144 0 R 2149 0 R 2152 0 R 2154 0 R 2159 0 R 2169 0 R 2171 0 
R 2173 0 R 2175 0 R 2177 0 R 2180 0 R 2182 0 R 2184 0 R 2187 0 R 2189 0 R 2191 
0 R 2195 0 R 2200 0 R 2209 0 R 2211 0 R 2213 0 R 2220 0 R  0 R 2227 0 R 
2229 0 R 2231 0 R 2238 0 R 2243 0 R 2247 0 R 2251 0 R 2255 0 R 2257 0 R 2259 0 
R 2263 0 R 2266 0 R 2268 0 R 2270 0 R 2274 0 R 2276 0 R 2279 0 R 2281 0 R 2283 
0 R 2285 0 R 2292 0 R 2295 0 R 2300 0 R 2302 0 R 2304 0 R 2306 0 R 2308 0 R 
2316 0 R 2327 0 R 2341 0 R 2352 0 R 2357 0 R 2362 0 R 2366 0 R 2369 0 R 2374 0 
R 2379 0 R 2381 0 R 2385 0 R 2387 0 R 2389 0 R 2391 0 R 2395 0 R 2397 0 R 2410 
0 R 2413 0 R 2421 0 R 2427 0 R 2439 0 R 2453 0 R 2466 0 R 2483 0 R 2487 0 R 
2489 0 R 2493 0 R 2511 0 R 2517 0 R 2529 0 R 2533 0 R 2537 0 R 2546 0 R 2556 0 
R 2561 0 R 2573 0 R 2586 0 R 2605 0 R 2614 0 R 2617 0 R 2626 0 R 2644 0 R 2651 
0 R 2654 0 R 2659 0 R 2663 0 R 

[05/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
index 64b9ab5..30fe780 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/HMasterCommandLine.LocalHMaster.html
@@ -30,308 +30,309 @@
 022import java.io.IOException;
 023import java.util.List;
 024
-025import org.apache.commons.cli.CommandLine;
-026import org.apache.commons.cli.GnuParser;
-027import org.apache.commons.cli.Options;
-028import org.apache.commons.cli.ParseException;
-029import org.apache.hadoop.conf.Configuration;
-030import org.apache.hadoop.hbase.HConstants;
-031import org.apache.hadoop.hbase.LocalHBaseCluster;
-032import org.apache.hadoop.hbase.MasterNotRunningException;
-033import org.apache.hadoop.hbase.ZNodeClearer;
-034import org.apache.hadoop.hbase.ZooKeeperConnectionException;
-035import org.apache.hadoop.hbase.trace.TraceUtil;
-036import org.apache.yetus.audience.InterfaceAudience;
-037import org.apache.hadoop.hbase.client.Admin;
-038import org.apache.hadoop.hbase.client.Connection;
-039import org.apache.hadoop.hbase.client.ConnectionFactory;
-040import org.apache.hadoop.hbase.regionserver.HRegionServer;
-041import org.apache.hadoop.hbase.util.JVMClusterUtil;
-042import org.apache.hadoop.hbase.util.ServerCommandLine;
-043import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
-044import org.apache.hadoop.hbase.zookeeper.ZKUtil;
-045import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
-046import org.apache.zookeeper.KeeperException;
-047import org.slf4j.Logger;
-048import org.slf4j.LoggerFactory;
-049
-050@InterfaceAudience.Private
-051public class HMasterCommandLine extends ServerCommandLine {
-052  private static final Logger LOG = LoggerFactory.getLogger(HMasterCommandLine.class);
-053
-054  private static final String USAGE =
-055    "Usage: Master [opts] start|stop|clear\n" +
-056    " start  Start Master. If local mode, start Master and RegionServer in same JVM\n" +
-057    " stop   Start cluster shutdown; Master signals RegionServer shutdown\n" +
-058    " clear  Delete the master znode in ZooKeeper after a master crashes\n "+
-059    " where [opts] are:\n" +
-060    "   --minRegionServers=<servers>   Minimum RegionServers needed to host user tables.\n" +
-061    "   --localRegionServers=<servers> " +
-062      "RegionServers to start in master process when in standalone mode.\n" +
-063    "   --masters=<servers>            Masters to start in this process.\n" +
-064    "   --backup                       Master should start in backup mode";
-065
-066  private final Class<? extends HMaster> masterClass;
-067
-068  public HMasterCommandLine(Class<? extends HMaster> masterClass) {
-069    this.masterClass = masterClass;
-070  }
-071
-072  @Override
-073  protected String getUsage() {
-074    return USAGE;
-075  }
-076
-077  @Override
-078  public int run(String args[]) throws Exception {
-079    Options opt = new Options();
-080    opt.addOption("localRegionServers", true,
-081      "RegionServers to start in master process when running standalone");
-082    opt.addOption("masters", true, "Masters to start in this process");
-083    opt.addOption("minRegionServers", true, "Minimum RegionServers needed to host user tables");
-084    opt.addOption("backup", false, "Do not try to become HMaster until the primary fails");
-085
-086    CommandLine cmd;
-087    try {
-088      cmd = new GnuParser().parse(opt, args);
-089    } catch (ParseException e) {
-090      LOG.error("Could not parse: ", e);
-091      usage(null);
-092      return 1;
-093    }
-094
+025import org.apache.hadoop.conf.Configuration;
+026import org.apache.hadoop.hbase.HConstants;
+027import org.apache.hadoop.hbase.LocalHBaseCluster;
+028import org.apache.hadoop.hbase.MasterNotRunningException;
+029import org.apache.hadoop.hbase.ZNodeClearer;
+030import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+031import org.apache.hadoop.hbase.trace.TraceUtil;
+032import org.apache.yetus.audience.InterfaceAudience;
+033import org.apache.hadoop.hbase.client.Admin;
+034import org.apache.hadoop.hbase.client.Connection;
+035import org.apache.hadoop.hbase.client.ConnectionFactory;
+036import org.apache.hadoop.hbase.regionserver.HRegionServer;
+037import org.apache.hadoop.hbase.util.JVMClusterUtil;
+038import org.apache.hadoop.hbase.util.ServerCommandLine;
+039import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
+040import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+041import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+042import 
[20/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupManager.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupManager.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupManager.html
index d0d6f47..45e4c6c 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupManager.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupManager.html
@@ -40,499 +40,505 @@
 032import org.apache.hadoop.hbase.HConstants;
 033import org.apache.hadoop.hbase.HTableDescriptor;
 034import org.apache.hadoop.hbase.TableName;
-035import org.apache.hadoop.hbase.backup.BackupInfo;
-036import org.apache.hadoop.hbase.backup.BackupInfo.BackupState;
-037import org.apache.hadoop.hbase.backup.BackupObserver;
-038import org.apache.hadoop.hbase.backup.BackupRestoreConstants;
-039import org.apache.hadoop.hbase.backup.BackupType;
-040import org.apache.hadoop.hbase.backup.HBackupFileSystem;
-041import org.apache.hadoop.hbase.backup.impl.BackupManifest.BackupImage;
-042import org.apache.hadoop.hbase.backup.master.BackupLogCleaner;
-043import org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager;
-044import org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager;
-045import org.apache.hadoop.hbase.client.Admin;
-046import org.apache.hadoop.hbase.client.Connection;
-047import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-048import org.apache.hadoop.hbase.procedure.ProcedureManagerHost;
-049import org.apache.hadoop.hbase.util.Pair;
-050import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-051import org.apache.yetus.audience.InterfaceAudience;
-052import org.slf4j.Logger;
-053import org.slf4j.LoggerFactory;
-055/**
-056 * Handles backup requests, creates 
backup info records in backup system table to keep track of
-057 * backup sessions, dispatches backup 
request.
-058 */
-059@InterfaceAudience.Private
-060public class BackupManager implements 
Closeable {
-061  // in seconds
-062  public final static String 
BACKUP_EXCLUSIVE_OPERATION_TIMEOUT_SECONDS_KEY =
-063  
"hbase.backup.exclusive.op.timeout.seconds";
-064  // In seconds
-065  private final static int 
DEFAULT_BACKUP_EXCLUSIVE_OPERATION_TIMEOUT = 3600;
-066  private static final Logger LOG = 
LoggerFactory.getLogger(BackupManager.class);
-067
-068  protected Configuration conf = null;
-069  protected BackupInfo backupInfo = 
null;
-070  protected BackupSystemTable 
systemTable;
-071  protected final Connection conn;
-072
-073  /**
-074   * Backup manager constructor.
-075   * @param conn connection
-076   * @param conf configuration
-077   * @throws IOException exception
-078   */
-079  public BackupManager(Connection conn, 
Configuration conf) throws IOException {
-080if 
(!conf.getBoolean(BackupRestoreConstants.BACKUP_ENABLE_KEY,
-081  
BackupRestoreConstants.BACKUP_ENABLE_DEFAULT)) {
-082  throw new BackupException("HBase 
backup is not enabled. Check your "
-083  + 
BackupRestoreConstants.BACKUP_ENABLE_KEY + " setting.");
-084}
-085this.conf = conf;
-086this.conn = conn;
-087this.systemTable = new 
BackupSystemTable(conn);
-088  }
-089
-090  /**
-091   * Returns backup info
-092   */
-093  protected BackupInfo getBackupInfo() 
{
-094return backupInfo;
-095  }
-096
-097  /**
-098   * This method modifies the master's 
configuration in order to inject backup-related features
-099   * (TESTs only)
-100   * @param conf configuration
-101   */
-102  @VisibleForTesting
-103  public static void 
decorateMasterConfiguration(Configuration conf) {
-104if (!isBackupEnabled(conf)) {
-105  return;
-106}
-107// Add WAL archive cleaner plug-in
-108String plugins = 
conf.get(HConstants.HBASE_MASTER_LOGCLEANER_PLUGINS);
-109String cleanerClass = 
BackupLogCleaner.class.getCanonicalName();
-110if (!plugins.contains(cleanerClass)) 
{
-111  
conf.set(HConstants.HBASE_MASTER_LOGCLEANER_PLUGINS, plugins + "," + 
cleanerClass);
-112}
-113
-114String classes = 
conf.get(ProcedureManagerHost.MASTER_PROCEDURE_CONF_KEY);
-115String masterProcedureClass = 
LogRollMasterProcedureManager.class.getName();
-116if (classes == null) {
-117  
conf.set(ProcedureManagerHost.MASTER_PROCEDURE_CONF_KEY, 
masterProcedureClass);
-118} else if 
(!classes.contains(masterProcedureClass)) {
-119  
conf.set(ProcedureManagerHost.MASTER_PROCEDURE_CONF_KEY,
-120classes + "," + 
masterProcedureClass);
-121}
-122
-123if (LOG.isDebugEnabled()) {
-124  LOG.debug("Added log cleaner: " + 
cleanerClass + "\n" + "Added master procedure manager: "
-125  + masterProcedureClass);
-126}
-127  }
-128
-129  /**
-130   * This method modifies the Region 
Server configuration in order to 
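decorateMasterConfiguration() above applies one pattern twice: append a class name to a comma-separated configuration value only if it is not already present, handling the unset key separately (as the code does for MASTER_PROCEDURE_CONF_KEY). A self-contained sketch of just that pattern — ConfAppend is a hypothetical helper, not HBase code:

```java
// Sketch of the append-if-absent pattern used when registering the WAL
// cleaner plug-in and the log-roll procedure manager above.
final class ConfAppend {
  static String addIfAbsent(String current, String clazz) {
    if (current == null || current.isEmpty()) {
      return clazz;                     // key was unset: start a new list
    }
    if (current.contains(clazz)) {
      return current;                   // already registered: leave unchanged
    }
    return current + "," + clazz;       // append to the existing list
  }
}
```

Note that, like the `contains()` check in the original, this is a substring match: a class name that happens to be a prefix of another registered class would be treated as already present.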

[02/51] [partial] hbase-site git commit: Published site at 2a2258656b2fcd92b967131b6c1f037363553bc4.

2018-03-27 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/e0fb1fde/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.html
index bcb65f1..a9d5986 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.html
@@ -39,1086 +39,1087 @@
 031import java.util.Scanner;
 032import java.util.Set;
 033import java.util.TreeMap;
-034import org.apache.commons.cli.CommandLine;
-035import org.apache.commons.cli.GnuParser;
-036import org.apache.commons.cli.HelpFormatter;
-037import org.apache.commons.cli.Options;
-038import org.apache.commons.cli.ParseException;
-039import org.apache.commons.lang3.StringUtils;
-040import org.apache.hadoop.conf.Configuration;
-041import org.apache.hadoop.fs.FileSystem;
-042import org.apache.hadoop.hbase.ClusterMetrics.Option;
-043import org.apache.hadoop.hbase.HBaseConfiguration;
-044import org.apache.hadoop.hbase.HConstants;
-045import org.apache.hadoop.hbase.ServerName;
-046import org.apache.hadoop.hbase.TableName;
-047import org.apache.hadoop.hbase.client.Admin;
-048import org.apache.hadoop.hbase.client.ClusterConnection;
-049import org.apache.hadoop.hbase.client.Connection;
-050import org.apache.hadoop.hbase.client.ConnectionFactory;
-051import org.apache.hadoop.hbase.client.RegionInfo;
-052import org.apache.hadoop.hbase.favored.FavoredNodeAssignmentHelper;
-053import org.apache.hadoop.hbase.favored.FavoredNodesPlan;
-054import org.apache.hadoop.hbase.util.FSUtils;
-055import org.apache.hadoop.hbase.util.MunkresAssignment;
-056import org.apache.hadoop.hbase.util.Pair;
-057import org.apache.yetus.audience.InterfaceAudience;
-058import org.slf4j.Logger;
-059import org.slf4j.LoggerFactory;
-060
-061import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-062import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-063import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService.BlockingInterface;
-064import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.UpdateFavoredNodesRequest;
-065import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.UpdateFavoredNodesResponse;
-066
-067/**
-068 * A tool that is used for manipulating and viewing favored nodes information
-069 * for regions. Run with -h to get a list of the options
-070 */
-071@InterfaceAudience.Private
-072// TODO: Remove? Unused. Partially implemented only.
-073public class RegionPlacementMaintainer {
-074  private static final Logger LOG = LoggerFactory.getLogger(RegionPlacementMaintainer.class
-075      .getName());
-076  //The cost of a placement that should never be assigned.
-077  private static final float MAX_COST = Float.POSITIVE_INFINITY;
-078
-079  // The cost of a placement that is undesirable but acceptable.
-080  private static final float AVOID_COST = 10f;
-081
-082  // The amount by which the cost of a placement is increased if it is the
-083  // last slot of the server. This is done to more evenly distribute the slop
-084  // amongst servers.
-085  private static final float LAST_SLOT_COST_PENALTY = 0.5f;
-086
-087  // The amount by which the cost of a primary placement is penalized if it is
-088  // not the host currently serving the region. This is done to minimize moves.
-089  private static final float NOT_CURRENT_HOST_PENALTY = 0.1f;
-090
-091  private static boolean USE_MUNKRES_FOR_PLACING_SECONDARY_AND_TERTIARY = false;
-092
-093  private Configuration conf;
-094  private final boolean enforceLocality;
-095  private final boolean enforceMinAssignmentMove;
-096  private RackManager rackManager;
-097  private Set<TableName> targetTableSet;
-098  private final Connection connection;
-099
-100  public RegionPlacementMaintainer(Configuration conf) {
-101    this(conf, true, true);
-102  }
-103
-104  public RegionPlacementMaintainer(Configuration conf, boolean enforceLocality,
-105      boolean enforceMinAssignmentMove) {
-106    this.conf = conf;
-107    this.enforceLocality = enforceLocality;
-108    this.enforceMinAssignmentMove = enforceMinAssignmentMove;
-109    this.targetTableSet = new HashSet<>();
-110    this.rackManager = new RackManager(conf);
-111    try {
-112      this.connection = ConnectionFactory.createConnection(this.conf);
-113    } catch (IOException e) {
-114      throw new RuntimeException(e);
-115    }
-116  }
-117
-118  private static void printHelp(Options opt) {
-119    new HelpFormatter().printHelp(
-120        "RegionPlacement < -w | -u | -n | -v | -t | -h | -overwrite -r regionName -f favoredNodes " +
-121        "-diff>" +
-122        " [-l false] [-m false] [-d] [-tables 
[50/50] [abbrv] hbase git commit: HBASE-19999 Remove the SYNC_REPLICATION_ENABLED flag

2018-03-27 Thread zhangduo
HBASE-19999 Remove the SYNC_REPLICATION_ENABLED flag


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6cde40d0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6cde40d0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6cde40d0

Branch: refs/heads/HBASE-19064
Commit: 6cde40d037873cc9c6e996b6dea26c80b6a87ff6
Parents: 66c5302
Author: Guanghao Zhang 
Authored: Fri Mar 9 11:30:25 2018 +0800
Committer: zhangduo 
Committed: Tue Mar 27 18:14:09 2018 +0800

--
 .../hbase/replication/ReplicationUtils.java  |  2 --
 .../hadoop/hbase/regionserver/HRegionServer.java | 13 -
 .../hbase/wal/SyncReplicationWALProvider.java| 19 ++-
 .../org/apache/hadoop/hbase/wal/WALFactory.java  | 18 --
 .../hbase/replication/TestSyncReplication.java   |  1 -
 .../master/TestRecoverStandbyProcedure.java  |  2 --
 .../wal/TestSyncReplicationWALProvider.java  |  2 --
 7 files changed, 38 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6cde40d0/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
--
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
index e402d0f..cb22f57 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
@@ -37,8 +37,6 @@ import org.apache.yetus.audience.InterfaceAudience;
 @InterfaceAudience.Private
 public final class ReplicationUtils {
 
-  public static final String SYNC_REPLICATION_ENABLED = 
"hbase.replication.sync.enabled";
-
   public static final String REPLICATION_ATTR_NAME = "__rep__";
 
   public static final String REMOTE_WAL_DIR_NAME = "remoteWALs";

http://git-wip-us.apache.org/repos/asf/hbase/blob/6cde40d0/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 63f0716..6eec768 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -1804,10 +1804,8 @@ public class HRegionServer extends HasThread implements
   private void setupWALAndReplication() throws IOException {
 boolean isMasterNoTableOrSystemTableOnly = this instanceof HMaster &&
   (!LoadBalancer.isTablesOnMaster(conf) || 
LoadBalancer.isSystemTablesOnlyOnMaster(conf));
-if (isMasterNoTableOrSystemTableOnly) {
-  conf.setBoolean(ReplicationUtils.SYNC_REPLICATION_ENABLED, false);
-}
-WALFactory factory = new WALFactory(conf, serverName.toString());
+WALFactory factory =
+new WALFactory(conf, serverName.toString(), 
!isMasterNoTableOrSystemTableOnly);
 if (!isMasterNoTableOrSystemTableOnly) {
   // TODO Replication make assumptions here based on the default 
filesystem impl
   Path oldLogDir = new Path(walRootDir, HConstants.HREGION_OLDLOGDIR_NAME);
@@ -1926,11 +1924,8 @@ public class HRegionServer extends HasThread implements
 }
 this.executorService.startExecutorService(ExecutorType.RS_REFRESH_PEER,
   conf.getInt("hbase.regionserver.executor.refresh.peer.threads", 2));
-
-if (conf.getBoolean(ReplicationUtils.SYNC_REPLICATION_ENABLED, false)) {
-  
this.executorService.startExecutorService(ExecutorType.RS_REPLAY_SYNC_REPLICATION_WAL,
-
conf.getInt("hbase.regionserver.executor.replay.sync.replication.wal.threads", 
2));
-}
+
this.executorService.startExecutorService(ExecutorType.RS_REPLAY_SYNC_REPLICATION_WAL,
+  
conf.getInt("hbase.regionserver.executor.replay.sync.replication.wal.threads", 
1));
 
 Threads.setDaemonThreadRunning(this.walRoller.getThread(), getName() + 
".logRoller",
 uncaughtExceptionHandler);
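The hunk above replaces a global on/off flag read from the shared Configuration with an explicit constructor argument: HRegionServer no longer mutates the conf before building the WALFactory, it states the capability directly. A minimal sketch of that refactoring shape — WalFactorySketch and its names are illustrative, not the real WALFactory API:

```java
// Before HBASE-19999, callers set a boolean key on a shared Configuration and
// the factory read it back; after, the capability is a constructor parameter,
// removing the hidden coupling through mutable global state.
final class WalFactorySketch {
  private final boolean syncReplicationEnabled;

  WalFactorySketch(boolean enableSyncReplication) {
    this.syncReplicationEnabled = enableSyncReplication;
  }

  // Stand-in for the provider selection the real factory performs.
  boolean usesSyncReplicationProvider() {
    return syncReplicationEnabled;
  }
}
```

This mirrors the diff: the master-without-tables case now passes `false` at construction time instead of calling the removed `conf.setBoolean(...)` beforehand.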

http://git-wip-us.apache.org/repos/asf/hbase/blob/6cde40d0/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java
index 282aa21..54287fe 100644
--- 

[35/50] [abbrv] hbase git commit: HBASE-20285 Delete all last pushed sequence ids when removing a peer or removing the serial flag for a peer

2018-03-27 Thread zhangduo
HBASE-20285 Delete all last pushed sequence ids when removing a peer or 
removing the serial flag for a peer


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/056c3395
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/056c3395
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/056c3395

Branch: refs/heads/HBASE-19064
Commit: 056c3395d952f9e6d9c08b734c2a970ce935ec85
Parents: 15c398f
Author: zhangduo 
Authored: Mon Mar 26 22:17:00 2018 +0800
Committer: zhangduo 
Committed: Tue Mar 27 12:20:51 2018 +0800

--
 .../src/main/protobuf/MasterProcedure.proto | 10 +++
 .../replication/ReplicationQueueStorage.java|  5 ++
 .../replication/ZKReplicationQueueStorage.java  | 37 ++-
 .../TestZKReplicationQueueStorage.java  | 31 -
 .../replication/DisablePeerProcedure.java   | 15 +
 .../master/replication/EnablePeerProcedure.java | 15 +
 .../master/replication/RemovePeerProcedure.java | 31 -
 .../replication/ReplicationPeerManager.java |  8 ++-
 .../replication/UpdatePeerConfigProcedure.java  |  3 +
 .../replication/SerialReplicationTestBase.java  | 19 +-
 .../TestAddToSerialReplicationPeer.java | 28 ++---
 .../replication/TestSerialReplication.java  | 66 +---
 12 files changed, 226 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/056c3395/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
--
diff --git a/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto 
b/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
index f710759..b37557c 100644
--- a/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
@@ -421,3 +421,13 @@ message UpdatePeerConfigStateData {
   required ReplicationPeer peer_config = 1;
   optional ReplicationPeer old_peer_config = 2;
 }
+
+message RemovePeerStateData {
+  optional ReplicationPeer peer_config = 1;
+}
+
+message EnablePeerStateData {
+}
+
+message DisablePeerStateData {
+}

http://git-wip-us.apache.org/repos/asf/hbase/blob/056c3395/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.java
--
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.java
index 99a1e97..cd37ac2 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueStorage.java
@@ -87,6 +87,11 @@ public interface ReplicationQueueStorage {
  void setLastSequenceIds(String peerId, Map<String, Long> lastSeqIds) throws ReplicationException;
 
   /**
+   * Remove all the max sequence id records for the given peer.
+   * @param peerId peer id
+   */
+  void removeLastSequenceIds(String peerId) throws ReplicationException;
+  /**
* Get the current position for a specific WAL in a given queue for a given 
regionserver.
* @param serverName the name of the regionserver
* @param queueId a String that identifies the queue
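The new `removeLastSequenceIds` contract above deletes every last-pushed sequence id recorded for one peer. In the ZooKeeper-backed implementation these records live under a shared regions znode, with the peer id appended as a `-<peerId>` suffix, so removal amounts to a suffix scan over the children. A minimal, self-contained sketch of that selection step (the class and node names are illustrative, not HBase API):

```java
import java.util.List;
import java.util.stream.Collectors;

public class LastSeqIdSuffixDemo {

  // Pick out the znode children that record last sequence ids for one peer,
  // mirroring the "-" + peerId suffix convention of ZKReplicationQueueStorage.
  static List<String> nodesToDelete(List<String> regionNodes, String peerId) {
    String suffix = "-" + peerId;
    return regionNodes.stream()
        .filter(n -> n.endsWith(suffix))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> children = List.of("d41d8cd9-1", "e99a18c4-2", "ab56b4d9-1");
    // Region records belonging to peer 1 are selected for deletion.
    System.out.println(nodesToDelete(children, "1"));
  }
}
```

Note that a bare suffix match would also catch peer ids that share a suffix (e.g. `-1` vs `-11`); the sketch ignores that edge case.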

http://git-wip-us.apache.org/repos/asf/hbase/blob/056c3395/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
--
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
index 19986f1..a629da3 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
@@ -103,7 +103,8 @@ class ZKReplicationQueueStorage extends 
ZKReplicationStorageBase
*/
   private final String hfileRefsZNode;
 
-  private final String regionsZNode;
+  @VisibleForTesting
+  final String regionsZNode;
 
   public ZKReplicationQueueStorage(ZKWatcher zookeeper, Configuration conf) {
 super(zookeeper, conf);
@@ -313,6 +314,40 @@ class ZKReplicationQueueStorage extends 
ZKReplicationStorageBase
   }
 
   @Override
+  public void removeLastSequenceIds(String peerId) throws ReplicationException {
+String suffix = "-" + peerId;
+try {
+  StringBuilder sb = new StringBuilder(regionsZNode);
+  int regionsZNodeLength = regionsZNode.length();
+  int 

[19/50] [abbrv] hbase git commit: Add HBaseCon 2018 to front-page

2018-03-27 Thread zhangduo
Add HBaseCon 2018 to front-page


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ce702df4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ce702df4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ce702df4

Branch: refs/heads/HBASE-19064
Commit: ce702df41ba32a466887aee10949bdc963e9c404
Parents: 4c203a9
Author: Josh Elser 
Authored: Sat Mar 24 15:16:06 2018 -0400
Committer: Josh Elser 
Committed: Sat Mar 24 15:16:06 2018 -0400

--
 src/site/xdoc/index.xml | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ce702df4/src/site/xdoc/index.xml
--
diff --git a/src/site/xdoc/index.xml b/src/site/xdoc/index.xml
index 1848d40..6d0b05c 100644
--- a/src/site/xdoc/index.xml
+++ b/src/site/xdoc/index.xml
@@ -83,6 +83,7 @@ Apache HBase is an open-source, distributed, versioned, 
non-relational database
 
 
  
+   June 18th, 2018 <a href="https://hbase.apache.org/hbasecon-2018">HBaseCon 2018</a> @ San Jose Convention Center, San Jose, CA, USA
August 4th, 2017 <a href="https://easychair.org/cfp/HBaseConAsia2017">HBaseCon Asia 2017</a> @ the Huawei Campus in Shenzhen, China
June 12th, 2017 <a href="https://easychair.org/cfp/hbasecon2017">HBaseCon2017</a> at the Crittenden Buildings on the Google Mountain View Campus
April 25th, 2017 <a href="https://www.meetup.com/hbaseusergroup/events/239291716/">Meetup</a> @ Visa in Palo Alto



[43/50] [abbrv] hbase git commit: HBASE-19935 Only allow table replication for sync replication for now

2018-03-27 Thread zhangduo
HBASE-19935 Only allow table replication for sync replication for now


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/29bef5c9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/29bef5c9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/29bef5c9

Branch: refs/heads/HBASE-19064
Commit: 29bef5c9d96cc437209de63257dc36e09f116f99
Parents: e8a85bf
Author: Guanghao Zhang 
Authored: Tue Feb 6 16:00:59 2018 +0800
Committer: zhangduo 
Committed: Tue Mar 27 17:31:49 2018 +0800

--
 .../replication/ReplicationPeerConfig.java  |  9 +++
 .../replication/ReplicationPeerManager.java | 34 -
 .../replication/TestReplicationAdmin.java   | 73 ++--
 .../wal/TestCombinedAsyncWriter.java|  6 ++
 .../wal/TestSyncReplicationWALProvider.java |  6 ++
 5 files changed, 102 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/29bef5c9/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 97abc74..997a155 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -25,6 +25,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.TreeMap;
+
+import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
@@ -220,6 +222,13 @@ public class ReplicationPeerConfig {
 return this.remoteWALDir;
   }
 
+  /**
+   * Use the remote WAL dir to decide whether a peer is a sync replication peer.
+   */
+  public boolean isSyncReplication() {
+return !StringUtils.isBlank(this.remoteWALDir);
+  }
+
   public static ReplicationPeerConfigBuilder newBuilder() {
 return new ReplicationPeerConfigBuilderImpl();
   }
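The `isSyncReplication()` helper added above reduces to a blank check on the configured remote WAL directory. A dependency-free sketch of the same predicate, with a plain null/blank test standing in for commons-lang3's `StringUtils.isBlank`:

```java
public class SyncReplicationCheckDemo {

  // A peer is a sync replication peer iff a remote WAL dir is configured.
  // Equivalent to !StringUtils.isBlank(remoteWALDir) without commons-lang3.
  static boolean isSyncReplication(String remoteWALDir) {
    return remoteWALDir != null && !remoteWALDir.trim().isEmpty();
  }

  public static void main(String[] args) {
    System.out.println(isSyncReplication("hdfs://cluster2/remote-wal")); // true
    System.out.println(isSyncReplication("   "));                        // false
  }
}
```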

http://git-wip-us.apache.org/repos/asf/hbase/blob/29bef5c9/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
index f07a0d8..ff778a8 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
@@ -170,7 +170,7 @@ public class ReplicationPeerManager {
   " does not match new remote wal dir '" + 
peerConfig.getRemoteWALDir() + "'");
 }
 
-if (oldPeerConfig.getRemoteWALDir() != null) {
+if (oldPeerConfig.isSyncReplication()) {
   if (!ReplicationUtils.isNamespacesAndTableCFsEqual(oldPeerConfig, 
peerConfig)) {
 throw new DoNotRetryIOException(
   "Changing the replicated namespace/table config on a synchronous 
replication " +
@@ -199,8 +199,8 @@ public class ReplicationPeerManager {
 }
 ReplicationPeerConfig copiedPeerConfig = 
ReplicationPeerConfig.newBuilder(peerConfig).build();
 SyncReplicationState syncReplicationState =
-StringUtils.isBlank(peerConfig.getRemoteWALDir()) ? 
SyncReplicationState.NONE
-: SyncReplicationState.DOWNGRADE_ACTIVE;
+copiedPeerConfig.isSyncReplication() ? 
SyncReplicationState.DOWNGRADE_ACTIVE
+: SyncReplicationState.NONE;
 peerStorage.addPeer(peerId, copiedPeerConfig, enabled, 
syncReplicationState);
 peers.put(peerId,
   new ReplicationPeerDescription(peerId, enabled, copiedPeerConfig, 
syncReplicationState));
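The hunk above picks the initial sync replication state when a peer is added: `DOWNGRADE_ACTIVE` for sync peers, `NONE` for ordinary async peers. A self-contained sketch of that decision (the enum values are copied from `SyncReplicationState`; the helper method is illustrative):

```java
public class InitialStateDemo {

  enum SyncReplicationState { NONE, ACTIVE, DOWNGRADE_ACTIVE, STANDBY }

  // New sync peers start in DOWNGRADE_ACTIVE; async peers get NONE.
  static SyncReplicationState initialState(String remoteWALDir) {
    boolean sync = remoteWALDir != null && !remoteWALDir.trim().isEmpty();
    return sync ? SyncReplicationState.DOWNGRADE_ACTIVE : SyncReplicationState.NONE;
  }

  public static void main(String[] args) {
    System.out.println(initialState("hdfs://cluster2/remote-wal")); // DOWNGRADE_ACTIVE
    System.out.println(initialState(null));                         // NONE
  }
}
```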
@@ -324,9 +324,37 @@ public class ReplicationPeerManager {
 peerConfig.getTableCFsMap());
 }
 
+if (peerConfig.isSyncReplication()) {
+  checkPeerConfigForSyncReplication(peerConfig);
+}
+
 checkConfiguredWALEntryFilters(peerConfig);
   }
 
+  private void checkPeerConfigForSyncReplication(ReplicationPeerConfig 
peerConfig)
+  throws DoNotRetryIOException {
+// This is used to reduce the difficulty for implementing the sync 
replication state transition
+// as we need to reopen all the related regions.
+// TODO: Add namespace, replicate_all flag back
+if (peerConfig.replicateAllUserTables()) {
+  throw new DoNotRetryIOException(
+   

[44/50] [abbrv] hbase git commit: HBASE-19957 General framework to transit sync replication state

2018-03-27 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/a1f8234b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/SyncReplicationPeerInfoProvider.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/SyncReplicationPeerInfoProvider.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/SyncReplicationPeerInfoProvider.java
new file mode 100644
index 000..92f2c52
--- /dev/null
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/SyncReplicationPeerInfoProvider.java
@@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import java.util.Optional;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.replication.SyncReplicationState;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Get the information for a sync replication peer.
+ */
+@InterfaceAudience.Private
+public interface SyncReplicationPeerInfoProvider {
+
+  /**
+   * Return the peer id and remote WAL directory if the region is 
synchronously replicated and the
+   * state is {@link SyncReplicationState#ACTIVE}.
+   */
+  Optional<Pair<String, String>> getPeerIdAndRemoteWALDir(RegionInfo info);
+
+  /**
+   * Check whether the given region is contained in a sync replication peer 
which is in the given
+   * state.
+   */
+  boolean isInState(RegionInfo info, SyncReplicationState state);
+}
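A minimal, map-backed sketch of how a caller might consume this interface: the toy lookup below stands in for the real `SyncReplicationPeerMappingManager`, returning the covering peer id or empty when the region is not synchronously replicated (all names here are illustrative, not HBase API):

```java
import java.util.Map;
import java.util.Optional;

public class PeerInfoProviderDemo {

  // Toy provider: a fixed region -> peerId mapping instead of the real
  // peer mapping manager; returns empty when the region is uncovered.
  static Optional<String> peerIdFor(Map<String, String> mapping, String encodedRegionName) {
    return Optional.ofNullable(mapping.get(encodedRegionName));
  }

  public static void main(String[] args) {
    Map<String, String> mapping = Map.of("region-a", "peer-1");
    System.out.println(peerIdFor(mapping, "region-a")); // Optional[peer-1]
    System.out.println(peerIdFor(mapping, "region-b")); // Optional.empty
  }
}
```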

http://git-wip-us.apache.org/repos/asf/hbase/blob/a1f8234b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/SyncReplicationPeerInfoProviderImpl.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/SyncReplicationPeerInfoProviderImpl.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/SyncReplicationPeerInfoProviderImpl.java
new file mode 100644
index 000..32159e6
--- /dev/null
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/SyncReplicationPeerInfoProviderImpl.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import java.util.Optional;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.replication.ReplicationPeer;
+import org.apache.hadoop.hbase.replication.ReplicationPeers;
+import org.apache.hadoop.hbase.replication.SyncReplicationState;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+class SyncReplicationPeerInfoProviderImpl implements 
SyncReplicationPeerInfoProvider {
+
+  private final ReplicationPeers replicationPeers;
+
+  private final SyncReplicationPeerMappingManager mapping;
+
+  SyncReplicationPeerInfoProviderImpl(ReplicationPeers replicationPeers,
+  SyncReplicationPeerMappingManager mapping) {
+this.replicationPeers = replicationPeers;
+this.mapping = mapping;
+  }
+
+  @Override
+  public Optional<Pair<String, String>> getPeerIdAndRemoteWALDir(RegionInfo info) {
+String peerId = mapping.getPeerId(info);
+if (peerId == null) {
+  return Optional.empty();
+   

[20/50] [abbrv] hbase git commit: HBASE-20264 add Javas 9 and 10 to the prerequisites table and add a note about using LTS releases.

2018-03-27 Thread zhangduo
HBASE-20264 add Javas 9 and 10 to the prerequisites table and add a note about 
using LTS releases.

* Make the #java anchor point at a section instead of directly at a table
* Add a note to the intro of that section about LTS JDKs
* Add columns for JDK9 and JDK10 that say unsupported and point to HBASE-20264

Signed-off-by: Zach York 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9ea1a7d4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9ea1a7d4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9ea1a7d4

Branch: refs/heads/HBASE-19064
Commit: 9ea1a7d422080fab2e7621ad5629322dc01de1f7
Parents: ce702df
Author: Sean Busbey 
Authored: Fri Mar 23 08:48:28 2018 -0500
Committer: Sean Busbey 
Committed: Sat Mar 24 18:36:51 2018 -0500

--
 src/main/asciidoc/_chapters/configuration.adoc | 20 +++-
 1 file changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9ea1a7d4/src/main/asciidoc/_chapters/configuration.adoc
--
diff --git a/src/main/asciidoc/_chapters/configuration.adoc 
b/src/main/asciidoc/_chapters/configuration.adoc
index 25a52b9..8005b50 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -92,24 +92,42 @@ This section lists required services and some required 
system configuration.
 
 [[java]]
 .Java
-[cols="1,1,4", options="header"]
+
+The following table summarizes the recommendations of the HBase community for 
deploying on various Java versions. An entry of "yes" indicates a base level of 
testing and a willingness to help diagnose and address issues you might run 
into. Similarly, an entry of "no" or "Not Supported" generally means that should 
you run into an issue the community is likely to ask you to change the Java 
environment before proceeding to help. In some cases, specific guidance on 
limitations (e.g. whether compiling / unit tests work, specific operational 
issues, etc) will also be noted.
+
+.Long Term Support JDKs are recommended
+[TIP]
+
+HBase recommends downstream users rely on JDK releases that are marked as Long 
Term Supported (LTS) either from the OpenJDK project or vendors. As of March 
2018 that means Java 8 is the only applicable version and that the next likely 
version to see testing will be Java 11 near Q3 2018.
+
+
+.Java support by release line
+[cols="1,1,1,1,1", options="header"]
 |===
 |HBase Version
 |JDK 7
 |JDK 8
+|JDK 9
+|JDK 10
 
 |2.0
 |link:http://search-hadoop.com/m/YGbbsPxZ723m3as[Not Supported]
 |yes
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
 
 |1.3
 |yes
 |yes
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
 
 
 |1.2
 |yes
 |yes
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
 
 |===
 



[25/50] [abbrv] hbase git commit: HBASE-20095 Redesign single instance pool in CleanerChore

2018-03-27 Thread zhangduo
HBASE-20095 Redesign single instance pool in CleanerChore


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/83fa0ad9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/83fa0ad9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/83fa0ad9

Branch: refs/heads/HBASE-19064
Commit: 83fa0ad9edcf952a574176b314f3b8c131aa2075
Parents: 6a5c14b
Author: Reid Chan 
Authored: Mon Mar 26 11:39:30 2018 +0800
Committer: Mike Drob 
Committed: Mon Mar 26 12:48:31 2018 -0500

--
 .../org/apache/hadoop/hbase/master/HMaster.java |   3 +
 .../hbase/master/cleaner/CleanerChore.java  | 144 +--
 .../TestZooKeeperTableArchiveClient.java|   3 +
 .../hbase/master/cleaner/TestCleanerChore.java  |   6 +
 .../hbase/master/cleaner/TestHFileCleaner.java  |   1 +
 .../master/cleaner/TestHFileLinkCleaner.java|   1 +
 .../hbase/master/cleaner/TestLogsCleaner.java   |   1 +
 7 files changed, 118 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/83fa0ad9/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index 0dc5aa3..f5bd0de 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -107,6 +107,7 @@ import 
org.apache.hadoop.hbase.master.balancer.BalancerChore;
 import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 import org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
 import org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory;
+import org.apache.hadoop.hbase.master.cleaner.CleanerChore;
 import org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
 import org.apache.hadoop.hbase.master.cleaner.LogCleaner;
 import org.apache.hadoop.hbase.master.cleaner.ReplicationBarrierCleaner;
@@ -1146,6 +1147,8 @@ public class HMaster extends HRegionServer implements 
MasterServices {

this.executorService.startExecutorService(ExecutorType.MASTER_TABLE_OPERATIONS, 
1);
startProcedureExecutor();
 
+// Initial cleaner chore
+CleanerChore.initChorePool(conf);
// Start log cleaner thread
int cleanerInterval = conf.getInt("hbase.master.cleaner.interval", 600 * 
1000);
this.logCleaner =

http://git-wip-us.apache.org/repos/asf/hbase/blob/83fa0ad9/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
index 46f6217..312bcce 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
@@ -25,6 +25,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.ForkJoinTask;
 import java.util.concurrent.RecursiveTask;
 import java.util.concurrent.atomic.AtomicBoolean;
 import org.apache.hadoop.conf.Configuration;
@@ -41,6 +42,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
 import org.apache.hbase.thirdparty.com.google.common.base.Predicate;
 import org.apache.hbase.thirdparty.com.google.common.collect.ImmutableSet;
 import org.apache.hbase.thirdparty.com.google.common.collect.Iterables;
@@ -51,7 +53,7 @@ import 
org.apache.hbase.thirdparty.com.google.common.collect.Lists;
 * @param <T> Cleaner delegate class that is dynamically loaded from configuration
  */
 
@edu.umd.cs.findbugs.annotations.SuppressWarnings(value="ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD",
-justification="TODO: Fix. It is wonky have static pool initialized from 
instance")
+justification="Static pool will be only updated once.")
 @InterfaceAudience.Private
public abstract class CleanerChore<T extends FileCleanerDelegate> extends ScheduledChore
    implements ConfigurationObserver {
@@ -68,19 +70,93 @@ public abstract class CleanerChore extends Schedu
   public static final String CHORE_POOL_SIZE = 
"hbase.cleaner.scan.dir.concurrent.size";
   private static final String DEFAULT_CHORE_POOL_SIZE = "0.25";
 
+  private static class DirScanPool {
+int size;
+ForkJoinPool pool;
+int cleanerLatch;
+ 
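The `DirScanPool` introduced above centralizes one shared `ForkJoinPool` for all cleaner chores, created once via `CleanerChore.initChorePool(conf)` in `HMaster` (the findbugs justification in the patch reads "Static pool will be only updated once."). A stripped-down sketch of that initialize-once pattern; the guard logic below is an illustration under that assumption, not the exact HBase code:

```java
import java.util.concurrent.ForkJoinPool;

public class ChorePoolDemo {

  // One shared pool for every cleaner chore; created at most once.
  private static volatile ForkJoinPool pool;

  static synchronized void initChorePool(int size) {
    if (pool == null) {        // static pool is only ever set once
      pool = new ForkJoinPool(size);
    }
  }

  static ForkJoinPool getPool() {
    if (pool == null) {
      throw new IllegalStateException("initChorePool must be called first");
    }
    return pool;
  }

  public static void main(String[] args) {
    initChorePool(2);
    initChorePool(8); // ignored: the first initialization wins
    System.out.println(getPool().getParallelism()); // 2
  }
}
```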

[10/50] [abbrv] hbase git commit: HBASE-20261 Table page (table.jsp) in Master UI does not show replicaIds for hbase meta table

2018-03-27 Thread zhangduo
HBASE-20261 Table page (table.jsp) in Master UI does not show replicaIds for 
hbase meta table

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/88eac3ca
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/88eac3ca
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/88eac3ca

Branch: refs/heads/HBASE-19064
Commit: 88eac3ca18ef47cd7a4f0b037c609df2d7f6617b
Parents: ad47c2d
Author: Toshihiro Suzuki 
Authored: Fri Mar 23 12:37:09 2018 +0900
Committer: Josh Elser 
Committed: Fri Mar 23 15:31:30 2018 -0400

--
 .../src/main/resources/hbase-webapps/master/table.jsp | 7 +++
 1 file changed, 7 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/88eac3ca/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
--
diff --git a/hbase-server/src/main/resources/hbase-webapps/master/table.jsp 
b/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
index e52f33a..a992cc3 100644
--- a/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
+++ b/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
@@ -245,6 +245,13 @@ if ( fqtn != null ) {
 <%= locality%>
 <%= escapeXml(Bytes.toString(meta.getStartKey())) %>
 <%= escapeXml(Bytes.toString(meta.getEndKey())) %>
+<%
+  if (withReplica) {
+%>
+<%= meta.getReplicaId() %>
+<%
+  }
+%>
 
 <%  } %>
 <%} %>



[21/50] [abbrv] hbase git commit: HBASE-17819 Reduce the heap overhead for BucketCache.

2018-03-27 Thread zhangduo
HBASE-17819 Reduce the heap overhead for BucketCache.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3f7222df
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3f7222df
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3f7222df

Branch: refs/heads/HBASE-19064
Commit: 3f7222df3699ef11d0782628c8358ad5d0ce108b
Parents: 9ea1a7d
Author: anoopsamjohn 
Authored: Sun Mar 25 16:36:30 2018 +0530
Committer: anoopsamjohn 
Committed: Sun Mar 25 16:36:30 2018 +0530

--
 .../apache/hadoop/hbase/util/UnsafeAccess.java  |  2 +-
 .../hbase/io/hfile/bucket/BucketCache.java  | 83 +---
 .../io/hfile/bucket/ByteBufferIOEngine.java |  5 ++
 .../hadoop/hbase/io/hfile/bucket/IOEngine.java  |  9 +++
 .../bucket/UnsafeSharedMemoryBucketEntry.java   | 81 +++
 5 files changed, 167 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3f7222df/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
index feaa9e6..486f81b 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
@@ -37,7 +37,7 @@ public final class UnsafeAccess {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(UnsafeAccess.class);
 
-  static final Unsafe theUnsafe;
+  public static final Unsafe theUnsafe;
 
   /** The offset to the first element in a byte array. */
   public static final long BYTE_ARRAY_BASE_OFFSET;

http://git-wip-us.apache.org/repos/asf/hbase/blob/3f7222df/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index e9129d2..673057c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -72,6 +72,7 @@ import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.util.HasThread;
 import org.apache.hadoop.hbase.util.IdReadWriteLock;
 import org.apache.hadoop.hbase.util.IdReadWriteLock.ReferenceType;
+import org.apache.hadoop.hbase.util.UnsafeAvailChecker;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
@@ -517,7 +518,7 @@ public class BucketCache implements BlockCache, HeapSize {
 cacheStats.ioHit(timeTaken);
   }
   if (cachedBlock.getMemoryType() == MemoryType.SHARED) {
-bucketEntry.refCount.incrementAndGet();
+bucketEntry.incrementRefCountAndGet();
   }
   bucketEntry.access(accessCount.incrementAndGet());
   if (this.ioErrorStartTime > 0) {
@@ -610,8 +611,8 @@ public class BucketCache implements BlockCache, HeapSize {
 ReentrantReadWriteLock lock = offsetLock.getLock(bucketEntry.offset());
 try {
   lock.writeLock().lock();
-  int refCount = bucketEntry.refCount.get();
-  if(refCount == 0) {
+  int refCount = bucketEntry.getRefCount();
+  if (refCount == 0) {
 if (backingMap.remove(cacheKey, bucketEntry)) {
   blockEvicted(cacheKey, bucketEntry, removedBlock == null);
 } else {
@@ -630,7 +631,7 @@ public class BucketCache implements BlockCache, HeapSize {
 + " readers. Can not be freed now. Hence will mark this"
 + " for evicting at a later point");
   }
-  bucketEntry.markedForEvict = true;
+  bucketEntry.markForEvict();
 }
   }
 } finally {
@@ -728,7 +729,7 @@ public class BucketCache implements BlockCache, HeapSize {
   // this set is small around O(Handler Count) unless something else is 
wrong
  Set<Integer> inUseBuckets = new HashSet<>();
   for (BucketEntry entry : backingMap.values()) {
-if (entry.refCount.get() != 0) {
+if (entry.getRefCount() != 0) {
   inUseBuckets.add(bucketAllocator.getBucketIndex(entry.offset()));
 }
   }
@@ -1275,9 +1276,6 @@ public class BucketCache implements BlockCache, HeapSize {
 byte deserialiserIndex;
 private volatile long accessCounter;
 private BlockPriority priority;
-// Set this when we were not able to forcefully evict the block
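The eviction hunk above only frees a bucket when no reader holds a reference; when the ref count is non-zero the entry is merely marked, and the actual free happens later once the last reader releases it. A self-contained sketch of that evict-now-or-mark-for-later pattern, using a toy entry rather than the real `BucketEntry`:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountEvictDemo {

  static class Entry {
    final AtomicInteger refCount = new AtomicInteger();
    volatile boolean markedForEvict;
  }

  // Returns true when the entry can be freed immediately; when readers are
  // still active it is only marked, mirroring BucketCache's eviction path.
  static boolean evict(Entry e) {
    if (e.refCount.get() == 0) {
      return true;             // safe to free the backing bucket now
    }
    e.markedForEvict = true;   // free later, once the last reader releases
    return false;
  }

  public static void main(String[] args) {
    Entry busy = new Entry();
    busy.refCount.incrementAndGet();  // a reader still holds shared memory
    System.out.println(evict(busy));  // false: only marked for later
    Entry idle = new Entry();
    System.out.println(evict(idle));  // true: freed immediately
  }
}
```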

[27/50] [abbrv] hbase git commit: HBASE-20224 Web UI is broken in standalone mode - addendum for hbase-backup module

2018-03-27 Thread zhangduo
HBASE-20224 Web UI is broken in standalone mode - addendum for hbase-backup 
module


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b30ff819
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b30ff819
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b30ff819

Branch: refs/heads/HBASE-19064
Commit: b30ff8196a11353b4244227ceed90648558af6fe
Parents: 4428169
Author: tedyu 
Authored: Mon Mar 26 12:23:47 2018 -0700
Committer: tedyu 
Committed: Mon Mar 26 12:23:47 2018 -0700

--
 hbase-backup/src/test/resources/hbase-site.xml | 39 +
 1 file changed, 39 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b30ff819/hbase-backup/src/test/resources/hbase-site.xml
--
diff --git a/hbase-backup/src/test/resources/hbase-site.xml 
b/hbase-backup/src/test/resources/hbase-site.xml
new file mode 100644
index 000..858d428
--- /dev/null
+++ b/hbase-backup/src/test/resources/hbase-site.xml
@@ -0,0 +1,39 @@
+
+
+
+<configuration>
+  <property>
+    <name>hbase.defaults.for.version.skip</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>hbase.hconnection.threads.keepalivetime</name>
+    <value>3</value>
+  </property>
+  <property>
+    <name>hbase.localcluster.assign.random.ports</name>
+    <value>true</value>
+    <description>
+      Assign random ports to master and RS info server (UI).
+    </description>
+  </property>
+</configuration>



[33/50] [abbrv] hbase git commit: HBASE-20223 Update to hbase-thirdparty 2.1.0

2018-03-27 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/15c398f7/hbase-server/src/test/java/org/apache/hadoop/hbase/util/RestartMetaTest.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/RestartMetaTest.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/RestartMetaTest.java
index d78e34a..b3dce20 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/RestartMetaTest.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/RestartMetaTest.java
@@ -18,7 +18,6 @@ package org.apache.hadoop.hbase.util;
 
 import java.io.IOException;
 
-import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
@@ -33,6 +32,7 @@ import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.util.test.LoadTestDataGenerator;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
 
 /**
  * A command-line tool that spins up a local process-based cluster, loads

http://git-wip-us.apache.org/repos/asf/hbase/blob/15c398f7/hbase-spark-it/pom.xml
--
diff --git a/hbase-spark-it/pom.xml b/hbase-spark-it/pom.xml
index 74de0a0..bfd2906 100644
--- a/hbase-spark-it/pom.xml
+++ b/hbase-spark-it/pom.xml
@@ -233,10 +233,6 @@
   slf4j-api
 
 
-  commons-cli
-  commons-cli
-
-
   org.apache.commons
   commons-lang3
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/15c398f7/hbase-spark-it/src/test/java/org/apache/hadoop/hbase/spark/IntegrationTestSparkBulkLoad.java
--
diff --git 
a/hbase-spark-it/src/test/java/org/apache/hadoop/hbase/spark/IntegrationTestSparkBulkLoad.java
 
b/hbase-spark-it/src/test/java/org/apache/hadoop/hbase/spark/IntegrationTestSparkBulkLoad.java
index d13dd17..e5a8ddd 100644
--- 
a/hbase-spark-it/src/test/java/org/apache/hadoop/hbase/spark/IntegrationTestSparkBulkLoad.java
+++ 
b/hbase-spark-it/src/test/java/org/apache/hadoop/hbase/spark/IntegrationTestSparkBulkLoad.java
@@ -28,7 +28,6 @@ import java.util.List;
 import java.util.Map;
 import java.util.Random;
 import java.util.Set;
-import org.apache.commons.cli.CommandLine;
 import org.apache.commons.lang3.RandomStringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
@@ -73,6 +72,7 @@ import org.slf4j.LoggerFactory;
 import scala.Tuple2;
 
 import org.apache.hbase.thirdparty.com.google.common.collect.Sets;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
 
 /**
  * Test Bulk Load and Spark on a distributed cluster.

http://git-wip-us.apache.org/repos/asf/hbase/blob/15c398f7/hbase-thrift/pom.xml
--
diff --git a/hbase-thrift/pom.xml b/hbase-thrift/pom.xml
index f1624df..0142ccd 100644
--- a/hbase-thrift/pom.xml
+++ b/hbase-thrift/pom.xml
@@ -224,18 +224,10 @@
   slf4j-api
 
 
-  commons-cli
-  commons-cli
-
-
   org.apache.commons
   commons-lang3
 
 
-  org.apache.commons
-  commons-collections4
-
-
   org.apache.hbase
   hbase-server
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/15c398f7/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
index b6051d8..14b8e8f 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
@@ -21,11 +21,6 @@ package org.apache.hadoop.hbase.thrift;
 import java.util.Arrays;
 import java.util.List;
 
-import org.apache.commons.cli.CommandLine;
-import org.apache.commons.cli.CommandLineParser;
-import org.apache.commons.cli.HelpFormatter;
-import org.apache.commons.cli.Options;
-import org.apache.commons.cli.PosixParser;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HBaseInterfaceAudience;
@@ -36,6 +31,11 @@ import org.apache.hadoop.util.Shell.ExitCodeException;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLineParser;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.HelpFormatter;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.Options;

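The hunks above mechanically swap `org.apache.commons.cli.*` imports for the relocated `org.apache.hbase.thirdparty.org.apache.commons.cli.*` classes from hbase-thirdparty 2.1.0; only the package prefix changes, not the API, which is why each file needs nothing beyond an import edit. As an illustration of the kind of option parsing those classes provide, here is a minimal stdlib-only sketch (the `MiniCli` class is hypothetical, not HBase or commons-cli code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the relocated commons-cli classes: parses
// "--key value" pairs and bare "--flag" tokens into a map, roughly what
// CommandLine exposes after parsing.
public class MiniCli {
    public static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            if (args[i].startsWith("--")) {
                String key = args[i].substring(2);
                // A following token that is not an option is this option's value.
                if (i + 1 < args.length && !args[i + 1].startsWith("--")) {
                    opts.put(key, args[++i]);
                } else {
                    opts.put(key, "true"); // bare flag
                }
            }
        }
        return opts;
    }
}
```

Because callers only reference class names, relocating ("shading") the package avoids classpath conflicts with a user's own commons-cli version without any behavioral change.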
[08/50] [abbrv] hbase git commit: HBASE-19504 Add TimeRange support into checkAndMutate

2018-03-27 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/ad47c2da/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
index 3272afa..3526689 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
@@ -1738,7 +1738,7 @@ public class TestHRegion {
 
   // checkAndPut with empty value
   boolean res = region.checkAndMutate(row1, fam1, qf1, 
CompareOperator.EQUAL, new BinaryComparator(
-  emptyVal), put, true);
+  emptyVal), put);
   assertTrue(res);
 
   // Putting data in key
@@ -1747,25 +1747,25 @@ public class TestHRegion {
 
   // checkAndPut with correct value
   res = region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BinaryComparator(emptyVal),
-  put, true);
+  put);
   assertTrue(res);
 
   // not empty anymore
   res = region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BinaryComparator(emptyVal),
-  put, true);
+  put);
   assertFalse(res);
 
   Delete delete = new Delete(row1);
   delete.addColumn(fam1, qf1);
   res = region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BinaryComparator(emptyVal),
-  delete, true);
+  delete);
   assertFalse(res);
 
   put = new Put(row1);
   put.addColumn(fam1, qf1, val2);
   // checkAndPut with correct value
   res = region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BinaryComparator(val1),
-  put, true);
+  put);
   assertTrue(res);
 
   // checkAndDelete with correct value
@@ -1773,12 +1773,12 @@ public class TestHRegion {
   delete.addColumn(fam1, qf1);
   delete.addColumn(fam1, qf1);
   res = region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BinaryComparator(val2),
-  delete, true);
+  delete);
   assertTrue(res);
 
   delete = new Delete(row1);
   res = region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BinaryComparator(emptyVal),
-  delete, true);
+  delete);
   assertTrue(res);
 
   // checkAndPut looking for a null value
@@ -1786,7 +1786,7 @@ public class TestHRegion {
   put.addColumn(fam1, qf1, val1);
 
   res = region
-  .checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
NullComparator(), put, true);
+  .checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
NullComparator(), put);
   assertTrue(res);
 } finally {
   HBaseTestingUtility.closeRegionAndWAL(this.region);
@@ -1814,14 +1814,14 @@ public class TestHRegion {
 
   // checkAndPut with wrong value
   boolean res = region.checkAndMutate(row1, fam1, qf1, 
CompareOperator.EQUAL, new BinaryComparator(
-  val2), put, true);
+  val2), put);
   assertEquals(false, res);
 
   // checkAndDelete with wrong value
   Delete delete = new Delete(row1);
   delete.addFamily(fam1);
   res = region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BinaryComparator(val2),
-  put, true);
+  put);
   assertEquals(false, res);
 
   // Putting data in key
@@ -1832,7 +1832,7 @@ public class TestHRegion {
   // checkAndPut with wrong value
   res =
   region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BigDecimalComparator(
-  bd2), put, true);
+  bd2), put);
   assertEquals(false, res);
 
   // checkAndDelete with wrong value
@@ -1840,7 +1840,7 @@ public class TestHRegion {
   delete.addFamily(fam1);
   res =
   region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BigDecimalComparator(
-  bd2), put, true);
+  bd2), put);
   assertEquals(false, res);
 } finally {
   HBaseTestingUtility.closeRegionAndWAL(this.region);
@@ -1866,14 +1866,14 @@ public class TestHRegion {
 
   // checkAndPut with correct value
   boolean res = region.checkAndMutate(row1, fam1, qf1, 
CompareOperator.EQUAL, new BinaryComparator(
-  val1), put, true);
+  val1), put);
   assertEquals(true, res);
 
   // checkAndDelete with correct value
   Delete delete = new Delete(row1);
   delete.addColumn(fam1, qf1);
   res = region.checkAndMutate(row1, fam1, qf1, CompareOperator.EQUAL, new 
BinaryComparator(val1),
-  delete, true);
+  delete);
   assertEquals(true, res);
 
   // Putting data in key
@@ -1884,7 +1884,7 @@ public class TestHRegion {
   // checkAndPut with correct value
   res =
   

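The TestHRegion hunks above drop the trailing boolean from `region.checkAndMutate(...)` after the HBASE-19504 signature change. The semantics the tests exercise — apply the mutation only when the stored cell matches the expected value, with a null/empty comparator matching an absent cell — can be modeled by this toy sketch (an in-memory illustration, not HBase's implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of check-and-mutate semantics: the put is applied only when the
// current value equals the expected one; null models an absent/empty cell.
public class CheckAndMutateSketch {
    private final Map<String, String> store = new HashMap<>();

    public boolean checkAndPut(String row, String expected, String newValue) {
        String current = store.get(row);
        boolean matches =
            (current == null) ? (expected == null) : current.equals(expected);
        if (matches) {
            store.put(row, newValue); // in the real system this is atomic per row
        }
        return matches;
    }

    public String get(String row) {
        return store.get(row);
    }
}
```

This mirrors the test sequence above: the empty-value check passes on an absent cell, fails once data exists, and a check against the correct value succeeds.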
[16/50] [abbrv] hbase git commit: HBASE-20272 TestAsyncTable#testCheckAndMutateWithTimeRange fails due to TableExistsException

2018-03-27 Thread zhangduo
HBASE-20272 TestAsyncTable#testCheckAndMutateWithTimeRange fails due to 
TableExistsException


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b50b2e51
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b50b2e51
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b50b2e51

Branch: refs/heads/HBASE-19064
Commit: b50b2e51bff361e28d3522d2f4bc3aedac9b7d86
Parents: c44e886
Author: tedyu 
Authored: Sat Mar 24 06:27:20 2018 -0700
Committer: tedyu 
Committed: Sat Mar 24 06:27:20 2018 -0700

--
 .../test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java| 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b50b2e51/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java
index 576c0a7..d119f1c 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java
@@ -342,7 +342,6 @@ public class TestAsyncTable {
 
   @Test
   public void testCheckAndMutateWithTimeRange() throws Exception {
-
TEST_UTIL.createTable(TableName.valueOf("testCheckAndMutateWithTimeRange"), 
FAMILY);
 AsyncTable table = getTable.get();
 final long ts = System.currentTimeMillis() / 2;
 Put put = new Put(row);



[36/50] [abbrv] hbase git commit: HBASE-19083 Introduce a new log writer which can write to two HDFSes

2018-03-27 Thread zhangduo
HBASE-19083 Introduce a new log writer which can write to two HDFSes


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/847d3504
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/847d3504
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/847d3504

Branch: refs/heads/HBASE-19064
Commit: 847d3504774603c4fea66d94e14cbdde3909e81e
Parents: 056c339
Author: zhangduo 
Authored: Thu Jan 11 21:08:02 2018 +0800
Committer: zhangduo 
Committed: Tue Mar 27 17:14:07 2018 +0800

--
 .../hbase/regionserver/wal/AsyncFSWAL.java  |  21 +--
 .../regionserver/wal/CombinedAsyncWriter.java   | 134 ++
 .../hbase/regionserver/wal/DualAsyncFSWAL.java  |  67 +
 .../wal/AbstractTestProtobufLog.java| 110 +++
 .../regionserver/wal/ProtobufLogTestHelper.java |  99 ++
 .../regionserver/wal/TestAsyncProtobufLog.java  |  32 +
 .../wal/TestCombinedAsyncWriter.java| 136 +++
 .../hbase/regionserver/wal/TestProtobufLog.java |  14 +-
 .../regionserver/wal/WriterOverAsyncWriter.java |  63 +
 9 files changed, 533 insertions(+), 143 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/847d3504/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
index e34818f..0bee9d6 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java
@@ -607,12 +607,16 @@ public class AsyncFSWAL extends 
AbstractFSWAL {
 }
   }
 
-  @Override
-  protected AsyncWriter createWriterInstance(Path path) throws IOException {
+  protected final AsyncWriter createAsyncWriter(FileSystem fs, Path path) 
throws IOException {
 return AsyncFSWALProvider.createAsyncWriter(conf, fs, path, false, 
eventLoopGroup,
   channelClass);
   }
 
+  @Override
+  protected AsyncWriter createWriterInstance(Path path) throws IOException {
+return createAsyncWriter(fs, path);
+  }
+
   private void waitForSafePoint() {
 consumeLock.lock();
 try {
@@ -632,13 +636,12 @@ public class AsyncFSWAL extends 
AbstractFSWAL {
 }
   }
 
-  private long closeWriter() {
-AsyncWriter oldWriter = this.writer;
-if (oldWriter != null) {
-  long fileLength = oldWriter.getLength();
+  protected final long closeWriter(AsyncWriter writer) {
+if (writer != null) {
+  long fileLength = writer.getLength();
   closeExecutor.execute(() -> {
 try {
-  oldWriter.close();
+  writer.close();
 } catch (IOException e) {
   LOG.warn("close old writer failed", e);
 }
@@ -654,7 +657,7 @@ public class AsyncFSWAL extends AbstractFSWAL {
   throws IOException {
 Preconditions.checkNotNull(nextWriter);
 waitForSafePoint();
-long oldFileLen = closeWriter();
+long oldFileLen = closeWriter(this.writer);
 logRollAndSetupWalProps(oldPath, newPath, oldFileLen);
 this.writer = nextWriter;
 if (nextWriter instanceof AsyncProtobufLogWriter) {
@@ -679,7 +682,7 @@ public class AsyncFSWAL extends AbstractFSWAL {
   @Override
   protected void doShutdown() throws IOException {
 waitForSafePoint();
-closeWriter();
+closeWriter(this.writer);
 closeExecutor.shutdown();
 try {
   if (!closeExecutor.awaitTermination(waitOnShutdownInSeconds, 
TimeUnit.SECONDS)) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/847d3504/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CombinedAsyncWriter.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CombinedAsyncWriter.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CombinedAsyncWriter.java
new file mode 100644
index 000..8ecfede
--- /dev/null
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CombinedAsyncWriter.java
@@ -0,0 +1,134 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * 

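The new CombinedAsyncWriter introduced above fans each WAL operation out to two underlying writers so the log lands on both HDFS clusters. The core idea can be sketched with a tiny fan-out writer (an illustration of the pattern, not the HBase class; the sink types are simplified):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy fan-out writer: every appended entry is pushed to both sinks, so an
// entry only counts as durable once both copies have accepted it.
public class CombinedWriterSketch {
    private final Consumer<String> local;
    private final Consumer<String> remote;

    public CombinedWriterSketch(Consumer<String> local, Consumer<String> remote) {
        this.local = local;
        this.remote = remote;
    }

    public void append(String entry) {
        // If either sink throws, the caller sees the failure and can roll the log.
        local.accept(entry);
        remote.accept(entry);
    }
}
```

The real class additionally has to deal with asynchronous completion of both writes, which is what the safe-point and `closeWriter(AsyncWriter)` refactoring in AsyncFSWAL above supports.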
[46/50] [abbrv] hbase git commit: HBASE-19990 Create remote wal directory when transitting to state S

2018-03-27 Thread zhangduo
HBASE-19990 Create remote wal directory when transitting to state S


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/bf670c5c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/bf670c5c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/bf670c5c

Branch: refs/heads/HBASE-19064
Commit: bf670c5c1bf4d1c8dff58e96f506e7d591575dd6
Parents: b12c487
Author: zhangduo 
Authored: Wed Feb 14 16:01:16 2018 +0800
Committer: zhangduo 
Committed: Tue Mar 27 17:39:58 2018 +0800

--
 .../procedure2/ProcedureYieldException.java |  9 --
 .../hbase/replication/ReplicationUtils.java |  2 ++
 .../hadoop/hbase/master/MasterFileSystem.java   | 19 ++---
 .../master/procedure/MasterProcedureEnv.java|  5 
 ...ransitPeerSyncReplicationStateProcedure.java | 29 
 .../hbase/replication/TestSyncReplication.java  |  8 ++
 6 files changed, 55 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/bf670c5c/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureYieldException.java
--
diff --git 
a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureYieldException.java
 
b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureYieldException.java
index 0487ac5b..dbb9981 100644
--- 
a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureYieldException.java
+++ 
b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureYieldException.java
@@ -15,16 +15,21 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.hadoop.hbase.procedure2;
 
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 
-// TODO: Not used yet
+/**
+ * Indicates that a procedure wants to be rescheduled. Usually because there
is something wrong but
+ * we do not want to fail the procedure.
+ * 
+ * TODO: need to support scheduling after a delay.
+ */
 @InterfaceAudience.Private
 @InterfaceStability.Stable
 public class ProcedureYieldException extends ProcedureException {
+
   /** default constructor */
   public ProcedureYieldException() {
 super();

http://git-wip-us.apache.org/repos/asf/hbase/blob/bf670c5c/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
--
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
index d94cb00..e402d0f 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
@@ -41,6 +41,8 @@ public final class ReplicationUtils {
 
   public static final String REPLICATION_ATTR_NAME = "__rep__";
 
+  public static final String REMOTE_WAL_DIR_NAME = "remoteWALs";
+
   private ReplicationUtils() {
   }
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/bf670c5c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
index 864be02..7ccbd71 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.hbase.log.HBaseMarkers;
 import org.apache.hadoop.hbase.mob.MobConstants;
 import org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore;
 import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.replication.ReplicationUtils;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.FSTableDescriptors;
 import org.apache.hadoop.hbase.util.FSUtils;
@@ -133,7 +134,6 @@ public class MasterFileSystem {
* Idempotent.
*/
   private void createInitialFileSystemLayout() throws IOException {
-
 final String[] protectedSubDirs = new String[] {
 HConstants.BASE_NAMESPACE_DIR,
 HConstants.HFILE_ARCHIVE_DIRECTORY,
@@ -145,7 +145,8 @@ public class MasterFileSystem {
   HConstants.HREGION_LOGDIR_NAME,
   HConstants.HREGION_OLDLOGDIR_NAME,
   HConstants.CORRUPT_DIR_NAME,
-  WALProcedureStore.MASTER_PROCEDURE_LOGDIR
+  

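HBASE-19990 adds the `remoteWALs` directory (the new `REMOTE_WAL_DIR_NAME` constant above) to the directories MasterFileSystem creates during its idempotent initial layout. The "create it if missing, safe to repeat" step looks roughly like this sketch, with `java.nio` standing in for the Hadoop FileSystem API:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the idempotent layout step for the remote WAL directory.
public class RemoteWalDirSketch {
    public static final String REMOTE_WAL_DIR_NAME = "remoteWALs";

    public static Path ensureRemoteWalDir(Path rootDir) {
        try {
            // createDirectories is a no-op when the directory already exists,
            // which is what keeps repeated layout checks safe.
            return Files.createDirectories(rootDir.resolve(REMOTE_WAL_DIR_NAME));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Idempotence matters here because the layout check runs on every master start, not just the first.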
[45/50] [abbrv] hbase git commit: HBASE-19957 General framework to transit sync replication state

2018-03-27 Thread zhangduo
HBASE-19957 General framework to transit sync replication state


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a1f8234b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a1f8234b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a1f8234b

Branch: refs/heads/HBASE-19064
Commit: a1f8234b9f3958c9642d272f015e224e79c450f6
Parents: 29bef5c
Author: zhangduo 
Authored: Fri Feb 9 18:33:28 2018 +0800
Committer: zhangduo 
Committed: Tue Mar 27 17:39:58 2018 +0800

--
 .../replication/ReplicationPeerConfig.java  |   2 -
 .../replication/ReplicationPeerDescription.java |   5 +-
 .../hbase/replication/SyncReplicationState.java |  19 +-
 .../org/apache/hadoop/hbase/HConstants.java |   3 +
 .../src/main/protobuf/MasterProcedure.proto |  20 +-
 .../hbase/replication/ReplicationPeerImpl.java  |  45 -
 .../replication/ReplicationPeerStorage.java |  25 ++-
 .../hbase/replication/ReplicationPeers.java |  27 ++-
 .../replication/ZKReplicationPeerStorage.java   |  65 +--
 .../hbase/coprocessor/MasterObserver.java   |   7 +-
 .../org/apache/hadoop/hbase/master/HMaster.java |   4 +-
 .../hbase/master/MasterCoprocessorHost.java |  12 +-
 .../replication/AbstractPeerProcedure.java  |  14 +-
 .../master/replication/ModifyPeerProcedure.java |  11 --
 .../replication/RefreshPeerProcedure.java   |  18 +-
 .../replication/ReplicationPeerManager.java |  89 +
 ...ransitPeerSyncReplicationStateProcedure.java | 181 ---
 .../hbase/regionserver/HRegionServer.java   |  35 ++--
 .../regionserver/ReplicationSourceService.java  |  11 +-
 .../regionserver/PeerActionListener.java|   4 +-
 .../regionserver/PeerProcedureHandler.java  |  16 +-
 .../regionserver/PeerProcedureHandlerImpl.java  |  52 +-
 .../regionserver/RefreshPeerCallable.java   |   7 +
 .../replication/regionserver/Replication.java   |  22 ++-
 .../regionserver/ReplicationSourceManager.java  |  41 +++--
 .../SyncReplicationPeerInfoProvider.java|  43 +
 .../SyncReplicationPeerInfoProviderImpl.java|  71 
 .../SyncReplicationPeerMappingManager.java  |  48 +
 .../SyncReplicationPeerProvider.java|  35 
 .../hbase/wal/SyncReplicationWALProvider.java   |  35 ++--
 .../org/apache/hadoop/hbase/wal/WALFactory.java |  47 ++---
 .../replication/TestReplicationAdmin.java   |   3 +-
 .../TestReplicationSourceManager.java   |   5 +-
 .../wal/TestSyncReplicationWALProvider.java |  36 ++--
 34 files changed, 745 insertions(+), 313 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a1f8234b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 997a155..cc7b4bc 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -15,7 +15,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.hadoop.hbase.replication;
 
 import java.util.Collection;
@@ -25,7 +24,6 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.TreeMap;
-
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;

http://git-wip-us.apache.org/repos/asf/hbase/blob/a1f8234b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerDescription.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerDescription.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerDescription.java
index 2d077c5..b0c27bb 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerDescription.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerDescription.java
@@ -20,7 +20,10 @@ package org.apache.hadoop.hbase.replication;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
- * The POJO equivalent of ReplicationProtos.ReplicationPeerDescription
+ * The POJO equivalent of ReplicationProtos.ReplicationPeerDescription.
+ * 
+ * For developers: here we do not store the new sync replication state since it
is just an
+ * intermediate state and this class is 

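The framework above moves a peer between sync replication states via TransitPeerSyncReplicationStateProcedure. The state names below match HBase's `SyncReplicationState`; the transition graph is an illustrative assumption for this sketch, not the authoritative set enforced by the procedure:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Sketch of a transition guard for sync replication states. The ALLOWED map
// is an assumed graph (active <-> downgrade-active, standby <-> downgrade-
// active); consult the procedure for the real rules.
public class SyncStateSketch {
    public enum State { NONE, ACTIVE, DOWNGRADE_ACTIVE, STANDBY }

    private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.ACTIVE, EnumSet.of(State.DOWNGRADE_ACTIVE));
        ALLOWED.put(State.DOWNGRADE_ACTIVE, EnumSet.of(State.ACTIVE, State.STANDBY));
        ALLOWED.put(State.STANDBY, EnumSet.of(State.DOWNGRADE_ACTIVE));
        ALLOWED.put(State.NONE, EnumSet.noneOf(State.class));
    }

    public static boolean canTransit(State from, State to) {
        return ALLOWED.getOrDefault(from, EnumSet.noneOf(State.class)).contains(to);
    }
}
```

A guard like this is why the RefreshPeer machinery above carries the target state down to every region server before the transition commits.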
[48/50] [abbrv] hbase git commit: HBASE-19943 Only allow removing sync replication peer which is in DA state

2018-03-27 Thread zhangduo
HBASE-19943 Only allow removing sync replication peer which is in DA state


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8cc1d7aa
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8cc1d7aa
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8cc1d7aa

Branch: refs/heads/HBASE-19064
Commit: 8cc1d7aa7e33cf739e854113fbc3f58e3811a83b
Parents: bf670c5
Author: huzheng 
Authored: Thu Mar 1 18:34:02 2018 +0800
Committer: zhangduo 
Committed: Tue Mar 27 17:49:33 2018 +0800

--
 .../replication/ReplicationPeerManager.java | 14 -
 .../hbase/wal/SyncReplicationWALProvider.java   |  2 +-
 .../replication/TestReplicationAdmin.java   | 63 
 .../hbase/replication/TestSyncReplication.java  |  2 +-
 4 files changed, 78 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8cc1d7aa/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
index 0dc922d..41dd6e3 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
@@ -120,8 +120,20 @@ public class ReplicationPeerManager {
 return desc;
   }
 
+  private void checkPeerInDAStateIfSyncReplication(String peerId) throws 
DoNotRetryIOException {
+ReplicationPeerDescription desc = peers.get(peerId);
+if (desc != null && desc.getPeerConfig().isSyncReplication()
+&& 
!SyncReplicationState.DOWNGRADE_ACTIVE.equals(desc.getSyncReplicationState())) {
+  throw new DoNotRetryIOException("Couldn't remove synchronous replication
peer with state="
+  + desc.getSyncReplicationState()
+  + ". Transit the synchronous replication state to
DOWNGRADE_ACTIVE first.");
+}
+  }
+
   ReplicationPeerConfig preRemovePeer(String peerId) throws 
DoNotRetryIOException {
-return checkPeerExists(peerId).getPeerConfig();
+ReplicationPeerDescription pd = checkPeerExists(peerId);
+checkPeerInDAStateIfSyncReplication(peerId);
+return pd.getPeerConfig();
   }
 
   void preEnablePeer(String peerId) throws DoNotRetryIOException {

http://git-wip-us.apache.org/repos/asf/hbase/blob/8cc1d7aa/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java
index ac4b4cd..282aa21 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java
@@ -142,7 +142,7 @@ public class SyncReplicationWALProvider implements 
WALProvider, PeerActionListen
   @Override
   public WAL getWAL(RegionInfo region) throws IOException {
 if (region == null) {
-  return provider.getWAL(region);
+  return provider.getWAL(null);
 }
 Optional> peerIdAndRemoteWALDir =
   peerInfoProvider.getPeerIdAndRemoteWALDir(region);

http://git-wip-us.apache.org/repos/asf/hbase/blob/8cc1d7aa/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
index 0ad476f..486ab51 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
@@ -254,6 +254,62 @@ public class TestReplicationAdmin {
   }
 
   @Test
+  public void testRemovePeerWithNonDAState() throws Exception {
+TableName tableName = TableName.valueOf(name.getMethodName());
+TEST_UTIL.createTable(tableName, Bytes.toBytes("family"));
+ReplicationPeerConfigBuilder builder = ReplicationPeerConfig.newBuilder();
+
+String rootDir = "hdfs://srv1:/hbase";
+builder.setClusterKey(KEY_ONE);
+builder.setRemoteWALDir(rootDir);
+builder.setReplicateAllUserTables(false);
+Map tableCfs = new 

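The `checkPeerInDAStateIfSyncReplication` guard added above rejects peer removal unless a sync replication peer is in DOWNGRADE_ACTIVE. A minimal model of that precondition (names simplified from the diff; not the HBase class):

```java
// Minimal model of the remove-peer guard: a sync replication peer may only
// be removed while in DOWNGRADE_ACTIVE; ordinary peers are unaffected.
public class RemovePeerGuardSketch {
    public enum SyncState { NONE, ACTIVE, DOWNGRADE_ACTIVE, STANDBY }

    public static void preRemovePeer(boolean syncReplication, SyncState state) {
        if (syncReplication && state != SyncState.DOWNGRADE_ACTIVE) {
            throw new IllegalStateException(
                "Cannot remove synchronous replication peer with state=" + state
                    + "; transit it to DOWNGRADE_ACTIVE first");
        }
    }
}
```

This matches the new `testRemovePeerWithNonDAState` test above, which expects removal of a freshly created sync peer to fail until the state is downgraded.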
[01/50] [abbrv] hbase git commit: Revert "HBASE-20224 Web UI is broken in standalone mode" [Forced Update!]

2018-03-27 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/HBASE-19064 4bfc3e0c8 -> 6cde40d03 (forced update)


Revert "HBASE-20224 Web UI is broken in standalone mode"

Broke shell tests.

This reverts commit dd9fe813ecc605f5e8b3c8598824f4e9a0a1eed6.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5d1b2110
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5d1b2110
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5d1b2110

Branch: refs/heads/HBASE-19064
Commit: 5d1b2110d1bac600d81d8bf04d3a97e5a9bd1268
Parents: 4cb40e6
Author: Michael Stack 
Authored: Thu Mar 22 10:47:47 2018 -0700
Committer: Michael Stack 
Committed: Thu Mar 22 10:57:42 2018 -0700

--
 hbase-client/src/test/resources/hbase-site.xml| 7 ---
 hbase-mapreduce/src/test/resources/hbase-site.xml | 7 ---
 hbase-procedure/src/test/resources/hbase-site.xml | 7 ---
 hbase-rest/src/test/resources/hbase-site.xml  | 7 ---
 .../main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java  | 2 +-
 .../test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java   | 2 +-
 hbase-server/src/test/resources/hbase-site.xml| 7 ---
 hbase-thrift/src/test/resources/hbase-site.xml| 7 ---
 8 files changed, 2 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5d1b2110/hbase-client/src/test/resources/hbase-site.xml
--
diff --git a/hbase-client/src/test/resources/hbase-site.xml 
b/hbase-client/src/test/resources/hbase-site.xml
index 858d428..99d2ab8 100644
--- a/hbase-client/src/test/resources/hbase-site.xml
+++ b/hbase-client/src/test/resources/hbase-site.xml
@@ -29,11 +29,4 @@
 hbase.hconnection.threads.keepalivetime
 3
   
-  
-hbase.localcluster.assign.random.ports
-true
-
-  Assign random ports to master and RS info server (UI).
-
-  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/5d1b2110/hbase-mapreduce/src/test/resources/hbase-site.xml
--
diff --git a/hbase-mapreduce/src/test/resources/hbase-site.xml 
b/hbase-mapreduce/src/test/resources/hbase-site.xml
index 34802d0..64a1964 100644
--- a/hbase-mapreduce/src/test/resources/hbase-site.xml
+++ b/hbase-mapreduce/src/test/resources/hbase-site.xml
@@ -158,11 +158,4 @@
 hbase.hconnection.threads.keepalivetime
 3
   
-  
-hbase.localcluster.assign.random.ports
-true
-
-  Assign random ports to master and RS info server (UI).
-
-  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/5d1b2110/hbase-procedure/src/test/resources/hbase-site.xml
--
diff --git a/hbase-procedure/src/test/resources/hbase-site.xml 
b/hbase-procedure/src/test/resources/hbase-site.xml
index a1cc27e..114ee8a 100644
--- a/hbase-procedure/src/test/resources/hbase-site.xml
+++ b/hbase-procedure/src/test/resources/hbase-site.xml
@@ -41,11 +41,4 @@
   WARNING: Doing so may expose you to additional risk of data loss!
 
   
-  
-hbase.localcluster.assign.random.ports
-true
-
-  Assign random ports to master and RS info server (UI).
-
-  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/5d1b2110/hbase-rest/src/test/resources/hbase-site.xml
--
diff --git a/hbase-rest/src/test/resources/hbase-site.xml 
b/hbase-rest/src/test/resources/hbase-site.xml
index be7b492..2bd3ee4 100644
--- a/hbase-rest/src/test/resources/hbase-site.xml
+++ b/hbase-rest/src/test/resources/hbase-site.xml
@@ -139,11 +139,4 @@
 Skip sanity checks in tests
 
   
-  
-hbase.localcluster.assign.random.ports
-true
-
-  Assign random ports to master and RS info server (UI).
-
-  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/5d1b2110/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
index 5c8ddd9..e19e53b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
@@ -141,7 +141,7 @@ public class LocalHBaseCluster {
 
 // Always have masters and regionservers come up on port '0' so we don't
 // clash over default ports.
-if (conf.getBoolean(ASSIGN_RANDOM_PORTS, false)) {
+if (conf.getBoolean(ASSIGN_RANDOM_PORTS, true)) {
   

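The reverted change flips `LocalHBaseCluster` back to defaulting `ASSIGN_RANDOM_PORTS` to true, i.e. masters and region servers come up on port '0' so the OS assigns a free ephemeral port. The trick behind that flag can be sketched as:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch of the "bind to port 0" trick behind ASSIGN_RANDOM_PORTS: asking
// the OS for port 0 yields a free ephemeral port, so parallel test clusters
// don't clash over the default master/RS info-server ports.
public class RandomPortSketch {
    public static int pickFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort(); // the concrete port the OS assigned
        } catch (IOException e) {
            throw new RuntimeException("no free port available", e);
        }
    }
}
```

The trade-off, visible in this revert, is that tests which hard-code UI ports (such as shell tests) break when the ports become random, which is why per-module `hbase-site.xml` overrides were being removed and re-added.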
[39/50] [abbrv] hbase git commit: HBASE-19781 Add a new cluster state flag for synchronous replication

2018-03-27 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/7adf2719/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckReplication.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckReplication.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckReplication.java
index 8911982..f5eca39 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckReplication.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckReplication.java
@@ -28,6 +28,7 @@ import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
 import org.apache.hadoop.hbase.replication.ReplicationPeerStorage;
 import org.apache.hadoop.hbase.replication.ReplicationQueueStorage;
 import org.apache.hadoop.hbase.replication.ReplicationStorageFactory;
+import org.apache.hadoop.hbase.replication.SyncReplicationState;
 import org.apache.hadoop.hbase.testclassification.MediumTests;
 import org.apache.hadoop.hbase.testclassification.ReplicationTests;
 import org.apache.hadoop.hbase.util.HBaseFsck.ErrorReporter.ERROR_CODE;
@@ -67,9 +68,9 @@ public class TestHBaseFsckReplication {
 String peerId1 = "1";
 String peerId2 = "2";
 peerStorage.addPeer(peerId1, ReplicationPeerConfig.newBuilder().setClusterKey("key").build(),
-  true);
+  true, SyncReplicationState.NONE);
 peerStorage.addPeer(peerId2, ReplicationPeerConfig.newBuilder().setClusterKey("key").build(),
-  true);
+  true, SyncReplicationState.NONE);
 for (int i = 0; i < 10; i++) {
   queueStorage.addWAL(ServerName.valueOf("localhost", 1 + i, 10 + i), peerId1,
 "file-" + i);

http://git-wip-us.apache.org/repos/asf/hbase/blob/7adf2719/hbase-shell/src/main/ruby/hbase/replication_admin.rb
--
diff --git a/hbase-shell/src/main/ruby/hbase/replication_admin.rb 
b/hbase-shell/src/main/ruby/hbase/replication_admin.rb
index d1f1344..5f86365 100644
--- a/hbase-shell/src/main/ruby/hbase/replication_admin.rb
+++ b/hbase-shell/src/main/ruby/hbase/replication_admin.rb
@@ -20,6 +20,7 @@
 include Java
 
 java_import 
org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil
+java_import org.apache.hadoop.hbase.replication.SyncReplicationState
 java_import org.apache.hadoop.hbase.replication.ReplicationPeerConfig
 java_import org.apache.hadoop.hbase.util.Bytes
 java_import org.apache.hadoop.hbase.zookeeper.ZKConfig
@@ -338,6 +339,20 @@ module Hbase
   '!' + ReplicationPeerConfigUtil.convertToString(tableCFs)
 end
 
+# Transit current cluster to a new state in the specified synchronous
+# replication peer
+def transit_peer_sync_replication_state(id, state)
+  if 'ACTIVE'.eql?(state)
+    @admin.transitReplicationPeerSyncReplicationState(id, SyncReplicationState::ACTIVE)
+  elsif 'DOWNGRADE_ACTIVE'.eql?(state)
+    @admin.transitReplicationPeerSyncReplicationState(id, SyncReplicationState::DOWNGRADE_ACTIVE)
+  elsif 'STANDBY'.eql?(state)
+    @admin.transitReplicationPeerSyncReplicationState(id, SyncReplicationState::STANDBY)
+  else
+    raise(ArgumentError, 'synchronous replication state must be ACTIVE, DOWNGRADE_ACTIVE or STANDBY')
+  end
+end
+
 
#--
 # Enables a table's replication switch
 def enable_tablerep(table_name)

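The Ruby dispatch added above accepts exactly three state names and rejects anything else. A minimal shell sketch of the same validation logic (the state names come from this diff; the echoed messages are illustrative only, not HBase output):

```shell
# Validate a requested sync replication state the way the new shell
# command does: only ACTIVE, DOWNGRADE_ACTIVE and STANDBY are accepted.
validate_state() {
  case "$1" in
    ACTIVE|DOWNGRADE_ACTIVE|STANDBY) echo "transit peer to $1" ;;
    *) echo "invalid state: $1"; return 1 ;;
  esac
}

validate_state DOWNGRADE_ACTIVE   # prints "transit peer to DOWNGRADE_ACTIVE"
validate_state BOGUS || echo "rejected"
```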
http://git-wip-us.apache.org/repos/asf/hbase/blob/7adf2719/hbase-shell/src/main/ruby/shell.rb
--
diff --git a/hbase-shell/src/main/ruby/shell.rb 
b/hbase-shell/src/main/ruby/shell.rb
index 5e563df..fdc9c7c 100644
--- a/hbase-shell/src/main/ruby/shell.rb
+++ b/hbase-shell/src/main/ruby/shell.rb
@@ -396,6 +396,7 @@ Shell.load_command_group(
 get_peer_config
 list_peer_configs
 update_peer_config
+transit_peer_sync_replication_state
   ]
 )
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/7adf2719/hbase-shell/src/main/ruby/shell/commands/list_peers.rb
--
diff --git a/hbase-shell/src/main/ruby/shell/commands/list_peers.rb 
b/hbase-shell/src/main/ruby/shell/commands/list_peers.rb
index f3ab749..f2ec014 100644
--- a/hbase-shell/src/main/ruby/shell/commands/list_peers.rb
+++ b/hbase-shell/src/main/ruby/shell/commands/list_peers.rb
@@ -39,8 +39,8 @@ EOF
 peers = replication_admin.list_peers
 
 formatter.header(%w[PEER_ID CLUSTER_KEY ENDPOINT_CLASSNAME
-REMOTE_ROOT_DIR STATE REPLICATE_ALL 
-NAMESPACES TABLE_CFS BANDWIDTH
+REMOTE_ROOT_DIR SYNC_REPLICATION_STATE STATE
+REPLICATE_ALL NAMESPACES TABLE_CFS BANDWIDTH

[28/50] [abbrv] hbase git commit: HBASE-20130 Use defaults (16020 & 16030) as base ports when the RS is bound to localhost

2018-03-27 Thread zhangduo
HBASE-20130 Use defaults (16020 & 16030) as base ports when the RS is bound to 
localhost

Base ports are changed to defaults 16020 & 16030 when RS binds to localhost. 
This is mostly used in pseudo distributed mode.
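The port arithmetic after this change can be sketched as follows; the base-port values are the ones this commit sets in `bin/local-regionservers.sh`, while the loop itself is only illustrative:

```shell
# Each offset DN passed to local-regionservers.sh yields one RegionServer
# whose ports are derived from the default base ports.
HBASE_RS_BASE_PORT=16020
HBASE_RS_INFO_BASE_PORT=16030

for DN in 1 2 3; do
  rs_port=$(expr $HBASE_RS_BASE_PORT + $DN)
  info_port=$(expr $HBASE_RS_INFO_BASE_PORT + $DN)
  echo "offset $DN -> hbase.regionserver.port=$rs_port hbase.regionserver.info.port=$info_port"
done
# offset 1 -> hbase.regionserver.port=16021 hbase.regionserver.info.port=16031
# offset 2 -> hbase.regionserver.port=16022 hbase.regionserver.info.port=16032
```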


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a73f4d84
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a73f4d84
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a73f4d84

Branch: refs/heads/HBASE-19064
Commit: a73f4d84a0428eaf2e6253e2840b580b27c5968c
Parents: b30ff81
Author: Umesh Agashe 
Authored: Mon Mar 26 12:10:27 2018 -0700
Committer: Michael Stack 
Committed: Mon Mar 26 14:11:20 2018 -0700

--
 bin/local-regionservers.sh   | 10 +++---
 bin/regionservers.sh |  4 ++--
 src/main/asciidoc/_chapters/getting_started.adoc |  8 
 3 files changed, 13 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a73f4d84/bin/local-regionservers.sh
--
diff --git a/bin/local-regionservers.sh b/bin/local-regionservers.sh
index 40ba93e..79dc5d0 100755
--- a/bin/local-regionservers.sh
+++ b/bin/local-regionservers.sh
@@ -18,7 +18,11 @@
 # */
 # This is used for starting multiple regionservers on the same machine.
 # run it from hbase-dir/ just like 'bin/hbase'
-# Supports up to 100 regionservers (limitation = overlapping ports)
+# Supports up to 10 regionservers (limitation = overlapping ports)
+# For supporting more instances select different values (e.g. 16200, 16300)
+# for HBASE_RS_BASE_PORT and HBASE_RS_INFO_BASE_PORT below
+HBASE_RS_BASE_PORT=16020
+HBASE_RS_INFO_BASE_PORT=16030
 
 bin=`dirname "${BASH_SOURCE-$0}"`
 bin=`cd "$bin" >/dev/null && pwd`
@@ -44,8 +48,8 @@ run_regionserver () {
   DN=$2
   export HBASE_IDENT_STRING="$USER-$DN"
   HBASE_REGIONSERVER_ARGS="\
--Dhbase.regionserver.port=`expr 16200 + $DN` \
--Dhbase.regionserver.info.port=`expr 16300 + $DN`"
+-Dhbase.regionserver.port=`expr $HBASE_RS_BASE_PORT + $DN` \
+-Dhbase.regionserver.info.port=`expr $HBASE_RS_INFO_BASE_PORT + $DN`"
 
   "$bin"/hbase-daemon.sh  --config "${HBASE_CONF_DIR}" \
 --autostart-window-size "${AUTOSTART_WINDOW_SIZE}" \

http://git-wip-us.apache.org/repos/asf/hbase/blob/a73f4d84/bin/regionservers.sh
--
diff --git a/bin/regionservers.sh b/bin/regionservers.sh
index 6db11bb..b83c1f3 100755
--- a/bin/regionservers.sh
+++ b/bin/regionservers.sh
@@ -60,8 +60,8 @@ fi
 regionservers=`cat "$HOSTLIST"`
 if [ "$regionservers" = "localhost" ]; then
   HBASE_REGIONSERVER_ARGS="\
--Dhbase.regionserver.port=16201 \
--Dhbase.regionserver.info.port=16301"
+-Dhbase.regionserver.port=16020 \
+-Dhbase.regionserver.info.port=16030"
 
   $"${@// /\\ }" ${HBASE_REGIONSERVER_ARGS} \
 2>&1 | sed "s/^/$regionserver: /" &

http://git-wip-us.apache.org/repos/asf/hbase/blob/a73f4d84/src/main/asciidoc/_chapters/getting_started.adoc
--
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc 
b/src/main/asciidoc/_chapters/getting_started.adoc
index 3a5a772..2229eee 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -392,10 +392,10 @@ Running multiple HRegionServers on the same system can be 
useful for testing in
 The `local-regionservers.sh` command allows you to run multiple RegionServers.
 It works in a similar way to the `local-master-backup.sh` command, in that 
each parameter you provide represents the port offset for an instance.
 Each RegionServer requires two ports, and the default ports are 16020 and 
16030.
-However, the base ports for additional RegionServers are not the default ports since the default ports are used by the HMaster, which is also a RegionServer since HBase version 1.0.0.
-The base ports are 16200 and 16300 instead.
-You can run 99 additional RegionServers that are not a HMaster or backup HMaster, on a server.
-The following command starts four additional RegionServers, running on sequential ports starting at 16202/16302 (base ports 16200/16300 plus 2).
+Since HBase version 1.1.0, the HMaster does not use region server ports, which leaves 10 ports (16020 to 16029 and 16030 to 16039) available for RegionServers.
+To support additional RegionServers, change the base ports in the `local-regionservers.sh` script to appropriate values.
+For example, with base ports of 16200 and 16300, a single server can support 99 additional RegionServers.
+The following command starts four additional RegionServers, running on 
sequential ports starting at 

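Under the new default base ports, the four additional RegionServers described above would land on these ports. A sketch (the offsets 2 through 5 are an assumption based on "four additional RegionServers" in the text; the exact invocation line is truncated in this excerpt):

```shell
# Ports used for offsets 2..5 with the new default base ports 16020/16030.
for offset in 2 3 4 5; do
  echo "offset $offset -> $(expr 16020 + $offset)/$(expr 16030 + $offset)"
done
# offset 2 -> 16022/16032
# offset 3 -> 16023/16033
# offset 4 -> 16024/16034
# offset 5 -> 16025/16035
```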
[15/50] [abbrv] hbase git commit: HBASE-20271 ReplicationSourceWALReader.switched should use the file name instead of the path object directly

2018-03-27 Thread zhangduo
HBASE-20271 ReplicationSourceWALReader.switched should use the file name 
instead of the path object directly

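The motivation can be sketched with plain path strings: after a WAL is moved (for example into the archive directory), a full-path comparison reports a switch even though the stream is still on the same file, whereas comparing only the file names does not. The example paths below are hypothetical, not taken from a real cluster:

```shell
# Hypothetical WAL locations before and after archiving; only the
# directory differs, the WAL file name itself is stable.
current="/hbase/WALs/rs1,16020,1522000000000/rs1%2C16020%2C1522000000000.1522000001000"
archived="/hbase/oldWALs/rs1%2C16020%2C1522000000000.1522000001000"

# Full-path equality (old behaviour): spuriously reports a switch.
[ "$current" = "$archived" ] || echo "paths differ -> old check says: switched"

# File-name comparison (new behaviour): recognises the same WAL.
[ "$(basename "$current")" = "$(basename "$archived")" ] && echo "same WAL -> not switched"
```

The patched method additionally treats a `null` current path as switched, which a string sketch like this cannot show.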

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c44e8868
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c44e8868
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c44e8868

Branch: refs/heads/HBASE-19064
Commit: c44e88686084a60be7d4dd5afeabab22f7cddfd8
Parents: 64ccd2b
Author: zhangduo 
Authored: Sat Mar 24 16:25:20 2018 +0800
Committer: zhangduo 
Committed: Sat Mar 24 21:12:40 2018 +0800

--
 .../replication/regionserver/ReplicationSourceWALReader.java  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c44e8868/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
index 2154856..7ba347f 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
@@ -174,7 +174,8 @@ class ReplicationSourceWALReader extends Thread {
   }
 
   protected static final boolean switched(WALEntryStream entryStream, Path 
path) {
-return !path.equals(entryStream.getCurrentPath());
+Path newPath = entryStream.getCurrentPath();
+return newPath == null || !path.getName().equals(newPath.getName());
   }
 
   protected WALEntryBatch readWALEntries(WALEntryStream entryStream)



[24/50] [abbrv] hbase git commit: HBASE-20260 Remove old content from book

2018-03-27 Thread zhangduo
HBASE-20260 Remove old content from book


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6a5c14b2
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6a5c14b2
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6a5c14b2

Branch: refs/heads/HBASE-19064
Commit: 6a5c14b2278ba972e4f25ff5a4b7e028121f79dd
Parents: 7a1d00c
Author: Mike Drob 
Authored: Thu Mar 22 15:05:30 2018 -0500
Committer: Mike Drob 
Committed: Mon Mar 26 11:29:48 2018 -0500

--
 src/main/asciidoc/_chapters/architecture.adoc   |   7 +-
 src/main/asciidoc/_chapters/backup_restore.adoc |   2 +-
 src/main/asciidoc/_chapters/compression.adoc|   2 +-
 src/main/asciidoc/_chapters/configuration.adoc  |  37 +--
 .../asciidoc/_chapters/getting_started.adoc |  18 -
 src/main/asciidoc/_chapters/ops_mgt.adoc|  39 ++-
 .../asciidoc/_chapters/troubleshooting.adoc | 115 +--
 src/main/asciidoc/_chapters/upgrading.adoc  | 325 +--
 8 files changed, 55 insertions(+), 490 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6a5c14b2/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index 6fb5891..7be5f48 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -1493,11 +1493,14 @@ Alphanumeric Rowkeys::
 Using a Custom Algorithm::
   The RegionSplitter tool is provided with HBase, and uses a _SplitAlgorithm_ 
to determine split points for you.
   As parameters, you give it the algorithm, desired number of regions, and 
column families.
-  It includes two split algorithms.
+  It includes three split algorithms.
   The first is the
   
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.HexStringSplit.html[HexStringSplit]`
   algorithm, which assumes the row keys are hexadecimal strings.
-  The second,
+  The second is the
+  
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.DecimalStringSplit.html[DecimalStringSplit]`
+  algorithm, which assumes the row keys are decimal strings in the range 
 to .
+  The third,
   
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.UniformSplit.html[UniformSplit]`,
   assumes the row keys are random byte arrays.
   You will probably need to develop your own

http://git-wip-us.apache.org/repos/asf/hbase/blob/6a5c14b2/src/main/asciidoc/_chapters/backup_restore.adoc
--
diff --git a/src/main/asciidoc/_chapters/backup_restore.adoc 
b/src/main/asciidoc/_chapters/backup_restore.adoc
index 4e6c506..b02af41 100644
--- a/src/main/asciidoc/_chapters/backup_restore.adoc
+++ b/src/main/asciidoc/_chapters/backup_restore.adoc
@@ -19,7 +19,7 @@
  */
 
 
-[[casestudies]]
+[[backuprestore]]
 = Backup and Restore
 :doctype: book
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/6a5c14b2/src/main/asciidoc/_chapters/compression.adoc
--
diff --git a/src/main/asciidoc/_chapters/compression.adoc 
b/src/main/asciidoc/_chapters/compression.adoc
index 23ceeaf..8fc1c55 100644
--- a/src/main/asciidoc/_chapters/compression.adoc
+++ b/src/main/asciidoc/_chapters/compression.adoc
@@ -441,7 +441,7 @@ $ hbase org.apache.hadoop.hbase.util.LoadTestTool -write 
1:10:100 -num_keys 1000
 
 
 [[data.block.encoding.enable]]
-== Enable Data Block Encoding
+=== Enable Data Block Encoding
 
 Codecs are built into HBase so no extra configuration is needed.
 Codecs are enabled on a table by setting the `DATA_BLOCK_ENCODING` property.

http://git-wip-us.apache.org/repos/asf/hbase/blob/6a5c14b2/src/main/asciidoc/_chapters/configuration.adoc
--
diff --git a/src/main/asciidoc/_chapters/configuration.adoc 
b/src/main/asciidoc/_chapters/configuration.adoc
index 8005b50..1f75855 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -131,7 +131,7 @@ HBase recommends downstream users rely on JDK releases that 
are marked as Long T
 
 |===
 
-NOTE: HBase will neither build nor compile with Java 6.
+NOTE: HBase will neither build nor run with Java 6.
 
 NOTE: You must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ 
provides a handy mechanism to do this.
 
@@ -141,11 +141,7 @@ ssh::
   HBase uses the Secure Shell (ssh) command and utilities extensively to 
communicate between cluster nodes. Each server in the cluster must be running 

[30/50] [abbrv] hbase git commit: HBASE-20288 upgrade doc should call out removal of DLR.

2018-03-27 Thread zhangduo
HBASE-20288 upgrade doc should call out removal of DLR.

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5b2d2de8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5b2d2de8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5b2d2de8

Branch: refs/heads/HBASE-19064
Commit: 5b2d2de8f9b1d8a7523ca9f559f6ec726ce2afd3
Parents: 4cedd99
Author: Sean Busbey 
Authored: Mon Mar 26 09:35:47 2018 -0500
Committer: Sean Busbey 
Committed: Mon Mar 26 16:45:32 2018 -0500

--
 src/main/asciidoc/_chapters/upgrading.adoc | 12 
 1 file changed, 12 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5b2d2de8/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index f680582..4f0c445 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -344,6 +344,9 @@ The following configuration settings are no longer 
applicable or available. For
 * hbase.bucketcache.ioengine no longer supports the 'heap' value.
 * hbase.bulkload.staging.dir
 * hbase.balancer.tablesOnMaster wasn't removed, strictly speaking, but its 
meaning has fundamentally changed and users should not set it. See the section 
[[upgrade2.0.regions.on.master]] for details.
+* hbase.master.distributed.log.replay; see the section [[upgrade2.0.distributed.log.replay]] for details.
+* hbase.regionserver.disallow.writes.when.recovering; see the section [[upgrade2.0.distributed.log.replay]] for details.
+* hbase.regionserver.wal.logreplay.batch.size; see the section [[upgrade2.0.distributed.log.replay]] for details.
 
 [[upgrade2.0.changed.defaults]]
 .Configuration settings with different defaults in HBase 2.0+
@@ -372,6 +375,11 @@ A brief summary of related changes:
 * hbase.balancer.tablesOnMaster.systemTablesOnly is boolean to keep user 
tables off master. default false
 * those wishing to replicate old list-of-servers config should deploy a 
stand-alone RegionServer process and then rely on Region Server Groups
 
+[[upgrade2.0.distributed.log.replay]]
+."Distributed Log Replay" feature broken and removed
+
+The Distributed Log Replay feature was broken and has been removed from HBase 
2.y+. As a consequence all related configs, metrics, RPC fields, and logging 
have also been removed. Note that this feature was found to be unreliable in 
the run up to HBase 1.0, defaulted to being unused, and was effectively removed 
in HBase 1.2.0 when we started ignoring the config that turns it on 
(link:https://issues.apache.org/jira/browse/HBASE-14465[HBASE-14465]). If you 
are currently using the feature, be sure to perform a clean shutdown, ensure 
all DLR work is complete, and disable the feature prior to upgrading.
+
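For operators checking their configuration before upgrading, the relevant property is the one named in the removed-configs list above. A hypothetical `hbase-site.xml` fragment that explicitly disables the feature on a 1.x cluster prior to upgrade (the property name comes from this section; `false` matches the effective default described above):

```xml
<!-- Disable Distributed Log Replay before upgrading to HBase 2.0. -->
<property>
  <name>hbase.master.distributed.log.replay</name>
  <value>false</value>
</property>
```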
 [[upgrade2.0.metrics]]
 .Changed metrics
 
@@ -383,6 +391,10 @@ The following metrics have changed their meaning:
 
 * The metric 'blockCacheEvictionCount' published on a per-region server basis 
no longer includes blocks removed from the cache due to the invalidation of the 
hfiles they are from (e.g. via compaction).
 
+The following metrics have been removed:
+
+* Metrics related to the Distributed Log Replay feature are no longer present. They were previously found in the region server context under the name 'replay'. See the section [[upgrade2.0.distributed.log.replay]] for details.
+
 [[upgrade2.0.zkconfig]]
 .ZooKeeper configs no longer read from zoo.cfg
 



[40/50] [abbrv] hbase git commit: HBASE-19781 Add a new cluster state flag for synchronous replication

2018-03-27 Thread zhangduo
HBASE-19781 Add a new cluster state flag for synchronous replication


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7adf2719
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7adf2719
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7adf2719

Branch: refs/heads/HBASE-19064
Commit: 7adf2719d3137e6fe18a4813c159cac6924b3e86
Parents: 669dbcc
Author: Guanghao Zhang 
Authored: Mon Jan 22 11:44:49 2018 +0800
Committer: zhangduo 
Committed: Tue Mar 27 17:30:56 2018 +0800

--
 .../org/apache/hadoop/hbase/client/Admin.java   |  39 +
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  31 
 .../hadoop/hbase/client/AsyncHBaseAdmin.java|   7 +
 .../hbase/client/ConnectionImplementation.java  |   9 ++
 .../apache/hadoop/hbase/client/HBaseAdmin.java  |  26 +++
 .../hadoop/hbase/client/RawAsyncHBaseAdmin.java |  15 ++
 .../client/ShortCircuitMasterConnection.java|   9 ++
 .../replication/ReplicationPeerConfigUtil.java  |  26 +--
 .../replication/ReplicationPeerDescription.java |  10 +-
 .../hbase/replication/SyncReplicationState.java |  48 ++
 .../hbase/shaded/protobuf/RequestConverter.java |  10 ++
 .../src/main/protobuf/Master.proto  |   4 +
 .../src/main/protobuf/MasterProcedure.proto |   4 +
 .../src/main/protobuf/Replication.proto |  20 +++
 .../replication/ReplicationPeerStorage.java |  18 ++-
 .../hbase/replication/ReplicationUtils.java |   1 +
 .../replication/ZKReplicationPeerStorage.java   |  60 +--
 .../replication/TestReplicationStateBasic.java  |  23 ++-
 .../TestZKReplicationPeerStorage.java   |  12 +-
 .../hbase/coprocessor/MasterObserver.java   |  23 +++
 .../org/apache/hadoop/hbase/master/HMaster.java |  12 ++
 .../hbase/master/MasterCoprocessorHost.java |  21 +++
 .../hadoop/hbase/master/MasterRpcServices.java  |  17 ++
 .../hadoop/hbase/master/MasterServices.java |   9 ++
 .../procedure/PeerProcedureInterface.java   |   2 +-
 .../replication/ReplicationPeerManager.java |  51 +-
 ...ransitPeerSyncReplicationStateProcedure.java | 159 +++
 .../hbase/security/access/AccessController.java |   8 +
 .../replication/TestReplicationAdmin.java   |  62 
 .../hbase/master/MockNoopMasterServices.java|   8 +-
 .../cleaner/TestReplicationHFileCleaner.java|   4 +-
 .../TestReplicationTrackerZKImpl.java   |   6 +-
 .../TestReplicationSourceManager.java   |   3 +-
 .../security/access/TestAccessController.java   |  16 ++
 .../hbase/util/TestHBaseFsckReplication.java|   5 +-
 .../src/main/ruby/hbase/replication_admin.rb|  15 ++
 hbase-shell/src/main/ruby/shell.rb  |   1 +
 .../src/main/ruby/shell/commands/list_peers.rb  |   6 +-
 .../transit_peer_sync_replication_state.rb  |  44 +
 .../test/ruby/hbase/replication_admin_test.rb   |  24 +++
 40 files changed, 815 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7adf2719/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
index b8546fa..167d6f3 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
@@ -52,6 +52,7 @@ import 
org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
 import org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
+import org.apache.hadoop.hbase.replication.SyncReplicationState;
 import org.apache.hadoop.hbase.snapshot.HBaseSnapshotException;
 import org.apache.hadoop.hbase.snapshot.RestoreSnapshotException;
 import org.apache.hadoop.hbase.snapshot.SnapshotCreationException;
@@ -2648,6 +2649,44 @@ public interface Admin extends Abortable, Closeable {
   List<ReplicationPeerDescription> listReplicationPeers(Pattern pattern) throws IOException;
 
   /**
+   * Transit current cluster to a new state in a synchronous replication peer.
+   * @param peerId a short name that identifies the peer
+   * @param state a new state of current cluster
+   * @throws IOException if a remote or network exception occurs
+   */
+  void transitReplicationPeerSyncReplicationState(String peerId, 
SyncReplicationState state)
+  throws IOException;
+
+  /**
+   * Transit current cluster to a new state in a synchronous replication peer, but does not
+   * block and wait for the transition to complete.
+   * 
+   * You can use Future.get(long, TimeUnit) to wait on the 

[02/50] [abbrv] hbase git commit: HBASE-16412 Warnings from asciidoc

2018-03-27 Thread zhangduo
HBASE-16412 Warnings from asciidoc


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0ef41f89
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0ef41f89
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0ef41f89

Branch: refs/heads/HBASE-19064
Commit: 0ef41f895572f0f839782e4c87249e0f2a2af6f1
Parents: 5d1b211
Author: Michael Stack 
Authored: Thu Mar 22 05:40:37 2018 -0700
Committer: Michael Stack 
Committed: Thu Mar 22 12:22:29 2018 -0700

--
 src/main/asciidoc/_chapters/backup_restore.adoc |  52 +++
 src/main/asciidoc/_chapters/developer.adoc  |  12 +++--
 .../resources/images/backup-app-components.png  | Bin 24366 -> 0 bytes
 .../resources/images/backup-cloud-appliance.png | Bin 30114 -> 0 bytes
 .../images/backup-dedicated-cluster.png | Bin 24950 -> 0 bytes
 .../resources/images/backup-intra-cluster.png   | Bin 19348 -> 0 bytes
 .../resources/images/backup-app-components.png  | Bin 0 -> 24366 bytes
 .../resources/images/backup-cloud-appliance.png | Bin 0 -> 30114 bytes
 .../images/backup-dedicated-cluster.png | Bin 0 -> 24950 bytes
 .../resources/images/backup-intra-cluster.png   | Bin 0 -> 19348 bytes
 10 files changed, 28 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0ef41f89/src/main/asciidoc/_chapters/backup_restore.adoc
--
diff --git a/src/main/asciidoc/_chapters/backup_restore.adoc 
b/src/main/asciidoc/_chapters/backup_restore.adoc
index a9dbcf5..4e6c506 100644
--- a/src/main/asciidoc/_chapters/backup_restore.adoc
+++ b/src/main/asciidoc/_chapters/backup_restore.adoc
@@ -710,66 +710,54 @@ image::backup-app-components.png[]
 The following is an outline of the steps and examples of commands that are 
used to backup the data for the _green_ application and
 to recover the data later. All commands are run when logged in as HBase 
superuser.
 
-1. A backup set called _green_set_ is created as an alias for both the 
transactions table and the customer table. The backup set can
+* A backup set called _green_set_ is created as an alias for both the 
transactions table and the customer table. The backup set can
 be used for all operations to avoid typing each table name. The backup set 
name is case-sensitive and should be formed with only
 printable characters and without spaces.
 
-[source]
-
-$ hbase backup set add green_set transactions
-$ hbase backup set add green_set customer
-
+ $ hbase backup set add green_set transactions
+ $ hbase backup set add green_set customer
 
-2. The first backup of green_set data must be a full backup. The following 
command example shows how credentials are passed to Amazon
+* The first backup of green_set data must be a full backup. The following 
command example shows how credentials are passed to Amazon
 S3 and specifies the file system with the s3a: prefix.
 
-[source]
-
-$ ACCESS_KEY=ABCDEFGHIJKLMNOPQRST
-$ SECRET_KEY=123456789abcdefghijklmnopqrstuvwxyzABCD
-$ sudo -u hbase hbase backup create full\
-  s3a://$ACCESS_KEY:SECRET_KEY@prodhbasebackups/backups -s green_set
-
+ $ ACCESS_KEY=ABCDEFGHIJKLMNOPQRST
+ $ SECRET_KEY=123456789abcdefghijklmnopqrstuvwxyzABCD
+ $ sudo -u hbase hbase backup create full\
+   s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups -s green_set
 
-3. Incremental backups should be run according to a schedule that ensures 
essential data recovery in the event of a catastrophe. At
+* Incremental backups should be run according to a schedule that ensures 
essential data recovery in the event of a catastrophe. At
 this retail company, the HBase admin team decides that automated daily backups 
secures the data sufficiently. The team decides that
 they can implement this by modifying an existing Cron job that is defined in 
`/etc/crontab`. Consequently, IT modifies the Cron job
 by adding the following line:
 
-[source]
-
-@daily hbase hbase backup create incremental 
s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups -s green_set
-
+ @daily hbase hbase backup create incremental 
s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups -s green_set
 
-4. A catastrophic IT incident disables the production cluster that the green 
application uses. An HBase system administrator of the
+* A catastrophic IT incident disables the production cluster that the green 
application uses. An HBase system administrator of the
 backup cluster must restore the _green_set_ dataset to the point in time 
closest to the recovery objective.
-
++
 NOTE: If the administrator of the backup HBase cluster has the backup ID with 
relevant details in accessible records, the following
 search with the `hdfs dfs -ls` command and manually scanning the backup 
