hbase git commit: HBASE-17879: Avoid NPE in snapshot.jsp when accessing without any request parameter

2017-04-28 Thread chia7712
Repository: hbase
Updated Branches:
  refs/heads/master 6edb8f821 -> 1848353fd


HBASE-17879: Avoid NPE in snapshot.jsp when accessing without any request parameter

Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1848353f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1848353f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1848353f

Branch: refs/heads/master
Commit: 1848353fd60b2c51282552e9d0ad284be601cca5
Parents: 6edb8f8
Author: Abhishek Kumar 
Authored: Sat Apr 22 18:16:20 2017 +0530
Committer: Chia-Ping Tsai 
Committed: Sat Apr 29 10:51:46 2017 +0800

--
 .../resources/hbase-webapps/master/snapshot.jsp | 20 +++-
 1 file changed, 11 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/1848353f/hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp
--
diff --git a/hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp 
b/hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp
index 75f75fc..ad3ede5 100644
--- a/hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp
+++ b/hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp
@@ -36,14 +36,16 @@
   SnapshotInfo.SnapshotStats stats = null;
   TableName snapshotTable = null;
   boolean tableExists = false;
-  try (Admin admin = master.getConnection().getAdmin()) {
-    for (SnapshotDescription snapshotDesc: admin.listSnapshots()) {
-      if (snapshotName.equals(snapshotDesc.getName())) {
-        snapshot = snapshotDesc;
-        stats = SnapshotInfo.getSnapshotStats(conf, snapshot);
-        snapshotTable = snapshot.getTableName();
-        tableExists = admin.tableExists(snapshotTable);
-        break;
+  if(snapshotName != null) {
+    try (Admin admin = master.getConnection().getAdmin()) {
+      for (SnapshotDescription snapshotDesc: admin.listSnapshots()) {
+        if (snapshotName.equals(snapshotDesc.getName())) {
+          snapshot = snapshotDesc;
+          stats = SnapshotInfo.getSnapshotStats(conf, snapshot);
+          snapshotTable = snapshot.getTableName();
+          tableExists = admin.tableExists(snapshotTable);
+          break;
+        }
       }
     }
   }
@@ -110,7 +112,7 @@
   
   
 
-  Snapshot "<%= snapshotName %>" does not exists
+  Snapshot "<%= snapshotName %>" does not exist
 
   
   Go Back, or wait for the redirect.
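
The whole fix is the null guard shown above. As a standalone illustration, here is a minimal servlet-style sketch of the same pattern; the class and handler names are hypothetical, and the real page also renders a redirect, as the second hunk shows:

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SnapshotParamGuard {
  // Sketch of the pattern applied in snapshot.jsp: getParameter() returns
  // null when the request carries no "snapshot" parameter, so the value must
  // be null-checked before it is used in lookups or comparisons.
  public void handle(HttpServletRequest request, HttpServletResponse response)
      throws IOException {
    String snapshotName = request.getParameter("snapshot");
    if (snapshotName != null) {
      // ... look the snapshot up via Admin.listSnapshots(), as in the patch
    } else {
      // Fall through to the "does not exist" branch instead of throwing NPE.
      response.getWriter().println("Snapshot \"" + snapshotName + "\" does not exist");
    }
  }
}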



[2/2] hbase git commit: HBASE-17875 Document why objects over 10MB are not well-suited for hbase.

2017-04-28 Thread busbey
HBASE-17875 Document why objects over 10MB are not well-suited for hbase.

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6edb8f82
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6edb8f82
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6edb8f82

Branch: refs/heads/master
Commit: 6edb8f82178f1da4b29dc4ee1a1d8ea8dd2484a4
Parents: ba12cdf
Author: Jingcheng Du 
Authored: Fri Apr 28 13:32:30 2017 -0500
Committer: Sean Busbey 
Committed: Fri Apr 28 13:32:30 2017 -0500

--
 src/main/asciidoc/_chapters/faq.adoc   | 3 +++
 src/main/asciidoc/_chapters/hbase_mob.adoc | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6edb8f82/src/main/asciidoc/_chapters/faq.adoc
--
diff --git a/src/main/asciidoc/_chapters/faq.adoc 
b/src/main/asciidoc/_chapters/faq.adoc
index 7bffe0e..9034d4b 100644
--- a/src/main/asciidoc/_chapters/faq.adoc
+++ b/src/main/asciidoc/_chapters/faq.adoc
@@ -44,6 +44,9 @@ How can I find examples of NoSQL/HBase?::
 What is the history of HBase?::
   See <>.
 
+Why are the cells above 10MB not recommended for HBase?::
+  Large cells don't fit well into HBase's approach to buffering data. First, 
the large cells bypass the MemStoreLAB when they are written. Then, they cannot 
be cached in the L2 block cache during read operations. Instead, HBase has to 
allocate on-heap memory for them each time. This can have a significant impact 
on the garbage collector within the RegionServer process.
+
 === Upgrading
 How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+?::
   In HBase 0.96, the project moved to a modular structure. Adjust your 
project's dependencies to rely upon the `hbase-client` module or another module 
as appropriate, rather than a single JAR. You can model your Maven dependency 
after one of the following, depending on your targeted version of HBase. See 
Section 3.5, “Upgrading from 0.94.x to 0.96.x” or Section 3.3, “Upgrading 
from 0.96.x to 0.98.x” for more information.
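
To tie the new 10MB FAQ entry above to client code: a minimal, hedged sketch of a size guard before a write, using the standard HBase client Put API. The table, family, and qualifier names are hypothetical, and the threshold simply encodes the FAQ's guidance:

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class LargeCellGuard {
  // Cells above ~10MB bypass the MemStoreLAB and cannot sit in the L2 block
  // cache, so the FAQ entry above recommends staying below this threshold
  // (or using MOB for the 100KB-10MB range).
  private static final int MAX_CELL_BYTES = 10 * 1024 * 1024;

  public static Put buildPut(byte[] row, byte[] value) {
    if (value.length > MAX_CELL_BYTES) {
      // Store a pointer (e.g. an HDFS path) instead of the blob itself.
      throw new IllegalArgumentException(
          "value of " + value.length + " bytes exceeds the recommended 10MB cell size");
    }
    return new Put(row).addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), value);
  }
}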

http://git-wip-us.apache.org/repos/asf/hbase/blob/6edb8f82/src/main/asciidoc/_chapters/hbase_mob.adoc
--
diff --git a/src/main/asciidoc/_chapters/hbase_mob.adoc 
b/src/main/asciidoc/_chapters/hbase_mob.adoc
index bdf077a..5da0343 100644
--- a/src/main/asciidoc/_chapters/hbase_mob.adoc
+++ b/src/main/asciidoc/_chapters/hbase_mob.adoc
@@ -36,7 +36,7 @@ read and write paths are optimized for values smaller than 
100KB in size. When
 HBase deals with large numbers of objects over this threshold, referred to here
 as medium objects, or MOBs, performance is degraded due to write amplification
 caused by splits and compactions. When using MOBs, ideally your objects will 
be between
-100KB and 10MB. HBase ***FIX_VERSION_NUMBER*** adds support
+100KB and 10MB (see the <>). HBase ***FIX_VERSION_NUMBER*** adds support
 for better managing large numbers of MOBs while maintaining performance,
 consistency, and low operational overhead. MOB support is provided by the work
 done in link:https://issues.apache.org/jira/browse/HBASE-11339[HBASE-11339]. To
@@ -155,7 +155,7 @@ family as the second argument. and take a compaction type 
as the third argument.
 
 
hbase> compact 't1', 'c1', 'MOB'
-hbase> major_compact_mob 't1', 'c1', 'MOB'
+hbase> major_compact 't1', 'c1', 'MOB'
 
 
 These commands are also available via `Admin.compact` and



[1/2] hbase git commit: HBASE-17975 TokenUtil should throw remote exception rather than squash it.

2017-04-28 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/master 73d80bb41 -> 6edb8f821


HBASE-17975 TokenUtil should throw remote exception rather than squash it.

Signed-off-by: Josh Elser 
Signed-off-by: Ted Yu 
Signed-off-by: Umesh Agashe 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ba12cdf1
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ba12cdf1
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ba12cdf1

Branch: refs/heads/master
Commit: ba12cdf1388a467a0bca5074c9b8b5c022962131
Parents: 73d80bb
Author: Sean Busbey 
Authored: Fri Apr 28 08:21:00 2017 -0500
Committer: Sean Busbey 
Committed: Fri Apr 28 13:19:33 2017 -0500

--
 .../java/org/apache/hadoop/hbase/security/token/TokenUtil.java  | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ba12cdf1/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java
index 6127d5b..8d0a46f 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java
@@ -56,6 +56,7 @@ public class TokenUtil {
   /**
* Obtain and return an authentication token for the current user.
* @param conn The HBase cluster connection
+   * @throws IOException if a remote error or serialization problem occurs.
* @return the authentication token instance
*/
   public static Token obtainToken(
@@ -71,14 +72,12 @@ public class TokenUtil {
 
   return toToken(response.getToken());
 } catch (ServiceException se) {
-  ProtobufUtil.handleRemoteException(se);
+  throw ProtobufUtil.handleRemoteException(se);
 } finally {
   if (meta != null) {
 meta.close();
   }
 }
-// dummy return for ServiceException block
-return null;
   }
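
The pattern, isolated from the HBase codebase as a hedged sketch: the exception type and unwrap helper below are stand-ins for ServiceException and ProtobufUtil.handleRemoteException. The point is to convert the wrapped remote error to an IOException and rethrow it, rather than swallowing it behind a dummy null return:

import java.io.IOException;

public class RethrowSketch {
  // Stand-in for the protobuf ServiceException wrapper.
  static class ServiceException extends Exception {
    ServiceException(Throwable cause) { super(cause); }
  }

  // Stand-in for ProtobufUtil.handleRemoteException: unwrap to IOException.
  static IOException unwrap(ServiceException se) {
    Throwable cause = se.getCause();
    return cause instanceof IOException ? (IOException) cause : new IOException(se);
  }

  // Before the patch: unwrap(se) was called but its result discarded, and a
  // "dummy return null" hid the failure. After: the IOException propagates.
  public static String obtain() throws IOException {
    try {
      return doRpc();
    } catch (ServiceException se) {
      throw unwrap(se);   // surface the remote error to the caller
    }
  }

  private static String doRpc() throws ServiceException {
    throw new ServiceException(new IOException("remote error"));
  }
}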
 
 



[1/2] hbase git commit: HBASE-8486 remove references to -ROOT- table from table descriptors where allowed by compatibility rules.

2017-04-28 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/master fc68f23a4 -> 73d80bb41


HBASE-8486 remove references to -ROOT- table from table descriptors where allowed by compatibility rules.

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/923508c9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/923508c9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/923508c9

Branch: refs/heads/master
Commit: 923508c9de065b99b4721acfb582e5a476f48acd
Parents: fc68f23
Author: Chia-Ping Tsai 
Authored: Fri Apr 28 12:35:47 2017 -0500
Committer: Sean Busbey 
Committed: Fri Apr 28 12:35:47 2017 -0500

--
 .../apache/hadoop/hbase/HTableDescriptor.java   | 13 ++--
 .../hadoop/hbase/client/TableDescriptor.java| 11 +--
 .../hbase/client/TableDescriptorBuilder.java| 80 +---
 3 files changed, 43 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/923508c9/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
index e3cf2ec..bf58d73 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
@@ -37,8 +37,8 @@ import org.apache.hadoop.hbase.util.Bytes;
 
 /**
  * HTableDescriptor contains the details about an HBase table  such as the 
descriptors of
- * all the column families, is the table a catalog table,  -ROOT- 
 or
- *  hbase:meta , if the table is read only, the maximum size of 
the memstore,
+ * all the column families, is the table a catalog table,  hbase:meta 
,
+ * if the table is read only, the maximum size of the memstore,
  * when the region split should occur, coprocessors associated with it etc...
  * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0.
  * use {@link TableDescriptorBuilder} to build {@link 
HTableDescriptor}.
@@ -54,7 +54,7 @@ public class HTableDescriptor implements TableDescriptor, Comparable {
   /**
-   * Check if the descriptor represents a  -ROOT-  region.
+   * This is vestigial API. It will be removed in 3.0.
    *
-   * @return true if this is a  -ROOT-  region
+   * @return always return the false
    */
-  @Override
   public boolean isRootRegion() {
-    return delegatee.isRootRegion();
+    return false;
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/923508c9/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptor.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptor.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptor.java
index 58a18ec..6f7e20f 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptor.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptor.java
@@ -29,8 +29,8 @@ import org.apache.hadoop.hbase.util.Bytes;
 
 /**
  * TableDescriptor contains the details about an HBase table such as the 
descriptors of
- * all the column families, is the table a catalog table,  -ROOT- 
 or
- *  hbase:meta , if the table is read only, the maximum size of 
the memstore,
+ * all the column families, is the table a catalog table,  hbase:meta 
,
+ * if the table is read only, the maximum size of the memstore,
  * when the region split should occur, coprocessors associated with it etc...
  */
 @InterfaceAudience.Public
@@ -246,11 +246,4 @@ public interface TableDescriptor {
*/
   boolean isReadOnly();
 
-  /**
-   * Check if the descriptor represents a  -ROOT-  region.
-   *
-   * @return true if this is a  -ROOT-  region
-   */
-  boolean isRootRegion();
-
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/923508c9/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java
index a372ced..6c0fa65 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java
@@ -52,92 +52,96 @@ public class TableDescriptorBuilder {
 
   private static final Log LOG = 
LogFactory.getLog(TableDescriptorBuilder.class);
 
+  @InterfaceAudience.Private
   public static final String 

[2/2] hbase git commit: HBASE-17970 Set yarn.app.mapreduce.am.staging-dir when starting MiniMRCluster

2017-04-28 Thread busbey
HBASE-17970 Set yarn.app.mapreduce.am.staging-dir when starting MiniMRCluster

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/73d80bb4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/73d80bb4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/73d80bb4

Branch: refs/heads/master
Commit: 73d80bb416dba867bab2f7c21333d443cf090883
Parents: 923508c
Author: zhangduo 
Authored: Fri Apr 28 13:41:58 2017 +0800
Committer: Sean Busbey 
Committed: Fri Apr 28 12:37:00 2017 -0500

--
 .../org/apache/hadoop/hbase/HBaseTestingUtility.java   |  4 +++-
 .../hadoop/hbase/snapshot/TestExportSnapshot.java  | 13 -
 .../hbase/snapshot/TestExportSnapshotNoCluster.java|  5 +
 .../hadoop/hbase/snapshot/TestMobExportSnapshot.java   |  9 +
 .../hbase/snapshot/TestMobSecureExportSnapshot.java|  3 +--
 .../hbase/snapshot/TestSecureExportSnapshot.java   |  3 +--
 6 files changed, 11 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/73d80bb4/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
index e0edfa3..afc070d 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
@@ -91,11 +91,11 @@ import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.master.RegionStates;
 import org.apache.hadoop.hbase.master.ServerManager;
 import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.ChunkCreator;
 import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
 import org.apache.hadoop.hbase.regionserver.HStore;
 import org.apache.hadoop.hbase.regionserver.InternalScanner;
-import org.apache.hadoop.hbase.regionserver.ChunkCreator;
 import org.apache.hadoop.hbase.regionserver.MemStoreLABImpl;
 import org.apache.hadoop.hbase.regionserver.Region;
 import org.apache.hadoop.hbase.regionserver.RegionScanner;
@@ -723,6 +723,8 @@ public class HBaseTestingUtility extends 
HBaseCommonTestingUtility {
 conf.set("mapreduce.jobtracker.staging.root.dir",
   new Path(root, "mapreduce-jobtracker-staging-root-dir").toString());
 conf.set("mapreduce.job.working.dir", new Path(root, 
"mapred-working-dir").toString());
+conf.set("yarn.app.mapreduce.am.staging-dir",
+  new Path(root, "mapreduce-am-staging-root-dir").toString());
   }
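
For reference, the three staging-related settings now routed under the shared test root read naturally as one helper. A hedged sketch (the helper class and method names are hypothetical; the keys and subdirectory names are the ones set above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class MiniMrStagingConf {
  // Mirrors the HBaseTestingUtility change: point every MR/YARN staging
  // directory under one per-test root before starting MiniMRCluster, so
  // parallel test classes do not collide in shared locations.
  public static void forTestRoot(Configuration conf, Path root) {
    conf.set("mapreduce.jobtracker.staging.root.dir",
        new Path(root, "mapreduce-jobtracker-staging-root-dir").toString());
    conf.set("mapreduce.job.working.dir",
        new Path(root, "mapred-working-dir").toString());
    conf.set("yarn.app.mapreduce.am.staging-dir",
        new Path(root, "mapreduce-am-staging-root-dir").toString());
  }
}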
 
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/73d80bb4/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
index 52412d9..cc055a5 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
@@ -20,8 +20,8 @@ package org.apache.hadoop.hbase.snapshot;
 
 import static org.apache.hadoop.util.ToolRunner.run;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.net.URI;
@@ -43,9 +43,9 @@ import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
-import org.apache.hadoop.hbase.testclassification.LargeTests;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.SnapshotDescription;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotRegionManifest;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.FSUtils;
@@ -96,19 +96,14 @@ public class TestExportSnapshot {
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
 setUpBaseConf(TEST_UTIL.getConfiguration());
-// Setup separate test-data directory for MR cluster and set corresponding 
configurations.
-// Otherwise, different test classes 

[2/4] hbase git commit: HBASE-17955 Various reviewboard improvements to space quota work

2017-04-28 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/70bcf3fe/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
index c70b736..b886f5c 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
@@ -10173,42 +10173,42 @@ public final class RegionServerStatusProtos {
  * A region identifier
  * 
  *
- * optional .hbase.pb.RegionInfo region = 1;
+ * optional .hbase.pb.RegionInfo region_info = 1;
  */
-boolean hasRegion();
+boolean hasRegionInfo();
 /**
  * 
  * A region identifier
  * 
  *
- * optional .hbase.pb.RegionInfo region = 1;
+ * optional .hbase.pb.RegionInfo region_info = 1;
  */
-org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo 
getRegion();
+org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo 
getRegionInfo();
 /**
  * 
  * A region identifier
  * 
  *
- * optional .hbase.pb.RegionInfo region = 1;
+ * optional .hbase.pb.RegionInfo region_info = 1;
  */
-
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfoOrBuilder
 getRegionOrBuilder();
+
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfoOrBuilder
 getRegionInfoOrBuilder();
 
 /**
  * 
  * The size in bytes of the region
  * 
  *
- * optional uint64 size = 2;
+ * optional uint64 region_size = 2;
  */
-boolean hasSize();
+boolean hasRegionSize();
 /**
  * 
  * The size in bytes of the region
  * 
  *
- * optional uint64 size = 2;
+ * optional uint64 region_size = 2;
  */
-long getSize();
+long getRegionSize();
   }
   /**
* Protobuf type {@code hbase.pb.RegionSpaceUse}
@@ -10222,7 +10222,7 @@ public final class RegionServerStatusProtos {
   super(builder);
 }
 private RegionSpaceUse() {
-  size_ = 0L;
+  regionSize_ = 0L;
 }
 
 @java.lang.Override
@@ -10256,19 +10256,19 @@ public final class RegionServerStatusProtos {
 case 10: {
   
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo.Builder
 subBuilder = null;
   if (((bitField0_ & 0x0001) == 0x0001)) {
-subBuilder = region_.toBuilder();
+subBuilder = regionInfo_.toBuilder();
   }
-  region_ = 
input.readMessage(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo.PARSER,
 extensionRegistry);
+  regionInfo_ = 
input.readMessage(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo.PARSER,
 extensionRegistry);
   if (subBuilder != null) {
-subBuilder.mergeFrom(region_);
-region_ = subBuilder.buildPartial();
+subBuilder.mergeFrom(regionInfo_);
+regionInfo_ = subBuilder.buildPartial();
   }
   bitField0_ |= 0x0001;
   break;
 }
 case 16: {
   bitField0_ |= 0x0002;
-  size_ = input.readUInt64();
+  regionSize_ = input.readUInt64();
   break;
 }
   }
@@ -10296,16 +10296,16 @@ public final class RegionServerStatusProtos {
 }
 
 private int bitField0_;
-public static final int REGION_FIELD_NUMBER = 1;
-private 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo 
region_;
+public static final int REGION_INFO_FIELD_NUMBER = 1;
+private 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo 
regionInfo_;
 /**
  * 
  * A region identifier
  * 
  *
- * optional .hbase.pb.RegionInfo region = 1;
+ * optional .hbase.pb.RegionInfo region_info = 1;
  */
-public boolean hasRegion() {
+public boolean hasRegionInfo() {
   return ((bitField0_ & 0x0001) == 0x0001);
 }
 /**
@@ -10313,32 +10313,32 @@ public final class RegionServerStatusProtos {
  * A region identifier
  * 
  *
- * optional .hbase.pb.RegionInfo region = 1;
+ * optional .hbase.pb.RegionInfo region_info = 1;
  */
-public 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo 
getRegion() {
-  return region_ == null ? 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo.getDefaultInstance()
 : region_;
+

[1/4] hbase git commit: HBASE-17955 Various reviewboard improvements to space quota work

2017-04-28 Thread elserj
Repository: hbase
Updated Branches:
  refs/heads/HBASE-16961 cb08814a4 -> 70bcf3fe6


http://git-wip-us.apache.org/repos/asf/hbase/blob/70bcf3fe/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/ActivePolicyEnforcement.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/ActivePolicyEnforcement.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/ActivePolicyEnforcement.java
index a313fa1..c558b26 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/ActivePolicyEnforcement.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/ActivePolicyEnforcement.java
@@ -17,6 +17,7 @@
 package org.apache.hadoop.hbase.quotas;
 
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Map;
 import java.util.Objects;
 
@@ -28,7 +29,12 @@ import 
org.apache.hadoop.hbase.regionserver.RegionServerServices;
 
 /**
 * A class to ease dealing with tables that have and do not have violation policies
- * being enforced in a uniform manner. Immutable.
+ * being enforced. This class is immutable, expect for {@code locallyCachedPolicies}.
+ *
+ * The {@code locallyCachedPolicies} are mutable given the current {@code activePolicies}
+ * and {@code snapshots}. It is expected that when a new instance of this class is
+ * instantiated, we also want to invalidate those previously cached policies (as they
+ * may now be invalidate if we received new quota usage information).
  */
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
@@ -36,12 +42,23 @@ public class ActivePolicyEnforcement {
   private final Map activePolicies;
   private final Map snapshots;
   private final RegionServerServices rss;
+  private final SpaceViolationPolicyEnforcementFactory factory;
+  private final Map locallyCachedPolicies;
 
   public ActivePolicyEnforcement(Map activePolicies,
       Map snapshots, RegionServerServices rss) {
+    this(activePolicies, snapshots, rss, SpaceViolationPolicyEnforcementFactory.getInstance());
+  }
+
+  public ActivePolicyEnforcement(Map activePolicies,
+      Map snapshots, RegionServerServices rss,
+      SpaceViolationPolicyEnforcementFactory factory) {
     this.activePolicies = activePolicies;
     this.snapshots = snapshots;
     this.rss = rss;
+    this.factory = factory;
+    // Mutable!
+    this.locallyCachedPolicies = new HashMap<>();
   }
 
   /**
@@ -65,16 +82,25 @@ public class ActivePolicyEnforcement {
*/
   public SpaceViolationPolicyEnforcement getPolicyEnforcement(TableName tableName) {
     SpaceViolationPolicyEnforcement policy = activePolicies.get(Objects.requireNonNull(tableName));
-    if (null == policy) {
-      synchronized (activePolicies) {
-        // If we've never seen a snapshot, assume no use, and infinite limit
-        SpaceQuotaSnapshot snapshot = snapshots.get(tableName);
-        if (null == snapshot) {
-          snapshot = SpaceQuotaSnapshot.getNoSuchSnapshot();
+    if (policy == null) {
+      synchronized (locallyCachedPolicies) {
+        // When we don't have an policy enforcement for the table, there could be one of two cases:
+        //  1) The table has no quota defined
+        //  2) The table is not in violation of its quota
+        // In both of these cases, we want to make sure that access remains fast and we minimize
+        // object creation. We can accomplish this by locally caching policies instead of creating
+        // a new instance of the policy each time.
+        policy = locallyCachedPolicies.get(tableName);
+        // We have already created/cached the enforcement, use it again. `activePolicies` and
+        // `snapshots` are immutable, thus this policy is valid for the lifetime of `this`.
+        if (policy != null) {
+          return policy;
         }
-        // Create the default policy and cache it
-        return SpaceViolationPolicyEnforcementFactory.getInstance().createWithoutViolation(
-            rss, tableName, snapshot);
+        // Create a PolicyEnforcement for this table and snapshot. The snapshot may be null
+        // which is OK.
+        policy = factory.createWithoutViolation(rss, tableName, snapshots.get(tableName));
+        // Cache the policy we created
+        locallyCachedPolicies.put(tableName, policy);
       }
     }
     return policy;
@@ -87,6 +113,14 @@ public class ActivePolicyEnforcement {
 return Collections.unmodifiableMap(activePolicies);
   }
 
+  /**
+   * Returns an unmodifiable version of the policy enforcements that were cached because they are
+   * not in violation of their quota.
+   */
+  Map 

[3/4] hbase git commit: HBASE-17955 Various reviewboard improvements to space quota work

2017-04-28 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/70bcf3fe/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
index 4577bcf..e8a57e9 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
@@ -4362,7 +4362,7 @@ public final class QuotaProtos {
* optional .hbase.pb.SpaceQuota space = 3;
*/
   private 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
-  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder>
 
+  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder>
   getSpaceFieldBuilder() {
 if (spaceBuilder_ == null) {
   spaceBuilder_ = new 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
@@ -6077,7 +6077,7 @@ public final class QuotaProtos {
* optional .hbase.pb.SpaceQuota quota = 1;
*/
   private 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
-  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder>
 
+  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder>
   getQuotaFieldBuilder() {
 if (quotaBuilder_ == null) {
   quotaBuilder_ = new 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
@@ -6143,13 +6143,13 @@ public final class QuotaProtos {
   org.apache.hadoop.hbase.shaded.com.google.protobuf.MessageOrBuilder {
 
 /**
- * optional .hbase.pb.SpaceViolationPolicy policy = 1;
+ * optional .hbase.pb.SpaceViolationPolicy violation_policy = 
1;
  */
-boolean hasPolicy();
+boolean hasViolationPolicy();
 /**
- * optional .hbase.pb.SpaceViolationPolicy policy = 1;
+ * optional .hbase.pb.SpaceViolationPolicy violation_policy = 
1;
  */
-
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceViolationPolicy
 getPolicy();
+
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceViolationPolicy
 getViolationPolicy();
 
 /**
  * optional bool in_violation = 2;
@@ -6163,7 +6163,7 @@ public final class QuotaProtos {
   /**
* 
* Represents the state of a quota on a table. Either the quota is not in 
violation
-   * or it is in violatino there is a violation policy which should be in 
effect.
+   * or it is in violation there is a violation policy which should be in 
effect.
* 
*
* Protobuf type {@code hbase.pb.SpaceQuotaStatus}
@@ -6177,7 +6177,7 @@ public final class QuotaProtos {
   super(builder);
 }
 private SpaceQuotaStatus() {
-  policy_ = 1;
+  violationPolicy_ = 1;
   inViolation_ = false;
 }
 
@@ -6216,7 +6216,7 @@ public final class QuotaProtos {
 unknownFields.mergeVarintField(1, rawValue);
   } else {
 bitField0_ |= 0x0001;
-policy_ = rawValue;
+violationPolicy_ = rawValue;
   }
   break;
 }
@@ -6250,19 +6250,19 @@ public final class QuotaProtos {
 }
 
 private int bitField0_;
-public static final int POLICY_FIELD_NUMBER = 1;
-private int policy_;
+public static final int VIOLATION_POLICY_FIELD_NUMBER = 1;
+private int violationPolicy_;
 /**
- * optional .hbase.pb.SpaceViolationPolicy policy = 1;
+ * optional .hbase.pb.SpaceViolationPolicy violation_policy = 
1;
  */
-public boolean hasPolicy() {
+public boolean hasViolationPolicy() {
   return ((bitField0_ & 0x0001) == 0x0001);
 }
 /**
- * optional .hbase.pb.SpaceViolationPolicy policy = 1;
+ * optional .hbase.pb.SpaceViolationPolicy violation_policy = 
1;
  */
-public 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceViolationPolicy
 getPolicy() 

[4/4] hbase git commit: HBASE-17955 Various reviewboard improvements to space quota work

2017-04-28 Thread elserj
HBASE-17955 Various reviewboard improvements to space quota work

Most notable change is to cache SpaceViolationPolicyEnforcement objects
in the write path. When a table has no quota or there is no SpaceQuotaSnapshot
for that table (yet), we want to avoid creating lots of
SpaceViolationPolicyEnforcement instances, so we cache one instance
instead. This will help reduce GC pressure.
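
A hedged sketch of that caching pattern with the quota types reduced to placeholders: consult the immutable map of active policies first, then fall back to a lazily built, locally cached default enforcement so the write path does not allocate a new object per call:

import java.util.HashMap;
import java.util.Map;

public class PolicyCacheSketch<K, V> {
  public interface Factory<K, V> { V createWithoutViolation(K key); }

  private final Map<K, V> activePolicies;                  // immutable snapshot
  private final Map<K, V> locallyCached = new HashMap<>(); // mutable cache
  private final Factory<K, V> factory;

  public PolicyCacheSketch(Map<K, V> activePolicies, Factory<K, V> factory) {
    this.activePolicies = activePolicies;
    this.factory = factory;
  }

  // Mirrors ActivePolicyEnforcement.getPolicyEnforcement(): tables with no
  // quota (or no snapshot yet) share one cached default enforcement instance.
  public V get(K key) {
    V policy = activePolicies.get(key);
    if (policy == null) {
      synchronized (locallyCached) {
        policy = locallyCached.get(key);
        if (policy == null) {
          policy = factory.createWithoutViolation(key);
          locallyCached.put(key, policy);
        }
      }
    }
    return policy;
  }
}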


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/70bcf3fe
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/70bcf3fe
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/70bcf3fe

Branch: refs/heads/HBASE-16961
Commit: 70bcf3fe6890582e00f9ad0ec7b6b80ebfacf05f
Parents: cb08814
Author: Josh Elser 
Authored: Tue Apr 18 16:43:40 2017 -0400
Committer: Josh Elser 
Committed: Fri Apr 28 13:27:19 2017 -0400

--
 .../hbase/quotas/QuotaSettingsFactory.java  |  10 +-
 .../hadoop/hbase/quotas/QuotaTableUtil.java |   7 +-
 .../hadoop/hbase/quotas/SpaceLimitSettings.java |  26 +-
 .../hadoop/hbase/quotas/SpaceQuotaSnapshot.java |  34 +-
 .../hbase/quotas/SpaceViolationPolicy.java  |   5 +-
 .../hbase/master/MetricsMasterQuotaSource.java  |  13 +-
 .../MetricsRegionServerQuotaSource.java |  10 +-
 .../MetricsMasterQuotaSourceFactoryImpl.java|   2 +-
 .../master/MetricsMasterQuotaSourceImpl.java|  10 +-
 .../shaded/protobuf/generated/AdminProtos.java  |   8 +-
 .../shaded/protobuf/generated/MasterProtos.java |  10 +-
 .../shaded/protobuf/generated/QuotaProtos.java  | 637 ++-
 .../generated/RegionServerStatusProtos.java | 340 +-
 .../src/main/protobuf/Quota.proto   |   8 +-
 .../src/main/protobuf/RegionServerStatus.proto  |   4 +-
 .../hbase/protobuf/generated/QuotaProtos.java   | 463 +++---
 hbase-protocol/src/main/protobuf/Quota.proto|   8 +-
 .../org/apache/hadoop/hbase/master/HMaster.java |   4 +-
 .../hadoop/hbase/master/MasterRpcServices.java  |   9 +-
 .../hadoop/hbase/master/MetricsMaster.java  |  13 +-
 .../hbase/master/MetricsMasterWrapperImpl.java  |   4 +-
 .../hbase/quotas/ActivePolicyEnforcement.java   |  54 +-
 .../quotas/FileSystemUtilizationChore.java  |   4 +-
 .../hadoop/hbase/quotas/MasterQuotaManager.java |   8 +-
 .../hbase/quotas/MasterSpaceQuotaObserver.java  |   4 +-
 .../quotas/NamespaceQuotaSnapshotStore.java |   2 +-
 .../hadoop/hbase/quotas/QuotaObserverChore.java |  62 +-
 .../quotas/RegionServerSpaceQuotaManager.java   |  16 +-
 .../hbase/quotas/SpaceLimitingException.java|   6 +-
 .../hbase/quotas/SpaceQuotaRefresherChore.java  |   2 +-
 .../SpaceViolationPolicyEnforcementFactory.java |  20 +-
 .../hbase/quotas/TableQuotaSnapshotStore.java   |   2 +-
 .../AbstractViolationPolicyEnforcement.java |  45 +-
 ...LoadVerifyingViolationPolicyEnforcement.java |  50 --
 .../DefaultViolationPolicyEnforcement.java  |  90 +++
 .../DisableTableViolationPolicyEnforcement.java |   2 +-
 ...ssingSnapshotViolationPolicyEnforcement.java |  63 ++
 .../NoInsertsViolationPolicyEnforcement.java|   2 +-
 .../NoWritesViolationPolicyEnforcement.java |   2 +-
 .../hbase/regionserver/CompactSplitThread.java  |   2 +-
 .../hbase/regionserver/HRegionServer.java   |   4 +-
 .../hbase/regionserver/RSRpcServices.java   |   9 +-
 .../resources/hbase-webapps/master/table.jsp|   8 +-
 .../hbase/quotas/SpaceQuotaHelperForTests.java  |  66 +-
 .../quotas/TestActivePolicyEnforcement.java |  62 +-
 .../quotas/TestMasterSpaceQuotaObserver.java|  28 +-
 .../TestQuotaObserverChoreRegionReports.java|   6 +-
 .../TestQuotaObserverChoreWithMiniCluster.java  |  31 +-
 .../hbase/quotas/TestQuotaStatusRPCs.java   |  15 +-
 .../TestRegionServerSpaceQuotaManager.java  |   4 +-
 .../hadoop/hbase/quotas/TestSpaceQuotas.java|  30 +-
 .../TestTableSpaceQuotaViolationNotifier.java   |   8 +-
 ...kLoadCheckingViolationPolicyEnforcement.java |   2 +-
 .../TestRegionServerRegionSpaceUseReport.java   |   4 +-
 54 files changed, 1289 insertions(+), 1049 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/70bcf3fe/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
index 184277d..a99235f 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
@@ -127,11 +127,11 @@ public class QuotaSettingsFactory {
   }
 
   static QuotaSettings fromSpace(TableName 

[2/2] hbase git commit: HBASE-17817 add table name to output (if available) when removing coprocessors

2017-04-28 Thread busbey
HBASE-17817 add table name to output (if available) when removing coprocessors

Amending-Author: Sean Busbey 
Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e62d7a6d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e62d7a6d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e62d7a6d

Branch: refs/heads/branch-1.1
Commit: e62d7a6d26ffaba8f30cc7e858df7f3eb10e0b76
Parents: 23a3e75
Author: Steen Manniche 
Authored: Tue Apr 11 17:48:50 2017 +0200
Committer: Sean Busbey 
Committed: Fri Apr 28 12:23:27 2017 -0500

--
 .../apache/hadoop/hbase/coprocessor/CoprocessorHost.java | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e62d7a6d/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
index 30051d1..8271626 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
@@ -559,8 +559,15 @@ public abstract class CoprocessorHost {
   // server is configured to abort.
   abortServer(env, e);
 } else {
-  LOG.error("Removing coprocessor '" + env.toString() + "' from " +
-  "environment because it threw:  " + e,e);
+  // If available, pull a table name out of the environment
+  if(env instanceof RegionCoprocessorEnvironment) {
+String tableName = 
((RegionCoprocessorEnvironment)env).getRegionInfo().getTable().getNameAsString();
+LOG.error("Removing coprocessor '" + env.toString() + "' from table 
'"+ tableName + "'", e);
+  } else {
+LOG.error("Removing coprocessor '" + env.toString() + "' from " +
+"environment",e);
+  }
+
   coprocessors.remove(env);
   try {
 shutdown(env);
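
The same idea in isolation, as a hedged sketch with placeholder interfaces standing in for CoprocessorEnvironment and RegionCoprocessorEnvironment: an instanceof check pulls the table name into the error message when the environment happens to be region-scoped:

public class CoprocessorLogSketch {
  interface Env { }
  interface RegionEnv extends Env { String tableName(); }

  // Mirrors the CoprocessorHost change: enrich the removal log with extra
  // context (the table) when the environment type makes it available.
  static String removalMessage(Env env) {
    if (env instanceof RegionEnv) {
      return "Removing coprocessor '" + env + "' from table '"
          + ((RegionEnv) env).tableName() + "'";
    }
    return "Removing coprocessor '" + env + "' from environment";
  }
}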



[1/2] hbase git commit: HBASE-17514 emit a warning if thrift1 proxy user is configured but hbase.regionserver.thrift.http is not

2017-04-28 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/branch-1.1 5272d68ff -> e62d7a6d2


HBASE-17514 emit a warning if thrift1 proxy user is configured but hbase.regionserver.thrift.http is not

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/23a3e755
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/23a3e755
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/23a3e755

Branch: refs/heads/branch-1.1
Commit: 23a3e755f676f5f11fb5eae935c62ba389487a89
Parents: 5272d68
Author: lv zehui 
Authored: Sat Apr 22 21:20:00 2017 +0800
Committer: Sean Busbey 
Committed: Fri Apr 28 12:22:31 2017 -0500

--
 .../java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/23a3e755/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
index e530cc4..710d608 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
@@ -313,6 +313,11 @@ public class ThriftServerRunner implements Runnable {
 this.realUser = userProvider.getCurrent().getUGI();
 qop = conf.get(THRIFT_QOP_KEY);
 doAsEnabled = conf.getBoolean(THRIFT_SUPPORT_PROXYUSER, false);
+    if (doAsEnabled) {
+      if (!conf.getBoolean(USE_HTTP_CONF_KEY, false)) {
+        LOG.warn("Fail to enable the doAs feature. hbase.regionserver.thrift.http is not configured ");
+      }
+    }
 if (qop != null) {
   if (!qop.equals("auth") && !qop.equals("auth-int")
   && !qop.equals("auth-conf")) {



hbase git commit: HBASE-17817 add table name to output (if available) when removing coprocessors

2017-04-28 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 92eefa388 -> 236a8d06a


HBASE-17817 add table name to output (if available) when removing coprocessors

Amending-Author: Sean Busbey 
Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/236a8d06
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/236a8d06
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/236a8d06

Branch: refs/heads/branch-1.2
Commit: 236a8d06a3d4fcbba847be1cadd4d795a618fd97
Parents: 92eefa3
Author: Steen Manniche 
Authored: Tue Apr 11 17:48:50 2017 +0200
Committer: Sean Busbey 
Committed: Fri Apr 28 12:16:48 2017 -0500

--
 .../apache/hadoop/hbase/coprocessor/CoprocessorHost.java | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/236a8d06/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
index 30051d1..8271626 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
@@ -559,8 +559,15 @@ public abstract class CoprocessorHost {
   // server is configured to abort.
   abortServer(env, e);
 } else {
-  LOG.error("Removing coprocessor '" + env.toString() + "' from " +
-  "environment because it threw:  " + e,e);
+  // If available, pull a table name out of the environment
+  if(env instanceof RegionCoprocessorEnvironment) {
+String tableName = 
((RegionCoprocessorEnvironment)env).getRegionInfo().getTable().getNameAsString();
+LOG.error("Removing coprocessor '" + env.toString() + "' from table 
'"+ tableName + "'", e);
+  } else {
+LOG.error("Removing coprocessor '" + env.toString() + "' from " +
+"environment",e);
+  }
+
   coprocessors.remove(env);
   try {
 shutdown(env);



hbase git commit: HBASE-17817 add table name to output (if available) when removing coprocessors

2017-04-28 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 6b04084ab -> e4c8a858b


HBASE-17817 add table name to output (if available) when removing coprocessors

Amending-Author: Sean Busbey 
Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e4c8a858
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e4c8a858
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e4c8a858

Branch: refs/heads/branch-1.3
Commit: e4c8a858b1957ab1123d0daf5a21507c7737f519
Parents: 6b04084
Author: Steen Manniche 
Authored: Tue Apr 11 17:48:50 2017 +0200
Committer: Sean Busbey 
Committed: Fri Apr 28 11:54:46 2017 -0500

--
 .../apache/hadoop/hbase/coprocessor/CoprocessorHost.java | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e4c8a858/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
index 30051d1..8271626 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
@@ -559,8 +559,15 @@ public abstract class CoprocessorHost {
   // server is configured to abort.
   abortServer(env, e);
 } else {
-  LOG.error("Removing coprocessor '" + env.toString() + "' from " +
-  "environment because it threw:  " + e,e);
+  // If available, pull a table name out of the environment
+  if(env instanceof RegionCoprocessorEnvironment) {
+String tableName = 
((RegionCoprocessorEnvironment)env).getRegionInfo().getTable().getNameAsString();
+LOG.error("Removing coprocessor '" + env.toString() + "' from table 
'"+ tableName + "'", e);
+  } else {
+LOG.error("Removing coprocessor '" + env.toString() + "' from " +
+"environment",e);
+  }
+
   coprocessors.remove(env);
   try {
 shutdown(env);



[3/3] hbase git commit: HBASE-17962 Improve documentation on Rest interface

2017-04-28 Thread busbey
HBASE-17962 Improve documentation on Rest interface

(Excluded update to the ref guide, since it'll be copied from master branch 
prior to 1.4 release.)

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/fa4f6389
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/fa4f6389
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/fa4f6389

Branch: refs/heads/branch-1
Commit: fa4f6389426c55abf8a1f0d1791deacdc181748c
Parents: 9a71bac
Author: Niels Basjes 
Authored: Wed Apr 26 11:21:39 2017 +0200
Committer: Sean Busbey 
Committed: Fri Apr 28 11:43:55 2017 -0500

--
 .../main/java/org/apache/hadoop/hbase/rest/RowResource.java  | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/fa4f6389/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
--
diff --git 
a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java 
b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
index 4d50c54..d93fd39 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
@@ -139,7 +139,13 @@ public class RowResource extends ResourceBase {
 if (!rowspec.hasColumns() || rowspec.getColumns().length > 1) {
   servlet.getMetrics().incrementFailedGetRequests(1);
   return Response.status(Response.Status.BAD_REQUEST).type(MIMETYPE_TEXT)
-  .entity("Bad request: Either 0 or more than 1 columns specified." + 
CRLF).build();
+  .entity("Bad request: Default 'GET' method only works if there is 
exactly 1 column " +
+  "in the row. Using the 'Accept' header with one of these 
formats lets you " +
+  "retrieve the entire row if it has multiple columns: " +
+  // Same as the @Produces list for the get method.
+  MIMETYPE_XML + ", " + MIMETYPE_JSON + ", " +
+  MIMETYPE_PROTOBUF + ", " + MIMETYPE_PROTOBUF_IETF +
+  CRLF).build();
 }
 MultivaluedMap params = uriInfo.getQueryParameters();
 try {



[2/3] hbase git commit: HBASE-17835 Spelling mistakes in the Java source

2017-04-28 Thread busbey
HBASE-17835 Spelling mistakes in the Java source

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9a71bacd
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9a71bacd
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9a71bacd

Branch: refs/heads/branch-1
Commit: 9a71bacdf013ba6a9406c58aa43193f029df2d66
Parents: 0f5932b
Author: QilinCao 
Authored: Thu Apr 27 09:42:12 2017 +0800
Committer: Sean Busbey 
Committed: Fri Apr 28 11:40:47 2017 -0500

--
 .../src/main/java/org/apache/hadoop/hbase/client/Admin.java  | 2 +-
 .../org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java | 2 +-
 .../org/apache/hadoop/hbase/master/TestClockSkewDetection.java   | 4 ++--
 .../org/apache/hadoop/hbase/wal/WALPerformanceEvaluation.java| 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9a71bacd/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
index 5810b2b..82df3f4 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
@@ -1190,7 +1190,7 @@ public interface Admin extends Abortable, Closeable {
* running - returns false finished - returns 
true
* finished with error - throws the exception that caused the snapshot 
to fail  The
* cluster only knows about the most recent snapshot. Therefore, if another 
snapshot has been
-   * run/started since the snapshot your are checking, you will recieve an 
{@link
+   * run/started since the snapshot you are checking, you will receive an 
{@link
* org.apache.hadoop.hbase.snapshot.UnknownSnapshotException}.
*
* @param snapshot description of the snapshot to check

http://git-wip-us.apache.org/repos/asf/hbase/blob/9a71bacd/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
index e21c880..17a6e5c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
@@ -127,7 +127,7 @@ public class ZKProcedureMemberRpcs implements 
ProcedureMemberRpcs {
* @param path full znode path that cause the notification
*/
   private void receivedReachedGlobalBarrier(String path) {
-LOG.debug("Recieved reached global barrier:" + path);
+LOG.debug("Received reached global barrier:" + path);
 String procName = ZKUtil.getNodeName(path);
 this.member.receivedReachedGlobalBarrier(procName);
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/9a71bacd/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
index 00f3dc2..856b84c 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
@@ -129,7 +129,7 @@ public class TestClockSkewDetection {
   fail("HMaster should have thrown a ClockOutOfSyncException but didn't.");
 } catch(ClockOutOfSyncException e) {
   //we want an exception
-  LOG.info("Recieved expected exception: "+e);
+  LOG.info("Received expected exception: "+e);
 }
 
 try {
@@ -145,7 +145,7 @@ public class TestClockSkewDetection {
   fail("HMaster should have thrown a ClockOutOfSyncException but didn't.");
 } catch (ClockOutOfSyncException e) {
   // we want an exception
-  LOG.info("Recieved expected exception: " + e);
+  LOG.info("Received expected exception: " + e);
 }
 
 // make sure values above warning threshold but below max threshold don't 
kill

http://git-wip-us.apache.org/repos/asf/hbase/blob/9a71bacd/hbase-server/src/test/java/org/apache/hadoop/hbase/wal/WALPerformanceEvaluation.java
--
diff --git 

[1/3] hbase git commit: HBASE-17817 add table name to output (if available) when removing coprocessors

2017-04-28 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/branch-1 3765e7bed -> fa4f63894


HBASE-17817 add table name to output (if available) when removing coprocessors

Amending-Author: Sean Busbey 
Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0f5932b0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0f5932b0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0f5932b0

Branch: refs/heads/branch-1
Commit: 0f5932b059a86c92c1d61e6a1dcd2c2fe9994b7a
Parents: 3765e7b
Author: Steen Manniche 
Authored: Tue Apr 11 17:48:50 2017 +0200
Committer: Sean Busbey 
Committed: Fri Apr 28 11:39:54 2017 -0500

--
 .../apache/hadoop/hbase/coprocessor/CoprocessorHost.java | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0f5932b0/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
index 91b9057..f2b201b 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
@@ -580,8 +580,15 @@ public abstract class CoprocessorHost {
   // server is configured to abort.
   abortServer(env, e);
 } else {
-  LOG.error("Removing coprocessor '" + env.toString() + "' from " +
-  "environment because it threw:  " + e,e);
+  // If available, pull a table name out of the environment
+  if(env instanceof RegionCoprocessorEnvironment) {
+String tableName = 
((RegionCoprocessorEnvironment)env).getRegionInfo().getTable().getNameAsString();
+LOG.error("Removing coprocessor '" + env.toString() + "' from table 
'"+ tableName + "'", e);
+  } else {
+LOG.error("Removing coprocessor '" + env.toString() + "' from " +
+"environment",e);
+  }
+
   coprocessors.remove(env);
   try {
 shutdown(env);



[2/4] hbase git commit: HBASE-17920 TestFSHDFSUtils always fails against hadoop 3.0.0-alpha2

2017-04-28 Thread busbey
HBASE-17920 TestFSHDFSUtils always fails against hadoop 3.0.0-alpha2

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/43f3fccb
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/43f3fccb
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/43f3fccb

Branch: refs/heads/master
Commit: 43f3fccb7b24d1433434d983e8e60914d8905f8d
Parents: 635c9db
Author: Jonathan M Hsieh 
Authored: Fri Apr 14 10:49:45 2017 -0700
Committer: Sean Busbey 
Committed: Fri Apr 28 11:25:14 2017 -0500

--
 .../hadoop/hbase/util/TestFSHDFSUtils.java  | 27 +++-
 1 file changed, 20 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/43f3fccb/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java
index ea19ea7..5899971 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java
@@ -100,8 +100,7 @@ public class TestFSHDFSUtils {
 Mockito.verify(dfs, Mockito.times(1)).isFileClosed(FILE);
   }
 
-  @Test
-  public void testIsSameHdfs() throws IOException {
+  void testIsSameHdfs(int nnport) throws IOException {
 try {
   Class dfsUtilClazz = Class.forName("org.apache.hadoop.hdfs.DFSUtil");
   dfsUtilClazz.getMethod("getNNServiceRpcAddresses", Configuration.class);
@@ -111,7 +110,7 @@ public class TestFSHDFSUtils {
 }
 
 Configuration conf = HBaseConfiguration.create();
-Path srcPath = new Path("hdfs://localhost:8020/");
+Path srcPath = new Path("hdfs://localhost:" + nnport + "/");
 Path desPath = new Path("hdfs://127.0.0.1/");
 FileSystem srcFs = srcPath.getFileSystem(conf);
 FileSystem desFs = desPath.getFileSystem(conf);
@@ -122,7 +121,7 @@ public class TestFSHDFSUtils {
 desFs = desPath.getFileSystem(conf);
 assertTrue(!FSHDFSUtils.isSameHdfs(conf, srcFs, desFs));
 
-desPath = new Path("hdfs://127.0.1.1:8020/");
+desPath = new Path("hdfs://127.0.1.1:" + nnport + "/");
 desFs = desPath.getFileSystem(conf);
 assertTrue(!FSHDFSUtils.isSameHdfs(conf, srcFs, desFs));
 
@@ -130,21 +129,35 @@ public class TestFSHDFSUtils {
 conf.set("dfs.nameservices", "haosong-hadoop");
 conf.set("dfs.ha.namenodes.haosong-hadoop", "nn1,nn2");
 conf.set("dfs.client.failover.proxy.provider.haosong-hadoop",
-
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
+
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
 
-conf.set("dfs.namenode.rpc-address.haosong-hadoop.nn1", "127.0.0.1:8020");
+conf.set("dfs.namenode.rpc-address.haosong-hadoop.nn1", "127.0.0.1:"+ 
nnport);
 conf.set("dfs.namenode.rpc-address.haosong-hadoop.nn2", "127.10.2.1:8000");
 desPath = new Path("/");
 desFs = desPath.getFileSystem(conf);
 assertTrue(FSHDFSUtils.isSameHdfs(conf, srcFs, desFs));
 
-conf.set("dfs.namenode.rpc-address.haosong-hadoop.nn1", "127.10.2.1:8020");
+conf.set("dfs.namenode.rpc-address.haosong-hadoop.nn1", 
"127.10.2.1:"+nnport);
 conf.set("dfs.namenode.rpc-address.haosong-hadoop.nn2", "127.0.0.1:8000");
 desPath = new Path("/");
 desFs = desPath.getFileSystem(conf);
 assertTrue(!FSHDFSUtils.isSameHdfs(conf, srcFs, desFs));
   }
 
+  @Test
+  public void testIsSameHdfs() throws IOException {
+    String hadoopVersion = org.apache.hadoop.util.VersionInfo.getVersion();
+    LOG.info("hadoop version is: "  + hadoopVersion);
+    boolean isHadoop3 = hadoopVersion.startsWith("3.");
+    if (isHadoop3) {
+      // Hadoop 3.0.0 alpha1+ change default nn port to 9820. See HDFS-9427
+      testIsSameHdfs(9820);
+    } else {
+      // pre hadoop 3.0.0 defaults to port 8020
+      testIsSameHdfs(8020);
+    }
+  }
+
   /**
* Version of DFS that has HDFS-4525 in it.
*/
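
The version-gating idea in this patch is small enough to restate outside the test harness. The following standalone Java sketch is illustrative only (the class name and main method are invented, not part of the patch) and assumes just that hadoop-common is on the classpath; per HDFS-9427, Hadoop 3.0.0-alpha1 moved the default NameNode RPC port from 8020 to 9820, which is why the hard-coded 8020 failed here.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.VersionInfo;

public class DefaultNnPortSketch {
  public static void main(String[] args) {
    // Ask the Hadoop client libraries which version line they belong to.
    String hadoopVersion = VersionInfo.getVersion();
    // HDFS-9427: Hadoop 3.0.0-alpha1+ defaults the NameNode RPC port to 9820;
    // earlier lines default to 8020.
    int nnPort = hadoopVersion.startsWith("3.") ? 9820 : 8020;
    // Build a path the way the test does, with the version-appropriate port.
    Path srcPath = new Path("hdfs://localhost:" + nnPort + "/");
    System.out.println(hadoopVersion + " -> " + srcPath);
  }
}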



[3/4] hbase git commit: HBASE-17835 Spelling mistakes in the Java source

2017-04-28 Thread busbey
HBASE-17835 Spelling mistakes in the Java source

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/03e8f6b5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/03e8f6b5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/03e8f6b5

Branch: refs/heads/master
Commit: 03e8f6b56ec184d0bbcff765b1b8f353b853fb0f
Parents: 43f3fcc
Author: QilinCao 
Authored: Thu Apr 27 09:42:12 2017 +0800
Committer: Sean Busbey 
Committed: Fri Apr 28 11:25:57 2017 -0500

--
 .../src/main/java/org/apache/hadoop/hbase/client/Admin.java  | 2 +-
 .../src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java | 2 +-
 .../org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java | 2 +-
 .../org/apache/hadoop/hbase/master/TestClockSkewDetection.java   | 4 ++--
 .../org/apache/hadoop/hbase/wal/WALPerformanceEvaluation.java| 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/03e8f6b5/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
index decf81f..8a9dc61 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
@@ -1426,7 +1426,7 @@ public interface Admin extends Abortable, Closeable {
* running - returns false finished - returns true
* finished with error - throws the exception that caused the snapshot to fail  The
* cluster only knows about the most recent snapshot. Therefore, if another snapshot has been
-   * run/started since the snapshot your are checking, you will recieve an {@link
+   * run/started since the snapshot you are checking, you will receive an {@link
* org.apache.hadoop.hbase.snapshot.UnknownSnapshotException}.
*
* @param snapshot description of the snapshot to check

http://git-wip-us.apache.org/repos/asf/hbase/blob/03e8f6b5/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
index 0e1054d..0782f5a 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
@@ -627,7 +627,7 @@ public interface AsyncAdmin {
* finished with error - throws the exception that caused the snapshot to fail
* 
* The cluster only knows about the most recent snapshot. Therefore, if another snapshot has been
-   * run/started since the snapshot your are checking, you will recieve an
+   * run/started since the snapshot you are checking, you will receive an
* {@link org.apache.hadoop.hbase.snapshot.UnknownSnapshotException}.
* @param snapshot description of the snapshot to check
* @return true if the snapshot is completed, false if the snapshot is still

http://git-wip-us.apache.org/repos/asf/hbase/blob/03e8f6b5/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
index 2c2b4af..8c56255 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
@@ -125,7 +125,7 @@ public class ZKProcedureMemberRpcs implements ProcedureMemberRpcs {
* @param path full znode path that cause the notification
*/
   private void receivedReachedGlobalBarrier(String path) {
-LOG.debug("Recieved reached global barrier:" + path);
+LOG.debug("Received reached global barrier:" + path);
 String procName = ZKUtil.getNodeName(path);
 this.member.receivedReachedGlobalBarrier(procName);
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/03e8f6b5/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
index 

[1/4] hbase git commit: HBASE-17817 add table name to output (if available) when removing coprocessors

2017-04-28 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/master 5411d3ecb -> fc68f23a4


HBASE-17817 add table name to output (if available) when removing coprocessors

Amending-Author: Sean Busbey 
Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/635c9db8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/635c9db8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/635c9db8

Branch: refs/heads/master
Commit: 635c9db81556f7b3a89c76d3be40d9989496d151
Parents: 5411d3e
Author: Steen Manniche 
Authored: Tue Apr 11 17:48:50 2017 +0200
Committer: Sean Busbey 
Committed: Fri Apr 28 11:24:00 2017 -0500

--
 .../apache/hadoop/hbase/coprocessor/CoprocessorHost.java | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/635c9db8/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
index bdface1..ae0c4b1 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
@@ -601,8 +601,15 @@ public abstract class CoprocessorHost<E extends CoprocessorEnvironment> {
   // server is configured to abort.
   abortServer(env, e);
 } else {
-  LOG.error("Removing coprocessor '" + env.toString() + "' from " +
-  "environment because it threw:  " + e,e);
+  // If available, pull a table name out of the environment
+  if(env instanceof RegionCoprocessorEnvironment) {
+String tableName = ((RegionCoprocessorEnvironment)env).getRegionInfo().getTable().getNameAsString();
+LOG.error("Removing coprocessor '" + env.toString() + "' from table '"+ tableName + "'", e);
+  } else {
+LOG.error("Removing coprocessor '" + env.toString() + "' from " +
+"environment",e);
+  }
+
   coprocessors.remove(env);
   try {
 shutdown(env);
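
The shape of this fix generalizes to any code holding a CoprocessorEnvironment: downcast only when the environment is region-scoped, and fall back to a generic message otherwise. A hedged, free-standing sketch of that pattern (the helper class and method names are invented for illustration; only the two calls on env come from the patch):

import org.apache.commons.logging.Log;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

final class CoprocessorRemovalLog {
  private CoprocessorRemovalLog() {}

  static void logRemoval(Log log, CoprocessorEnvironment env, Throwable cause) {
    if (env instanceof RegionCoprocessorEnvironment) {
      // A region-scoped environment knows its region, and the region its table.
      String tableName = ((RegionCoprocessorEnvironment) env)
          .getRegionInfo().getTable().getNameAsString();
      log.error("Removing coprocessor '" + env + "' from table '" + tableName + "'", cause);
    } else {
      // Master- and RegionServer-scoped environments carry no table name.
      log.error("Removing coprocessor '" + env + "' from environment", cause);
    }
  }
}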



[4/4] hbase git commit: HBASE-17962 Improve documentation on Rest interface

2017-04-28 Thread busbey
HBASE-17962 Improve documentation on Rest interface

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/fc68f23a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/fc68f23a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/fc68f23a

Branch: refs/heads/master
Commit: fc68f23a4889271888ca8ba3fde1f6f86d3f0fce
Parents: 03e8f6b
Author: Niels Basjes 
Authored: Wed Apr 26 11:21:39 2017 +0200
Committer: Sean Busbey 
Committed: Fri Apr 28 11:28:29 2017 -0500

--
 .../main/java/org/apache/hadoop/hbase/rest/RowResource.java | 8 +++-
 src/main/asciidoc/_chapters/external_apis.adoc  | 9 ++---
 2 files changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/fc68f23a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
index 7be4190..41b465f 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
@@ -139,7 +139,13 @@ public class RowResource extends ResourceBase {
 if (!rowspec.hasColumns() || rowspec.getColumns().length > 1) {
   servlet.getMetrics().incrementFailedGetRequests(1);
   return Response.status(Response.Status.BAD_REQUEST).type(MIMETYPE_TEXT)
-  .entity("Bad request: Either 0 or more than 1 columns specified." + 
CRLF).build();
+  .entity("Bad request: Default 'GET' method only works if there is 
exactly 1 column " +
+  "in the row. Using the 'Accept' header with one of these 
formats lets you " +
+  "retrieve the entire row if it has multiple columns: " +
+  // Same as the @Produces list for the get method.
+  MIMETYPE_XML + ", " + MIMETYPE_JSON + ", " +
+  MIMETYPE_PROTOBUF + ", " + MIMETYPE_PROTOBUF_IETF +
+  CRLF).build();
 }
 MultivaluedMap<String, String> params = uriInfo.getQueryParameters();
 try {

http://git-wip-us.apache.org/repos/asf/hbase/blob/fc68f23a/src/main/asciidoc/_chapters/external_apis.adoc
--
diff --git a/src/main/asciidoc/_chapters/external_apis.adoc b/src/main/asciidoc/_chapters/external_apis.adoc
index 556c4e0..2f85461 100644
--- a/src/main/asciidoc/_chapters/external_apis.adoc
+++ b/src/main/asciidoc/_chapters/external_apis.adoc
@@ -225,14 +225,17 @@ creation or mutation, and `DELETE` for deletion.
 |Description
 |Example
 
-|/_table_/_row_/_column:qualifier_/_timestamp_
+|/_table_/_row_
 |GET
-|Get the value of a single row. Values are Base-64 encoded.
+|Get all columns of a single row. Values are Base-64 encoded. This requires the "Accept" request header with a type that can hold multiple columns (like xml, json or protobuf).
 |curl -vi -X GET \
   -H "Accept: text/xml" \
   "http://example.com:8000/users/row1"
 
-curl -vi -X GET \
+|/_table_/_row_/_column:qualifier_/_timestamp_
+|GET
+|Get the value of a single column. Values are Base-64 encoded.
+|curl -vi -X GET \
   -H "Accept: text/xml" \
   "http://example.com:8000/users/row1/cf:a/1458586888395;
 



[08/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockWritable.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockWritable.html b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockWritable.html
index a665139..3fedd0b 100644
[Mangled HTML diff of the regenerated HFileBlock source page omitted.]

[37/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/client/class-use/SnapshotDescription.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/SnapshotDescription.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/SnapshotDescription.html
index 0720431..044ed72 100644
[Mangled HTML diff omitted. Recoverable gist: the class-use page picks up the new AsyncAdmin and AsyncHBaseAdmin listSnapshots() overloads (no-arg, Pattern and String variants), each returning CompletableFuture<List<SnapshotDescription>>.]
[22/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DisableTableProcedureBiConsumer.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DisableTableProcedureBiConsumer.html b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DisableTableProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
[Mangled HTML diff of the regenerated AsyncHBaseAdmin source page omitted.]

[33/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplitThread.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplitThread.html b/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplitThread.html
index de489a6..c334291 100644
[Mangled HTML diff omitted. Recoverable gist: the CompactSplitThread page drops the merge-pool members (MERGE_THREADS, MERGE_THREADS_DEFAULT, mergePool, getCompletedMergeTaskCount, getMergeThreadNum) and renumbers the remaining field and method tables.]

[43/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/index-all.html
--
diff --git a/devapidocs/index-all.html b/devapidocs/index-all.html
index eb5909f..3f15073 100644
[Mangled HTML diff omitted. Recoverable gist: the master index gains entries for the new AsyncAdmin/AsyncHBaseAdmin deleteSnapshot, deleteSnapshots and deleteTableSnapshots methods, HConstants.DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME, and the HFile writer's encodedBlockSizeLimit, encodedBlockSizeWritten and encodedDataSizeWritten members, while losing the removed CompactSplitThread merge-thread accessors.]

[29/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.AdminRpcCall.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.AdminRpcCall.html b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.AdminRpcCall.html
index 6c52543..f3f7a46 100644
[Mangled HTML diff of the regenerated AsyncHBaseAdmin source page omitted.]

[32/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/thrift/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/thrift/package-tree.html b/devapidocs/org/apache/hadoop/hbase/thrift/package-tree.html
index 051b200..8068c8f 100644
[Mangled HTML diff omitted: only a reordering of the generated enum list.]

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.ImplData.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.ImplData.html b/devapidocs/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.ImplData.html
index 97fb5c6..095f8c8 100644
[Mangled HTML diff omitted: only a reordering of the generated ImplData fields.]

[48/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/apidocs/org/apache/hadoop/hbase/HConstants.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/HConstants.html b/apidocs/org/apache/hadoop/hbase/HConstants.html
index d5fcd98..30ba255 100644
[Mangled HTML diff omitted. Recoverable gist: the HConstants page adds a row for the new DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME constant and renumbers the surrounding field table.]

hbase-site git commit: INFRA-10751 Empty commit

2017-04-28 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site 6f2e75f27 -> 40526c106


INFRA-10751 Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/40526c10
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/40526c10
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/40526c10

Branch: refs/heads/asf-site
Commit: 40526c106e8f9622462c53603ba777eae1457f72
Parents: 6f2e75f
Author: jenkins 
Authored: Fri Apr 28 14:59:19 2017 +
Committer: jenkins 
Committed: Fri Apr 28 14:59:19 2017 +

--

--




[45/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/checkstyle.rss
--
diff --git a/checkstyle.rss b/checkstyle.rss
index 1c0e514..88690ba 100644
[Mangled HTML/RSS diff omitted. Recoverable gist: checkstyle.rss moves from 14361 to 14364 total errors across 2155 files (three files gain one error each: 31 -> 32, 46 -> 47, 22 -> 23), and coc.html, cygwin.html, dependencies.html, dependency-convergence.html, dependency-info.html and dependency-management.html only bump "Last Published" from 2017-04-27 to 2017-04-28.]

[49/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/apidocs/index-all.html
--
diff --git a/apidocs/index-all.html b/apidocs/index-all.html
index e767c1c..1047ab6 100644
[Mangled HTML diff omitted. Recoverable gist: the public-API index gains entries for HConstants.SNAPSHOT_RESTORE_FAILSAFE_NAME and HConstants.DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME.]



[35/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.html
index daa9b81..34dc232 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.html
@@ -177,133 +177,145 @@ implements dataBlockIndexWriter
 
 
+private int
+encodedBlockSizeLimit
+Block size limit after encoding, used to unify encoded 
block Cache entry size
+
+
+
 protected long
 entryCount
 Total # of key/value entries, i.e.
 
 
-
+
 protected HFile.FileInfo
 fileInfo
 A "file info" block: a key-value map of file-wide 
metadata.
 
 
-
+
 protected Cell
 firstCellInBlock
 First cell in a block.
 
 
-
+
 private long
 firstDataBlockOffset
 The offset of the first data block or -1 if the file is 
empty.
 
 
-
+
 protected HFileContext
 hFileContext
 
-
+
 private http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListInlineBlockWriter
 inlineBlockWriters
 Inline block writers for multi-level block index and 
compound Blooms.
 
 
-
+
 static int
 KEY_VALUE_VER_WITH_MEMSTORE
 Version for KeyValue which includes memstore timestamp
 
 
-
+
 static byte[]
 KEY_VALUE_VERSION
 KeyValue version in FileInfo
 
 
-
+
 protected Cell
 lastCell
 The Cell previously appended.
 
 
-
+
 private Cell
 lastCellOfPreviousBlock
 The last(stop) Cell of the previous data block.
 
 
-
+
 protected long
 lastDataBlockOffset
 The offset of the last data block or 0 if the file is 
empty.
 
 
-
+
 private static 
org.apache.commons.logging.Log
 LOG
 
-
+
 protected long
 maxMemstoreTS
 
-
+
 private int
 maxTagsLength
 
-
+
 private HFileBlockIndex.BlockIndexWriter
 metaBlockIndexWriter
 
-
+
 protected http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in 
java.util">Listorg.apache.hadoop.io.Writable
 metaData
 Writables representing meta block data.
 
 
-
+
 protected http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">Listbyte[]
 metaNames
 Meta block names.
 
 
-
+
 protected http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 name
 Name for this object used when logging or in toString.
 
 
-
+
 protected 
org.apache.hadoop.fs.FSDataOutputStream
 outputStream
 FileSystem stream to write into.
 
 
-
+
 protected org.apache.hadoop.fs.Path
 path
 May be null if we were passed a stream.
 
 
-
+
 protected long
 totalKeyLength
 Used for calculating the average key length.
 
 
-
+
 protected long
 totalUncompressedBytes
 Total uncompressed bytes, maybe calculate a compression 
ratio later.
 
 
-
+
 protected long
 totalValueLength
 Used for calculating the average value length.
 
 
+
+static String
+UNIFIED_ENCODED_BLOCKSIZE_RATIO
+If this feature is enabled, pre-calculate the encoded data size before the real encoding happens.
+
+
 
 private static long
 UNSET
@@ -579,13 +591,37 @@ implements 
+
+
+
+
+UNIFIED_ENCODED_BLOCKSIZE_RATIO
+public static final String UNIFIED_ENCODED_BLOCKSIZE_RATIO
+If this feature is enabled, pre-calculate the encoded data size before the real encoding happens.
+
+See Also:
+Constant
 Field Values
+
+
+
+
+
+
+
+
+encodedBlockSizeLimit
+private final int encodedBlockSizeLimit
+Block size limit after encoding, used to unify encoded block cache entry sizes.
+
+
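The two members documented above describe the mechanism this change adds: cap a block by its encoded size, so that encoded blocks later cached in the block cache all land near one uniform entry size. A minimal sketch of that idea follows; it is not the actual HFileWriterImpl code, and the class name, constructor arguments, and the ratio-greater-than-zero enable convention are assumptions made for illustration.

// Illustrative sketch only -- not HFileWriterImpl itself. Field names follow the
// javadoc above; everything else is assumed for the example.
public final class EncodedBlockSizeLimitSketch {
  private final int blockSize;              // configured on-disk block size, e.g. 64 KB
  private final int encodedBlockSizeLimit;  // pre-calculated limit on the encoded byte count

  public EncodedBlockSizeLimitSketch(int blockSize, float unifiedEncodedBlockSizeRatio) {
    this.blockSize = blockSize;
    // Pre-calculate the encoded limit once, instead of re-deriving it per block.
    this.encodedBlockSizeLimit = (int) (blockSize * unifiedEncodedBlockSizeRatio);
  }

  /** True when the current block should be finished and a new one started. */
  boolean shouldFinishBlock(int encodedBytesWritten, int rawBytesWritten) {
    // With the feature enabled (ratio > 0) the encoded size drives the decision,
    // which keeps encoded block-cache entries near a uniform size; otherwise the
    // raw, unencoded size is used as before.
    return encodedBlockSizeLimit > 0
        ? encodedBytesWritten >= encodedBlockSizeLimit
        : rawBytesWritten >= blockSize;
  }
}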
 
 
 
 
 
 lastCell
-protectedCell lastCell
+protectedCell lastCell
 The Cell previously appended. Becomes the last cell in the 
file.
 
 
@@ -595,7 +631,7 @@ implements 
 
 outputStream
-protectedorg.apache.hadoop.fs.FSDataOutputStream outputStream
+protectedorg.apache.hadoop.fs.FSDataOutputStream outputStream
 FileSystem stream to write into.
 
 
@@ -605,7 +641,7 @@ implements 
 
 closeOutputStream
-protected finalboolean closeOutputStream
+protected finalboolean closeOutputStream
 True if we opened the outputStream (and so 
will close it).
 
 
@@ -615,7 +651,7 @@ implements 
 
 fileInfo
-protectedHFile.FileInfo fileInfo
+protectedHFile.FileInfo fileInfo
 A "file info" block: a key-value map of file-wide 
metadata.
 
 
@@ -625,7 +661,7 @@ implements 
 
 entryCount
-protectedlong entryCount
+protectedlong entryCount
 Total # of key/value entries, i.e. how many times add() was 
called.
 
 
@@ -635,7 +671,7 @@ implements 
 
 totalKeyLength
-protectedlong totalKeyLength
+protectedlong totalKeyLength
 Used for calculating the average key 

[11/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
index 6c52543..f3f7a46 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceRequest;
-093import 

[50/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/apidocs/constant-values.html
--
diff --git a/apidocs/constant-values.html b/apidocs/constant-values.html
index 4779b26..f08373c 100644
--- a/apidocs/constant-values.html
+++ b/apidocs/constant-values.html
@@ -1025,1223 +1025,1237 @@
 16020
 
 
+
+
+public static final String
+DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME
+"hbase-failsafe-{snapshot.name}-{restore.timestamp}"
+
+
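For orientation, a small sketch of how a template like the constant value above could be expanded into a concrete failsafe snapshot name at restore time. The helper class and the timestamp format are assumptions for illustration, not HBase's implementation.

import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical expansion of the failsafe-name template shown above.
public final class FailsafeNameSketch {
  static String expand(String template, String snapshotName) {
    String timestamp = new SimpleDateFormat("yyyyMMddHHmmss").format(new Date());
    return template
        .replace("{snapshot.name}", snapshotName)
        .replace("{restore.timestamp}", timestamp);
  }

  public static void main(String[] args) {
    // Prints something like: hbase-failsafe-mySnap-20170428105146
    System.out.println(expand(
        "hbase-failsafe-{snapshot.name}-{restore.timestamp}", "mySnap"));
  }
}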
 
 
 publicstaticfinalboolean
 DEFAULT_SNAPSHOT_RESTORE_TAKE_FAILSAFE_SNAPSHOT
 false
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 DEFAULT_STATUS_MULTICAST_ADDRESS
 "226.1.1.3"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 DEFAULT_STATUS_MULTICAST_BIND_ADDRESS
 "0.0.0.0"
 
-
+
 
 
 publicstaticfinalint
 DEFAULT_STATUS_MULTICAST_PORT
 16100
 
-
+
 
 
 publicstaticfinalint
 DEFAULT_THREAD_WAKE_FREQUENCY
 1
 
-
+
 
 
 publicstaticfinalboolean
 DEFAULT_USE_META_REPLICAS
 false
 
-
+
 
 
 publicstaticfinalint
 DEFAULT_VERSION_FILE_WRITE_ATTEMPTS
 3
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 DEFAULT_WAL_STORAGE_POLICY
 "NONE"
 
-
+
 
 
 publicstaticfinalint
 DEFAULT_ZK_SESSION_TIMEOUT
 18
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 DEFAULT_ZOOKEEPER_ZNODE_PARENT
 "/hbase"
 
-
+
 
 
 publicstaticfinalint
 DEFAULT_ZOOKEPER_CLIENT_PORT
 2181
 
-
+
 
 
 publicstaticfinalint
 DEFAULT_ZOOKEPER_MAX_CLIENT_CNXNS
 300
 
-
+
 
 
 publicstaticfinallong
 DEFAULT_ZOOKEPER_RECOVERABLE_WAITIME
 1L
 
-
+
 
 
 publicstaticfinalint
 DELIMITER
 44
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 DISALLOW_WRITES_IN_RECOVERING
 "hbase.regionserver.disallow.writes.when.recovering"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 DISTRIBUTED_LOG_REPLAY_KEY
 "hbase.master.distributed.log.replay"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 ENABLE_CLIENT_BACKPRESSURE
 "hbase.client.backpressure.enabled"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 ENABLE_DATA_FILE_UMASK
 "hbase.data.umask.enable"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 ENABLE_WAL_COMPRESSION
 "hbase.regionserver.wal.enablecompression"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 ENABLE_WAL_ENCRYPTION
 "hbase.regionserver.wal.encryption"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 FILE_SYSTEM_VERSION
 "8"
 
-
+
 
 
 publicstaticfinalint
 FOREVER
 2147483647
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 HBASE_BALANCER_MAX_BALANCING
 "hbase.balancer.max.balancing"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 HBASE_BALANCER_PERIOD
 "hbase.balancer.period"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 HBASE_CANARY_READ_RAW_SCAN_KEY
 "hbase.canary.read.raw.enabled"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 HBASE_CANARY_WRITE_DATA_TTL_KEY
 "hbase.canary.write.data.ttl"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 HBASE_CANARY_WRITE_PERSERVER_REGIONS_LOWERLIMIT_KEY
 "hbase.canary.write.perserver.regions.lowerLimit"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 

[23/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteTableProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteTableProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteTableProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteTableProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteTableProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 

[16/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 

[18/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyColumnFamilyProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyColumnFamilyProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyColumnFamilyProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyColumnFamilyProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyColumnFamilyProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 

[21/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.EnableTableProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.EnableTableProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.EnableTableProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.EnableTableProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.EnableTableProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 

[40/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
index ba71fe3..0b2cbfb 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":6,"i1":6,"i2":6,"i3":6,"i4":6,"i5":6,"i6":6,"i7":6,"i8":6,"i9":6,"i10":6,"i11":6,"i12":6,"i13":6,"i14":6,"i15":6,"i16":6,"i17":6,"i18":6,"i19":6,"i20":6,"i21":6,"i22":6,"i23":6,"i24":6,"i25":6,"i26":6,"i27":6,"i28":6,"i29":6,"i30":6,"i31":6,"i32":6,"i33":6,"i34":6,"i35":6,"i36":6,"i37":6,"i38":6,"i39":6,"i40":6,"i41":6,"i42":6,"i43":6,"i44":6,"i45":6,"i46":6,"i47":6,"i48":6,"i49":6,"i50":6,"i51":6,"i52":6,"i53":6,"i54":6,"i55":6,"i56":6,"i57":6,"i58":6,"i59":6,"i60":6,"i61":6,"i62":6,"i63":6,"i64":6,"i65":6,"i66":6,"i67":6,"i68":6,"i69":6,"i70":6,"i71":6,"i72":6};
+var methods = 
{"i0":6,"i1":6,"i2":6,"i3":6,"i4":6,"i5":6,"i6":6,"i7":6,"i8":6,"i9":6,"i10":6,"i11":6,"i12":6,"i13":6,"i14":6,"i15":6,"i16":6,"i17":6,"i18":6,"i19":6,"i20":6,"i21":6,"i22":6,"i23":6,"i24":6,"i25":6,"i26":6,"i27":6,"i28":6,"i29":6,"i30":6,"i31":6,"i32":6,"i33":6,"i34":6,"i35":6,"i36":6,"i37":6,"i38":6,"i39":6,"i40":6,"i41":6,"i42":6,"i43":6,"i44":6,"i45":6,"i46":6,"i47":6,"i48":6,"i49":6,"i50":6,"i51":6,"i52":6,"i53":6,"i54":6,"i55":6,"i56":6,"i57":6,"i58":6,"i59":6,"i60":6,"i61":6,"i62":6,"i63":6,"i64":6,"i65":6,"i66":6,"i67":6,"i68":6,"i69":6,"i70":6,"i71":6,"i72":6,"i73":6,"i74":6,"i75":6,"i76":6,"i77":6,"i78":6,"i79":6,"i80":6,"i81":6,"i82":6};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],4:["t3","Abstract Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -244,122 +244,156 @@ public interface 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
+deleteSnapshot(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringsnapshotName)
+Delete an existing snapshot.
+
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
+deleteSnapshots(http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html?is-external=true;
 title="class or interface in java.util.regex">Patternpattern)
+Delete existing snapshots whose names match the pattern 
passed.
+
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
+deleteSnapshots(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringregex)
+Delete existing snapshots whose names match the pattern 
passed.
+
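The added rows above introduce asynchronous snapshot deletion on AsyncAdmin. A hedged usage sketch follows: only the method signatures are taken from this diff; obtaining the AsyncAdmin instance is elided, the snapshot names are invented, and the join() calls exist only to keep the example sequential.

import java.util.regex.Pattern;
import org.apache.hadoop.hbase.client.AsyncAdmin;

public final class AsyncSnapshotCleanupSketch {
  static void cleanup(AsyncAdmin admin) {
    // Delete one snapshot by name; the returned CompletableFuture<Void>
    // completes once the master has removed it.
    admin.deleteSnapshot("backup-20170428").join();
    // Delete every snapshot whose name matches the pattern.
    admin.deleteSnapshots(Pattern.compile("backup-2016.*")).join();
  }
}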
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
 deleteTable(TableNametableName)
 Deletes a table.
 
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureTableDescriptor[]
 deleteTables(http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html?is-external=true;
 title="class or interface in java.util.regex">Patternpattern)
 Delete tables matching the passed-in pattern and wait on completion.
 
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureTableDescriptor[]
 deleteTables(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringregex)
 Deletes tables matching the passed-in pattern and waits on completion.
 
 
-
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in 

[26/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 

[14/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableOperator.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableOperator.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableOperator.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableOperator.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableOperator.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceRequest;

[28/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.Converter.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.Converter.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.Converter.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.Converter.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.Converter.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceRequest;
-093import 

[10/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AbortProcedureFuture.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AbortProcedureFuture.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AbortProcedureFuture.html
index cf37188..0610ad0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AbortProcedureFuture.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AbortProcedureFuture.html
@@ -2584,7 +2584,7 @@
 2576syncWaitTimeout,
 2577TimeUnit.MILLISECONDS);
 2578} catch (IOException e) {
-2579  // Somthing went wrong during the 
restore...
+2579  // Something went wrong during the 
restore...
 2580  // if the pre-restore snapshot is 
available try to rollback
 2581  if (takeFailSafeSnapshot) {
 2582try {
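The hunk above corrects a comment typo inside HBaseAdmin's snapshot-restore path. For readers skimming the diff, here is a minimal sketch of the failsafe pattern that comment describes; every method name besides the pattern itself is illustrative, not HBaseAdmin's internals.

import java.io.IOException;

abstract class FailsafeRestoreSketch {
  abstract void takeSnapshot(String name) throws IOException;
  abstract void internalRestore(String name) throws IOException;
  abstract void deleteSnapshot(String name) throws IOException;

  void restoreWithFailsafe(String snapshot, boolean takeFailSafeSnapshot) throws IOException {
    String failsafe = null;
    if (takeFailSafeSnapshot) {
      // Preserve the current table state before touching anything.
      failsafe = "hbase-failsafe-" + snapshot + "-" + System.currentTimeMillis();
      takeSnapshot(failsafe);
    }
    try {
      internalRestore(snapshot);   // the operation that may fail
    } catch (IOException e) {
      // Something went wrong during the restore...
      // if the pre-restore snapshot is available, try to roll back.
      if (failsafe != null) {
        internalRestore(failsafe);
        deleteSnapshot(failsafe);
      }
      throw e;                     // surface the original failure either way
    }
  }
}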

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AddColumnFamilyFuture.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AddColumnFamilyFuture.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AddColumnFamilyFuture.html
index cf37188..0610ad0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AddColumnFamilyFuture.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.AddColumnFamilyFuture.html
@@ -2584,7 +2584,7 @@
 2576syncWaitTimeout,
 2577TimeUnit.MILLISECONDS);
 2578} catch (IOException e) {
-2579  // Somthing went wrong during the 
restore...
+2579  // Something went wrong during the 
restore...
 2580  // if the pre-restore snapshot is 
available try to rollback
 2581  if (takeFailSafeSnapshot) {
 2582try {

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.CreateTableFuture.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.CreateTableFuture.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.CreateTableFuture.html
index cf37188..0610ad0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.CreateTableFuture.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.CreateTableFuture.html
@@ -2584,7 +2584,7 @@
 2576syncWaitTimeout,
 2577TimeUnit.MILLISECONDS);
 2578} catch (IOException e) {
-2579  // Somthing went wrong during the 
restore...
+2579  // Something went wrong during the 
restore...
 2580  // if the pre-restore snapshot is 
available try to rollback
 2581  if (takeFailSafeSnapshot) {
 2582try {

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteColumnFamilyFuture.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteColumnFamilyFuture.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteColumnFamilyFuture.html
index cf37188..0610ad0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteColumnFamilyFuture.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteColumnFamilyFuture.html
@@ -2584,7 +2584,7 @@
 2576syncWaitTimeout,
 2577TimeUnit.MILLISECONDS);
 2578} catch (IOException e) {
-2579  // Somthing went wrong during the 
restore...
+2579  // Something went wrong during the 
restore...
 2580  // if the pre-restore snapshot is 
available try to rollback
 2581  if (takeFailSafeSnapshot) {
 2582try {

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteTableFuture.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteTableFuture.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteTableFuture.html
index cf37188..0610ad0 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteTableFuture.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/HBaseAdmin.DeleteTableFuture.html
@@ -2584,7 +2584,7 @@
 2576syncWaitTimeout,
 2577TimeUnit.MILLISECONDS);
 2578} catch (IOException e) {
-2579  // Somthing went wrong during the 
restore...
+2579  // Something went wrong during the 
restore...
 2580  // if the pre-restore snapshot is 
available try to rollback
 2581  if 
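
The hunk repeated above only corrects a comment, but the comment marks HBaseAdmin's failsafe-restore path: optionally take a safety snapshot before a restore, and fall back to it if the restore throws. A minimal client-side sketch of driving that behavior (the table and snapshot names below are invented for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class RestoreWithFailsafe {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("demo_table"); // hypothetical table
      // A snapshot can only be restored onto a disabled table.
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);
      }
      // The boolean asks HBase to take a failsafe snapshot first, so the
      // pre-restore state is recoverable if the restore itself fails.
      admin.restoreSnapshot("demo_snapshot", true);
      admin.enableTable(table);
    }
  }
}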

[12/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TruncateTableProcedureBiConsumer.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TruncateTableProcedureBiConsumer.html b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TruncateTableProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TruncateTableProcedureBiConsumer.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TruncateTableProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import org.apache.commons.logging.LogFactory;
-044import org.apache.hadoop.hbase.HColumnDescriptor;
-045import org.apache.hadoop.hbase.HRegionInfo;
-046import org.apache.hadoop.hbase.HRegionLocation;
-047import org.apache.hadoop.hbase.MetaTableAccessor;
-048import org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import org.apache.hadoop.hbase.NotServingRegionException;
-050import org.apache.hadoop.hbase.RegionLocations;
-051import org.apache.hadoop.hbase.ServerName;
-052import org.apache.hadoop.hbase.NamespaceDescriptor;
-053import org.apache.hadoop.hbase.HConstants;
-054import org.apache.hadoop.hbase.TableExistsException;
-055import org.apache.hadoop.hbase.TableName;
-056import org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import org.apache.hadoop.hbase.TableNotFoundException;
-058import org.apache.hadoop.hbase.UnknownRegionException;
-059import org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import org.apache.hadoop.hbase.classification.InterfaceStability;
-061import org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import org.apache.hadoop.hbase.client.Scan.ReadType;
-064import org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import org.apache.hadoop.hbase.client.replication.TableCFs;
-066import org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import org.apache.hadoop.hbase.replication.ReplicationException;
-072import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;

[38/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
index 8216665..4dad00d 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":10,"i61":10,"i62":10,"i63":10,"i64":10,"i65":10,"i66":10,"i67":10,"i68":10,"i69":10,"i70":10,"i71":10,"i72":10,"i73":10,"i74":10,"i75":10,"i76":10,"i77":10,"i78":10,"i79":10,"i80":10,"i81":10,"i82":10,"i83":10,"i84":10,"i85":10,"i86":10,"i87":10,"i88":10,"i89":10,"i90":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":10,"i61":10,"i62":10,"i63":10,"i64":10,"i65":10,"i66":10,"i67":10,"i68":10,"i69":10,"i70":10,"i71":10,"i72":10,"i73":10,"i74":10,"i75":10,"i76":10,"i77":10,"i78":10,"i79":10,"i80":10,"i81":10,"i82":10,"i83":10,"i84":10,"i85":10,"i86":10,"i87":10,"i88":10,"i89":10,"i90":10,"i91":10,"i92":10,"i93":10,"i94":10,"i95":10,"i96":10,"i97":10,"i98":10,"i99":10,"i100":10,"i101":10,"i102":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -115,7 +115,7 @@ var activeTableTab = "activeTableTab";

 @InterfaceAudience.Private
 @InterfaceStability.Evolving
-public class AsyncHBaseAdmin
+public class AsyncHBaseAdmin
 extends Object
 implements AsyncAdmin
 The implementation of AsyncAdmin.
@@ -449,153 +449,191 @@ implements AsyncAdmin

 CompletableFuture<Void>
+deleteSnapshot(String snapshotName)
+Delete an existing snapshot.
+
+CompletableFuture<Void>
+deleteSnapshots(Pattern snapshotNamePattern)
+Delete existing snapshots whose names match the pattern passed.
+
+CompletableFuture<Void>
+deleteSnapshots(String regex)
+Delete existing snapshots whose names match the pattern passed.
+
+CompletableFuture<Void>
 deleteTable(TableName tableName)
 Deletes a table.

-
+
 CompletableFuture
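
The entries added above expose snapshot deletion on the asynchronous admin. A hedged sketch of how the Pattern overload composes with CompletableFuture (the connection bootstrap assumes the HBase 2.0 async client API; the snapshot naming scheme is invented):

import java.util.concurrent.CompletableFuture;
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class AsyncSnapshotCleanup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (AsyncConnection conn =
        ConnectionFactory.createAsyncConnection(conf).get()) {
      // Delete every completed snapshot whose name starts with "nightly-".
      CompletableFuture<Void> done =
          conn.getAdmin().deleteSnapshots(Pattern.compile("nightly-.*"));
      done.join(); // block here only for the sake of the example
    }
  }
}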

[25/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html

[17/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyNamespaceProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyNamespaceProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyNamespaceProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyNamespaceProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ModifyNamespaceProcedureBiConsumer.html

[51/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/6f2e75f2
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/6f2e75f2
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/6f2e75f2

Branch: refs/heads/asf-site
Commit: 6f2e75f27944abae7736246a5878b7608094ccd4
Parents: 4f60e1a
Author: jenkins 
Authored: Fri Apr 28 14:58:44 2017 +
Committer: jenkins 
Committed: Fri Apr 28 14:58:44 2017 +

--
 acid-semantics.html | 4 +-
 apache_hbase_reference_guide.pdf| 4 +-
 apache_hbase_reference_guide.pdfmarks   | 4 +-
 apidocs/constant-values.html|   360 +-
 apidocs/index-all.html  | 4 +
 apidocs/org/apache/hadoop/hbase/HConstants.html |   430 +-
 .../org/apache/hadoop/hbase/HConstants.html |13 +-
 bulk-loads.html | 4 +-
 checkstyle-aggregate.html   | 29134 +
 checkstyle.rss  | 8 +-
 coc.html| 4 +-
 cygwin.html | 4 +-
 dependencies.html   | 4 +-
 dependency-convergence.html | 4 +-
 dependency-info.html| 4 +-
 dependency-management.html  | 4 +-
 devapidocs/constant-values.html |   387 +-
 devapidocs/index-all.html   |97 +-
 .../org/apache/hadoop/hbase/HConstants.html |   432 +-
 .../hadoop/hbase/backup/package-tree.html   | 4 +-
 .../hadoop/hbase/class-use/TableName.html   |80 +-
 .../hbase/classification/package-tree.html  | 4 +-
 .../apache/hadoop/hbase/client/AsyncAdmin.html  |   345 +-
 ...dmin.AddColumnFamilyProcedureBiConsumer.html | 6 +-
 .../client/AsyncHBaseAdmin.AdminRpcCall.html| 4 +-
 .../hbase/client/AsyncHBaseAdmin.Converter.html | 4 +-
 ...dmin.CreateNamespaceProcedureBiConsumer.html | 6 +-
 ...aseAdmin.CreateTableProcedureBiConsumer.html | 6 +-
 ...n.DeleteColumnFamilyProcedureBiConsumer.html | 6 +-
 ...dmin.DeleteNamespaceProcedureBiConsumer.html | 6 +-
 ...aseAdmin.DeleteTableProcedureBiConsumer.html | 8 +-
 ...seAdmin.DisableTableProcedureBiConsumer.html | 6 +-
 ...aseAdmin.EnableTableProcedureBiConsumer.html | 6 +-
 .../client/AsyncHBaseAdmin.MasterRpcCall.html   | 4 +-
 ...min.MergeTableRegionProcedureBiConsumer.html | 6 +-
 ...n.ModifyColumnFamilyProcedureBiConsumer.html | 6 +-
 ...dmin.ModifyNamespaceProcedureBiConsumer.html | 6 +-
 ...HBaseAdmin.NamespaceProcedureBiConsumer.html |14 +-
 .../AsyncHBaseAdmin.ProcedureBiConsumer.html|12 +-
 .../client/AsyncHBaseAdmin.TableOperator.html   | 4 +-
 ...syncHBaseAdmin.TableProcedureBiConsumer.html |14 +-
 ...eAdmin.TruncateTableProcedureBiConsumer.html | 6 +-
 .../hadoop/hbase/client/AsyncHBaseAdmin.html|   631 +-
 .../client/class-use/SnapshotDescription.html   |90 +-
 .../hadoop/hbase/client/package-tree.html   |26 +-
 .../hadoop/hbase/executor/package-tree.html | 2 +-
 .../hadoop/hbase/filter/package-tree.html   |12 +-
 .../io/hfile/HFileBlock.BlockIterator.html  | 6 +-
 .../io/hfile/HFileBlock.BlockWritable.html  | 6 +-
 .../hbase/io/hfile/HFileBlock.FSReader.html |16 +-
 .../hbase/io/hfile/HFileBlock.FSReaderImpl.html |58 +-
 .../io/hfile/HFileBlock.PrefetchedHeader.html   |12 +-
 .../hbase/io/hfile/HFileBlock.Writer.html   |   148 +-
 .../hadoop/hbase/io/hfile/HFileBlock.html   |44 +-
 .../hadoop/hbase/io/hfile/HFileWriterImpl.html  |   200 +-
 .../hadoop/hbase/io/hfile/package-tree.html | 8 +-
 .../apache/hadoop/hbase/ipc/package-tree.html   | 2 +-
 .../hadoop/hbase/mapreduce/package-tree.html| 4 +-
 .../hadoop/hbase/master/package-tree.html   | 6 +-
 .../hbase/master/procedure/package-tree.html| 2 +-
 .../org/apache/hadoop/hbase/package-tree.html   |10 +-
 .../hadoop/hbase/procedure2/package-tree.html   | 4 +-
 .../hadoop/hbase/quotas/package-tree.html   | 6 +-
 .../CompactSplitThread.CompactionRunner.html|26 +-
 .../CompactSplitThread.Rejection.html   | 6 +-
 .../hbase/regionserver/CompactSplitThread.html  |   229 +-
 .../hadoop/hbase/regionserver/package-tree.html |20 +-
 .../regionserver/querymatcher/package-tree.html | 4 +-
 .../hbase/security/access/package-tree.html | 2 +-
 .../hadoop/hbase/security/package-tree.html | 2 +-
 .../hadoop/hbase/thrift/package-tree.html   | 4 +-
 .../tmpl/master/MasterStatusTmpl.ImplData.html  |   240 +-

[05/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html
index a665139..3fedd0b 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html
@@ -879,1201 +879,1221 @@
 871    // includes the header size also.
 872    private int unencodedDataSizeWritten;
 873
-874    /**
-875     * Bytes to be written to the file system, including the header. Compressed
-876     * if compression is turned on. It also includes the checksum data that
-877     * immediately follows the block data. (header + data + checksums)
-878     */
-879    private ByteArrayOutputStream onDiskBlockBytesWithHeader;
-880
-881    /**
-882     * The size of the checksum data on disk. It is used only if data is
-883     * not compressed. If data is compressed, then the checksums are already
-884     * part of onDiskBytesWithHeader. If data is uncompressed, then this
-885     * variable stores the checksum data for this block.
-886     */
-887    private byte[] onDiskChecksum = HConstants.EMPTY_BYTE_ARRAY;
-888
-889    /**
-890     * Current block's start offset in the {@link HFile}. Set in
-891     * {@link #writeHeaderAndData(FSDataOutputStream)}.
-892     */
-893    private long startOffset;
-894
-895    /**
-896     * Offset of previous block by block type. Updated when the next block is
-897     * started.
-898     */
-899    private long[] prevOffsetByType;
-900
-901    /** The offset of the previous block of the same type */
-902    private long prevOffset;
-903    /** Meta data that holds information about the hfileblock**/
-904    private HFileContext fileContext;
-905
-906    /**
-907     * @param dataBlockEncoder data block encoding algorithm to use
-908     */
-909    public Writer(HFileDataBlockEncoder dataBlockEncoder, HFileContext fileContext) {
-910      if (fileContext.getBytesPerChecksum() < HConstants.HFILEBLOCK_HEADER_SIZE) {
-911        throw new RuntimeException("Unsupported value of bytesPerChecksum. " +
-912            " Minimum is " + HConstants.HFILEBLOCK_HEADER_SIZE + " but the configured value is " +
-913            fileContext.getBytesPerChecksum());
-914      }
-915      this.dataBlockEncoder = dataBlockEncoder != null?
-916          dataBlockEncoder: NoOpDataBlockEncoder.INSTANCE;
-917      this.dataBlockEncodingCtx = this.dataBlockEncoder.
-918          newDataBlockEncodingContext(HConstants.HFILEBLOCK_DUMMY_HEADER, fileContext);
-919      // TODO: This should be lazily instantiated since we usually do NOT need this default encoder
-920      this.defaultBlockEncodingCtx = new HFileBlockDefaultEncodingContext(null,
-921          HConstants.HFILEBLOCK_DUMMY_HEADER, fileContext);
-922      // TODO: Set BAOS initial size. Use fileContext.getBlocksize() and add for header/checksum
-923      baosInMemory = new ByteArrayOutputStream();
-924      prevOffsetByType = new long[BlockType.values().length];
-925      for (int i = 0; i < prevOffsetByType.length; ++i) {
-926        prevOffsetByType[i] = UNSET;
-927      }
-928      // TODO: Why fileContext saved away when we have dataBlockEncoder and/or
-929      // defaultDataBlockEncoder?
-930      this.fileContext = fileContext;
-931    }
-932
-933    /**
-934     * Starts writing into the block. The previous block's data is discarded.
-935     *
-936     * @return the stream the user can write their data into
-937     * @throws IOException
-938     */
-939    DataOutputStream startWriting(BlockType newBlockType)
-940        throws IOException {
-941      if (state == State.BLOCK_READY && startOffset != -1) {
-942        // We had a previous block that was written to a stream at a specific
-943        // offset. Save that offset as the last offset of a block of that type.
-944        prevOffsetByType[blockType.getId()] = startOffset;
-945      }
-946
-947      startOffset = -1;
-948      blockType = newBlockType;
-949
-950      baosInMemory.reset();
-951      baosInMemory.write(HConstants.HFILEBLOCK_DUMMY_HEADER);
-952
-953      state = State.WRITING;
-954
-955      // We will compress it later in finishBlock()
-956      userDataStream = new ByteBufferWriterDataOutputStream(baosInMemory);
-957      if (newBlockType == BlockType.DATA) {
-958        this.dataBlockEncoder.startBlockEncoding(dataBlockEncodingCtx, userDataStream);
-959      }
-960      this.unencodedDataSizeWritten = 0;
-961      return userDataStream;
-962    }
-963
-964    /**
-965     * Writes the Cell to this block
-966     * @param cell
-967     * @throws IOException
-968     */
-969    void write(Cell cell) throws IOException{
-970
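
The Writer constructor above rejects any bytesPerChecksum smaller than the block header, because the first checksum chunk must cover at least the header bytes. A small illustrative sketch of the related chunk arithmetic (this helper is not HBase's ChecksumUtil, and the 33-byte header size is an assumption inferred from the constraint above):

public final class ChecksumMath {
  // Assumed value of HConstants.HFILEBLOCK_HEADER_SIZE (header plus checksum fields).
  static final int HFILEBLOCK_HEADER_SIZE = 33;

  // One checksum value is stored per bytesPerChecksum bytes of block data.
  static int numChunks(int dataSizeWithHeader, int bytesPerChecksum) {
    if (bytesPerChecksum < HFILEBLOCK_HEADER_SIZE) {
      throw new IllegalArgumentException(
          "Unsupported value of bytesPerChecksum: " + bytesPerChecksum);
    }
    // Ceiling division: a partial trailing chunk still gets its own checksum.
    return (dataSizeWithHeader + bytesPerChecksum - 1) / bytesPerChecksum;
  }

  public static void main(String[] args) {
    // A 64 KiB block plus header with the default 16 KiB chunk size -> 5 checksums.
    System.out.println(numChunks(64 * 1024 + HFILEBLOCK_HEADER_SIZE, 16 * 1024));
  }
}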

[36/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
index fa9f873..a430157 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
@@ -113,7 +113,7 @@ var activeTableTab = "activeTableTab";

-private static class HFileBlock.PrefetchedHeader
+private static class HFileBlock.PrefetchedHeader
 extends Object
 Data-structure to use caching the header of the NEXT block. Only works if next read
  that comes in here is next in sequence in this block.
@@ -219,7 +219,7 @@ extends Object

 offset
-long offset
+long offset

@@ -228,7 +228,7 @@ extends Object

 header
-byte[] header
+byte[] header

@@ -237,7 +237,7 @@ extends Object

 buf
-final ByteBuffer buf
+final ByteBuffer buf

@@ -254,7 +254,7 @@ extends Object

 PrefetchedHeader
-private PrefetchedHeader()
+private PrefetchedHeader()

@@ -271,7 +271,7 @@ extends Object

 toString
-public String toString()
+public String toString()

 Overrides:
 toString in class Object
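
PrefetchedHeader caches the on-disk header of the NEXT block so a strictly sequential reader can skip one short read per block. A toy sketch of the idea (field and method names, and the 33-byte header size, are illustrative rather than HBase's actual code):

import java.nio.ByteBuffer;

final class HeaderCache {
  private long offset = -1; // file offset the cached header belongs to
  private final byte[] header = new byte[33]; // assumed block header size
  private final ByteBuffer buf = ByteBuffer.wrap(header);

  // Called after reading header+data+next-header in one I/O: remember the
  // trailing header under the offset where the next block starts.
  void store(long nextBlockOffset, byte[] src, int srcPos) {
    System.arraycopy(src, srcPos, header, 0, header.length);
    offset = nextBlockOffset;
  }

  // Only works if the next read that comes in is next in sequence; any
  // non-sequential access misses and must go back to the filesystem.
  ByteBuffer getIfSequential(long requestedOffset) {
    return requestedOffset == offset ? buf.duplicate() : null;
  }
}
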
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
index f9dd88a..8386b16 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -192,52 +192,56 @@ extends Object

+private int
+encodedDataSizeWritten
+
 private HFileContext
 fileContext
 Meta data that holds information about the hfileblock

 private ByteArrayOutputStream
 onDiskBlockBytesWithHeader
 Bytes to be written to the file system, including the header.

 private byte[]
 onDiskChecksum
 The size of the checksum data on disk.

 private long
 prevOffset
 The offset of the previous block of the same type

 private long[]
 prevOffsetByType
 Offset of previous block by block type.

 private long
 startOffset
 Current block's start offset in the HFile.

 private HFileBlock.Writer.State
 state
 Writer state.

 private int
 unencodedDataSizeWritten

 private DataOutputStream
 userDataStream
 A stream that we write uncompressed bytes to, which compresses them and
@@ -298,77 +302,84 @@ extends Object

+int
+encodedBlockSizeWritten()
+Returns the number of bytes written into the current block so far, or
+ zero if not writing
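
The new encodedDataSizeWritten field and encodedBlockSizeWritten() accessor track the block's size in its encoded form, which is the figure a size-based block boundary check would compare against once a data block encoding is active. A hedged sketch of that sizing decision (names invented; the real check lives in the HFile writer):

final class BlockSizing {
  // With an encoding enabled, the encoded byte count decides when a block
  // is full; otherwise the raw (unencoded) byte count does.
  static boolean shouldFinishBlock(int unencodedBytes, int encodedBytes,
      boolean encodingEnabled, int targetBlockSize) {
    int effective = encodingEnabled ? encodedBytes : unencodedBytes;
    return effective >= targetBlockSize;
  }

  public static void main(String[] args) {
    // 70 KiB of raw cells that encode down to 40 KiB do not yet fill a
    // 64 KiB block when encoding is on, but would without encoding.
    System.out.println(shouldFinishBlock(70 * 1024, 40 * 1024, true, 64 * 1024));  // false
    System.out.println(shouldFinishBlock(70 * 1024, 40 * 1024, false, 64 * 1024)); // true
  }
}
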

[01/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site 4f60e1ab0 -> 6f2e75f27


http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
index a665139..3fedd0b 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html

[46/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/checkstyle-aggregate.html
--
diff --git a/checkstyle-aggregate.html b/checkstyle-aggregate.html
index 61eb5a1..4741e84 100644
--- a/checkstyle-aggregate.html
+++ b/checkstyle-aggregate.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Checkstyle Results
 
@@ -289,7 +289,7 @@
 2155
 0
 0
-14361
+14364
 
 Files
 
@@ -787,7 +787,7 @@
 org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
 0
 0
-31
+32
 
 org/apache/hadoop/hbase/client/AsyncMetaRegionLocator.java
 0
@@ -,7 +,7 @@
 org/apache/hadoop/hbase/io/hfile/HFileBlock.java
 0
 0
-46
+47
 
 org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
 0
@@ -2267,7 +2267,7 @@
 org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
 0
 0
-22
+23
 
 org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
 0
@@ -7169,7 +7169,7 @@
 
 max: 100
 ignorePattern: ^package.*|^import.*|a 
href|href|http://|https://|ftp://|org.apache.thrift.|com.google.protobuf.|hbase.protobuf.generated
-762
+765
 Error
 
 
@@ -14524,197 +14524,203 @@
 imports
 ImportOrder
 Wrong order for 'com.google.common.annotations.VisibleForTesting' 
import.
-38
+39
 
 Error
 imports
 ImportOrder
 Wrong order for 'org.apache.hadoop.hbase.NamespaceDescriptor' import.
-52
+53
 
 Error
 imports
 ImportOrder
 Wrong order for 'org.apache.hadoop.hbase.HConstants' import.
-53
+54
 
 Error
 imports
 ImportOrder
 Wrong order for 'org.apache.hadoop.hbase.AsyncMetaTableAccessor' 
import.
-56
+57
 
 Error
 imports
 ImportOrder
 Wrong order for 
'org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnRequest'
 import.
-98
+102
 
 Error
 imports
 ImportOrder
 Wrong order for 
'org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateTableRequest'
 import.
-110
+116
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 101).
-340
+347
 
 Error
 indentation
 Indentation
 'if' have incorrect indentation level 10, expected level should be 22.
-383
+390
 
 Error
 indentation
 Indentation
 'if' child have incorrect indentation level 12, expected level should be 
24.
-384
+391
 
 Error
 indentation
 Indentation
 'if' child have incorrect indentation level 12, expected level should be 
24.
-385
+392
 
 Error
 indentation
 Indentation
 'if rcurly' have incorrect indentation level 10, expected level should be 
22.
-386
+393
 
 Error
 indentation
 Indentation
 'if' have incorrect indentation level 10, expected level should be 22.
-387
+394
 
 Error
 indentation
 Indentation
 'if' child have incorrect indentation level 12, expected level should be 
24.
-388
+395
 
 Error
 indentation
 Indentation
 'if rcurly' have incorrect indentation level 10, expected level should be 
22.
-389
+396
 
 Error
 indentation
 Indentation
 'else' child have incorrect indentation level 12, expected level should be 
24.
-390
+397
 
 Error
 indentation
 Indentation
 'else rcurly' have incorrect indentation level 10, expected level should 
be 22.
-391
+398
 
 Error
 indentation
 Indentation
 'block rcurly' have incorrect indentation level 8, expected level should 
be 20.
-392
+399
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 104).
-615
+622
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 103).
-622
+629
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 105).
-676
+683
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 111).
-688
+695
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 105).
-726
+733
 
 Error
 blocks
 NeedBraces
 'if' construct must use '{}'s.
-796
+803
 
 Error
 blocks
 NeedBraces
 'if' construct must use '{}'s.
-949
+956
 
 Error
 blocks
 NeedBraces
 'if' construct must use '{}'s.
-955
+962
 
 Error
 javadoc
 NonEmptyAtclauseDescription
 At-clause should have a non-empty description.
-1062
+1069
 
 Error
 javadoc
 NonEmptyAtclauseDescription
 At-clause should have a non-empty description.
-1063
+1070
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 102).
-1236
+1243
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 111).
-1285
+1292
 
 Error
 sizes
 LineLength
 Line is longer than 100 characters (found 114).
-1367
+1374
 
 Error
 sizes
 LineLength
+Line is longer than 100 characters (found 107).
+1652
+
+Error
+sizes
+LineLength
 Line is longer than 100 characters (found 104).
-1776
+2019
 
 org/apache/hadoop/hbase/client/AsyncMetaRegionLocator.java
 
-
+
 Severity
 Category
 Rule
 Message
 Line
-
+
 Error
 imports
 AvoidStarImport
@@ -14723,31 +14729,31 @@
 
 org/apache/hadoop/hbase/client/AsyncNonMetaRegionLocator.java
 
-
+
 Severity
 Category
 Rule
 Message
 Line
-
+
 Error
 design
 VisibilityModifier
 Variable 'locateType' must be private and have accessor methods.
 80
-
+
 Error
 design
 VisibilityModifier
 Variable 'cache' must be private and have accessor methods.
 104

[15/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.ProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 

[24/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import 
java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import 
java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import 
com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import 
org.apache.commons.logging.LogFactory;
-044import 
org.apache.hadoop.hbase.HColumnDescriptor;
-045import 
org.apache.hadoop.hbase.HRegionInfo;
-046import 
org.apache.hadoop.hbase.HRegionLocation;
-047import 
org.apache.hadoop.hbase.MetaTableAccessor;
-048import 
org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import 
org.apache.hadoop.hbase.NotServingRegionException;
-050import 
org.apache.hadoop.hbase.RegionLocations;
-051import 
org.apache.hadoop.hbase.ServerName;
-052import 
org.apache.hadoop.hbase.NamespaceDescriptor;
-053import 
org.apache.hadoop.hbase.HConstants;
-054import 
org.apache.hadoop.hbase.TableExistsException;
-055import 
org.apache.hadoop.hbase.TableName;
-056import 
org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import 
org.apache.hadoop.hbase.TableNotFoundException;
-058import 
org.apache.hadoop.hbase.UnknownRegionException;
-059import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-061import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import 
org.apache.hadoop.hbase.client.Scan.ReadType;
-064import 
org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import 
org.apache.hadoop.hbase.client.replication.TableCFs;
-066import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import 
org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import 
org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import 
org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import 
org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import 
org.apache.hadoop.hbase.replication.ReplicationException;
-072import 
org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import 
org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import 
org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import 

[31/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/HConstants.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/HConstants.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/HConstants.html
index b4e1ffc..8adbacb 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/HConstants.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/HConstants.html
@@ -1363,10 +1363,15 @@
 1355      "hbase.snapshot.restore.take.failsafe.snapshot";
 1356  public static final boolean DEFAULT_SNAPSHOT_RESTORE_TAKE_FAILSAFE_SNAPSHOT = false;
 1357
-1358  private HConstants() {
-1359    // Can't be instantiated with this ctor.
-1360  }
-1361}
+1358  public static final String SNAPSHOT_RESTORE_FAILSAFE_NAME =
+1359      "hbase.snapshot.restore.failsafe.name";
+1360  public static final String DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME =
+1361      "hbase-failsafe-{snapshot.name}-{restore.timestamp}";
+1362
+1363  private HConstants() {
+1364    // Can't be instantiated with this ctor.
+1365  }
+1366}
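
The new constants name the failsafe snapshot through a template; the {snapshot.name} and {restore.timestamp} placeholders are filled in when the restore runs. A small sketch of that substitution (the expansion below is paraphrased for illustration, not copied from HBase):

public final class FailsafeName {
  // Mirrors DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME from the hunk above.
  static final String TEMPLATE = "hbase-failsafe-{snapshot.name}-{restore.timestamp}";

  static String expand(String template, String snapshotName, long timestampMillis) {
    return template
        .replace("{snapshot.name}", snapshotName)
        .replace("{restore.timestamp}", String.valueOf(timestampMillis));
  }

  public static void main(String[] args) {
    // Prints e.g. hbase-failsafe-nightly_backup-1493391540000
    System.out.println(expand(TEMPLATE, "nightly_backup", 1493391540000L));
  }
}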
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/Version.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/Version.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/Version.html
index 2d5588c..09102b9 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/Version.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/Version.html
@@ -16,11 +16,11 @@
 008@InterfaceAudience.Private
 009public class Version {
 010  public static final String version = "2.0.0-SNAPSHOT";
-011  public static final String revision = "b81e00f5eabe8d99fd77d74f60e3754add8205da";
+011  public static final String revision = "5411d3ecb156a5128b9045bdb4e58850a10968fb";
 012  public static final String user = "jenkins";
-013  public static final String date = "Thu Apr 27 22:33:25 UTC 2017";
+013  public static final String date = "Fri Apr 28 14:39:00 UTC 2017";
 014  public static final String url = "git://asf920.gq1.ygridcore.net/home/jenkins/jenkins-slave/workspace/hbase_generate_website/hbase";
-015  public static final String srcChecksum = "f5899e22d243da9cc32fb736a2ec9230";
+015  public static final String srcChecksum = "25fa6c8e42c05e161da7d3fbf2f2d8d2";
 016}
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html
index a043eca..fd97352 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html
@@ -671,7 +671,85 @@
 663   * @param tableName name of the table where the snapshot will be restored
 664   */
 665  CompletableFuture<Void> cloneSnapshot(final String snapshotName, final TableName tableName);
-666}
+666
+667  /**
+668   * List completed snapshots.
+669   * @return a list of snapshot descriptors for completed snapshots wrapped by a
+670   * {@link CompletableFuture}
+671   */
+672  CompletableFuture<List<SnapshotDescription>> listSnapshots();
+673
+674  /**
+675   * List all the completed snapshots matching the given regular expression.
+676   * @param regex The regular expression to match against
+677   * @return - returns a List of SnapshotDescription wrapped by a {@link CompletableFuture}
+678   */
+679  CompletableFuture<List<SnapshotDescription>> listSnapshots(String regex);
+680
+681  /**
+682   * List all the completed snapshots matching the given pattern.
+683   * @param pattern The compiled regular expression to match against
+684   * @return - returns a List of SnapshotDescription wrapped by a {@link CompletableFuture}
+685   */
+686  CompletableFuture<List<SnapshotDescription>> listSnapshots(Pattern pattern);
+687
+688  /**
+689   * List all the completed snapshots matching the given table name regular expression and snapshot
+690   * name regular expression.
+691   * @param tableNameRegex The table name regular expression to match against
+692   * @param snapshotNameRegex The snapshot name regular expression to match against
+693   * @return - returns a List of completed SnapshotDescription wrapped by a
+694   * {@link CompletableFuture}
+695   */
+696  CompletableFuture<List<SnapshotDescription>> listTableSnapshots(String tableNameRegex,
+697      String snapshotNameRegex);
+698
+699  /**
+700   * List all the completed snapshots matching the given table name regular expression and snapshot
+701   * name regular expression.
+702   * @param tableNamePattern The compiled table name regular expression to match against
+703   * 
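
For context, a minimal sketch of how the new methods above might be called, assuming an AsyncAdmin instance named admin is already available (the snapshot name pattern below is made up for illustration):

    // List completed snapshots whose names match a regex and print them.
    admin.listSnapshots("backup-.*")
        .thenAccept(snapshots ->
            snapshots.forEach(desc -> System.out.println(desc.getName())))
        .join(); // block here only so the sample does not exit early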

[13/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.TableProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@
 023import java.util.ArrayList;
 024import java.util.Arrays;
 025import java.util.Collection;
-026import java.util.HashMap;
-027import java.util.LinkedList;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.Optional;
-031import java.util.concurrent.CompletableFuture;
-032import java.util.concurrent.TimeUnit;
-033import java.util.concurrent.atomic.AtomicReference;
-034import java.util.function.BiConsumer;
-035import java.util.regex.Pattern;
-036import java.util.stream.Collectors;
-037
-038import com.google.common.annotations.VisibleForTesting;
-039
-040import io.netty.util.Timeout;
-041import io.netty.util.TimerTask;
-042import org.apache.commons.logging.Log;
-043import org.apache.commons.logging.LogFactory;
-044import org.apache.hadoop.hbase.HColumnDescriptor;
-045import org.apache.hadoop.hbase.HRegionInfo;
-046import org.apache.hadoop.hbase.HRegionLocation;
-047import org.apache.hadoop.hbase.MetaTableAccessor;
-048import org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
-049import org.apache.hadoop.hbase.NotServingRegionException;
-050import org.apache.hadoop.hbase.RegionLocations;
-051import org.apache.hadoop.hbase.ServerName;
-052import org.apache.hadoop.hbase.NamespaceDescriptor;
-053import org.apache.hadoop.hbase.HConstants;
-054import org.apache.hadoop.hbase.TableExistsException;
-055import org.apache.hadoop.hbase.TableName;
-056import org.apache.hadoop.hbase.AsyncMetaTableAccessor;
-057import org.apache.hadoop.hbase.TableNotFoundException;
-058import org.apache.hadoop.hbase.UnknownRegionException;
-059import org.apache.hadoop.hbase.classification.InterfaceAudience;
-060import org.apache.hadoop.hbase.classification.InterfaceStability;
-061import org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder;
-062import org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterRequestCallerBuilder;
-063import org.apache.hadoop.hbase.client.Scan.ReadType;
-064import org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
-065import org.apache.hadoop.hbase.client.replication.TableCFs;
-066import org.apache.hadoop.hbase.exceptions.DeserializationException;
-067import org.apache.hadoop.hbase.ipc.HBaseRpcController;
-068import org.apache.hadoop.hbase.quotas.QuotaFilter;
-069import org.apache.hadoop.hbase.quotas.QuotaSettings;
-070import org.apache.hadoop.hbase.quotas.QuotaTableUtil;
-071import org.apache.hadoop.hbase.replication.ReplicationException;
-072import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-073import org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
-074import org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
-075import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-076import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
-077import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-078import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
-079import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionResponse;
-080import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionRequest;
-081import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.SplitRegionResponse;
-082import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-083import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-084import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-085import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-086import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-087import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-088import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-089import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-090import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-091import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-092import 

[39/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
index cb9c2ad..b4ea0f0 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateTableProcedureBiConsumer.html
@@ -127,7 +127,7 @@

-private class AsyncHBaseAdmin.CreateTableProcedureBiConsumer
+private class AsyncHBaseAdmin.CreateTableProcedureBiConsumer
 extends AsyncHBaseAdmin.TableProcedureBiConsumer

@@ -240,7 +240,7 @@

 CreateTableProcedureBiConsumer
-CreateTableProcedureBiConsumer(AsyncAdmin admin,
+CreateTableProcedureBiConsumer(AsyncAdmin admin,
                                TableName tableName)

@@ -258,7 +258,7 @@

 getOperationType
-String getOperationType()
+String getOperationType()

 Specified by:
 getOperationType in class AsyncHBaseAdmin.TableProcedureBiConsumer

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
index a537767..7d4fe0a 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer.html
@@ -127,7 +127,7 @@

-private class AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer
+private class AsyncHBaseAdmin.DeleteColumnFamilyProcedureBiConsumer
 extends AsyncHBaseAdmin.TableProcedureBiConsumer

@@ -240,7 +240,7 @@

 DeleteColumnFamilyProcedureBiConsumer
-DeleteColumnFamilyProcedureBiConsumer(AsyncAdmin admin,
+DeleteColumnFamilyProcedureBiConsumer(AsyncAdmin admin,
                                       TableName tableName)

@@ -258,7 +258,7 @@

 getOperationType
-String getOperationType()
+String getOperationType()

 Specified by:
 getOperationType in class AsyncHBaseAdmin.TableProcedureBiConsumer

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
index 664d7de..1b4b7d5 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer.html
@@ -127,7 +127,7 @@

-private class AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer
+private class AsyncHBaseAdmin.DeleteNamespaceProcedureBiConsumer
 extends AsyncHBaseAdmin.NamespaceProcedureBiConsumer

@@ -240,7 +240,7 @@

 DeleteNamespaceProcedureBiConsumer
-DeleteNamespaceProcedureBiConsumer(AsyncAdmin admin,
+DeleteNamespaceProcedureBiConsumer(AsyncAdmin admin,
                                    String namespaceName)

@@ -258,7 +258,7 @@

 getOperationType
-String getOperationType()
+String getOperationType()

 Specified by:
 getOperationType in class AsyncHBaseAdmin.NamespaceProcedureBiConsumer


[19/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MergeTableRegionProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MergeTableRegionProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MergeTableRegionProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MergeTableRegionProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MergeTableRegionProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@

[27/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateNamespaceProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateNamespaceProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateNamespaceProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateNamespaceProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.CreateNamespaceProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@

[34/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplitThread.CompactionRunner.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplitThread.CompactionRunner.html
 
b/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplitThread.CompactionRunner.html
index 72bcfe2..ac8e764 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplitThread.CompactionRunner.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/regionserver/CompactSplitThread.CompactionRunner.html
@@ -117,7 +117,7 @@

-private class CompactSplitThread.CompactionRunner
+private class CompactSplitThread.CompactionRunner
 extends Object
 implements Runnable, Comparable<CompactSplitThread.CompactionRunner>

@@ -246,7 +246,7 @@

 store
-private final Store store
+private final Store store

@@ -255,7 +255,7 @@

 region
-private final HRegion region
+private final HRegion region

@@ -264,7 +264,7 @@

 compaction
-private CompactionContext compaction
+private CompactionContext compaction

@@ -273,7 +273,7 @@

 queuedPriority
-private int queuedPriority
+private int queuedPriority

@@ -282,7 +282,7 @@

 parent
-private ThreadPoolExecutor parent
+private ThreadPoolExecutor parent

@@ -291,7 +291,7 @@

 user
-private User user
+private User user

@@ -308,7 +308,7 @@

 CompactionRunner
-public CompactionRunner(Store store,
+public CompactionRunner(Store store,
                         Region region,
                         CompactionContext compaction,
                         ThreadPoolExecutor parent,
                         User user)

@@ -329,7 +329,7 @@

 toString
-public String toString()
+public String toString()

 Overrides:
 toString in class Object

@@ -342,7 +342,7 @@

 doCompaction
-private void doCompaction(User user)
+private void doCompaction(User user)

@@ -351,7 +351,7 @@

 run
-public void run()
+public void run()

 Specified by:
 run in interface Runnable

@@ -364,7 +364,7 @@

 formatStackTrace
-private String formatStackTrace(Exception ex)
+private String formatStackTrace(Exception ex)

[30/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.AddColumnFamilyProcedureBiConsumer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.AddColumnFamilyProcedureBiConsumer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.AddColumnFamilyProcedureBiConsumer.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.AddColumnFamilyProcedureBiConsumer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.AddColumnFamilyProcedureBiConsumer.html
@@ -31,1797 +31,2040 @@

[06/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
index a665139..3fedd0b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
@@ -879,1201 +879,1221 @@
 871    // includes the header size also.
 872    private int unencodedDataSizeWritten;
 873
-874    /**
-875     * Bytes to be written to the file system, including the header. Compressed
-876     * if compression is turned on. It also includes the checksum data that
-877     * immediately follows the block data. (header + data + checksums)
-878     */
-879    private ByteArrayOutputStream onDiskBlockBytesWithHeader;
-880
-881    /**
-882     * The size of the checksum data on disk. It is used only if data is
-883     * not compressed. If data is compressed, then the checksums are already
-884     * part of onDiskBytesWithHeader. If data is uncompressed, then this
-885     * variable stores the checksum data for this block.
-886     */
-887    private byte[] onDiskChecksum = HConstants.EMPTY_BYTE_ARRAY;
-888
-889    /**
-890     * Current block's start offset in the {@link HFile}. Set in
-891     * {@link #writeHeaderAndData(FSDataOutputStream)}.
-892     */
-893    private long startOffset;
-894
-895    /**
-896     * Offset of previous block by block type. Updated when the next block is
-897     * started.
-898     */
-899    private long[] prevOffsetByType;
-900
-901    /** The offset of the previous block of the same type */
-902    private long prevOffset;
-903    /** Meta data that holds information about the hfileblock**/
-904    private HFileContext fileContext;
-905
-906    /**
-907     * @param dataBlockEncoder data block encoding algorithm to use
-908     */
-909    public Writer(HFileDataBlockEncoder dataBlockEncoder, HFileContext fileContext) {
-910      if (fileContext.getBytesPerChecksum() < HConstants.HFILEBLOCK_HEADER_SIZE) {
-911        throw new RuntimeException("Unsupported value of bytesPerChecksum. " +
-912            " Minimum is " + HConstants.HFILEBLOCK_HEADER_SIZE + " but the configured value is " +
-913            fileContext.getBytesPerChecksum());
-914      }
-915      this.dataBlockEncoder = dataBlockEncoder != null?
-916          dataBlockEncoder: NoOpDataBlockEncoder.INSTANCE;
-917      this.dataBlockEncodingCtx = this.dataBlockEncoder.
-918          newDataBlockEncodingContext(HConstants.HFILEBLOCK_DUMMY_HEADER, fileContext);
-919      // TODO: This should be lazily instantiated since we usually do NOT need this default encoder
-920      this.defaultBlockEncodingCtx = new HFileBlockDefaultEncodingContext(null,
-921          HConstants.HFILEBLOCK_DUMMY_HEADER, fileContext);
-922      // TODO: Set BAOS initial size. Use fileContext.getBlocksize() and add for header/checksum
-923      baosInMemory = new ByteArrayOutputStream();
-924      prevOffsetByType = new long[BlockType.values().length];
-925      for (int i = 0; i < prevOffsetByType.length; ++i) {
-926        prevOffsetByType[i] = UNSET;
-927      }
-928      // TODO: Why fileContext saved away when we have dataBlockEncoder and/or
-929      // defaultDataBlockEncoder?
-930      this.fileContext = fileContext;
-931    }
-932
-933    /**
-934     * Starts writing into the block. The previous block's data is discarded.
-935     *
-936     * @return the stream the user can write their data into
-937     * @throws IOException
-938     */
-939    DataOutputStream startWriting(BlockType newBlockType)
-940        throws IOException {
-941      if (state == State.BLOCK_READY && startOffset != -1) {
-942        // We had a previous block that was written to a stream at a specific
-943        // offset. Save that offset as the last offset of a block of that type.
-944        prevOffsetByType[blockType.getId()] = startOffset;
-945      }
-946
-947      startOffset = -1;
-948      blockType = newBlockType;
-949
-950      baosInMemory.reset();
-951      baosInMemory.write(HConstants.HFILEBLOCK_DUMMY_HEADER);
-952
-953      state = State.WRITING;
-954
-955      // We will compress it later in finishBlock()
-956      userDataStream = new ByteBufferWriterDataOutputStream(baosInMemory);
-957      if (newBlockType == BlockType.DATA) {
-958        this.dataBlockEncoder.startBlockEncoding(dataBlockEncodingCtx, userDataStream);
-959      }
-960      this.unencodedDataSizeWritten = 0;
-961      return userDataStream;
-962    }
-963
-964    /**
-965     * Writes the Cell to this block
-966     * @param cell
-967     * @throws IOException
-968     */
-969    void write(Cell cell) 
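
The removed comments above describe the on-disk layout as header + data + checksums; as a rough illustration only (not HBase code), the number of checksum chunks for a block is a ceiling division by the bytesPerChecksum setting mentioned in the constructor:

    // Illustrative only: one checksum per bytesPerChecksum-sized chunk of (header + data).
    static int numChecksumChunks(int headerPlusDataSize, int bytesPerChecksum) {
      return (headerPlusDataSize + bytesPerChecksum - 1) / bytesPerChecksum;
    }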

[20/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MasterRpcCall.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MasterRpcCall.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MasterRpcCall.html
index 6c52543..f3f7a46 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MasterRpcCall.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.MasterRpcCall.html
@@ -31,1797 +31,2040 @@

[47/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html 
b/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html
index b4e1ffc..8adbacb 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html
@@ -1363,10 +1363,15 @@
 1355      "hbase.snapshot.restore.take.failsafe.snapshot";
 1356  public static final boolean DEFAULT_SNAPSHOT_RESTORE_TAKE_FAILSAFE_SNAPSHOT = false;
 1357
-1358  private HConstants() {
-1359    // Can't be instantiated with this ctor.
-1360  }
-1361}
+1358  public static final String SNAPSHOT_RESTORE_FAILSAFE_NAME =
+1359      "hbase.snapshot.restore.failsafe.name";
+1360  public static final String DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME =
+1361      "hbase-failsafe-{snapshot.name}-{restore.timestamp}";
+1362
+1363  private HConstants() {
+1364    // Can't be instantiated with this ctor.
+1365  }
+1366}
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/bulk-loads.html
--
diff --git a/bulk-loads.html b/bulk-loads.html
index 47c6028..07534a1 100644
--- a/bulk-loads.html
+++ b/bulk-loads.html
@@ -7,7 +7,7 @@

 Apache HBase - Bulk Loads in Apache HBase (TM)

@@ -311,7 +311,7 @@
 https://www.apache.org/ - The Apache Software Foundation. All rights reserved.
-  Last Published: 2017-04-27
+  Last Published: 2017-04-28



[41/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/backup/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/backup/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/backup/package-tree.html
index 850c242..72dbc96 100644
--- a/devapidocs/org/apache/hadoop/hbase/backup/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/backup/package-tree.html
@@ -165,10 +165,10 @@

 java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable)

-org.apache.hadoop.hbase.backup.BackupType
-org.apache.hadoop.hbase.backup.BackupRestoreConstants.BackupCommand
 org.apache.hadoop.hbase.backup.BackupInfo.BackupPhase
 org.apache.hadoop.hbase.backup.BackupInfo.BackupState
+org.apache.hadoop.hbase.backup.BackupType
+org.apache.hadoop.hbase.backup.BackupRestoreConstants.BackupCommand
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html
index cb725da..ce8fb64 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html
@@ -3541,67 +3541,73 @@

+private CompletableFuture<Void>
+AsyncHBaseAdmin.restoreSnapshotWithFailSafe(String snapshotName,
+                                            TableName tableName,
+                                            boolean takeFailSafeSnapshot)
+
 private void
 TableDescriptorBuilder.ModifyableTableDescriptor.setMetaFlags(TableName name)

 (package private) void
 MasterCallable.setPriority(TableName tableName)

 AsyncProcessTask.Builder<T>
 AsyncProcessTask.Builder.setTableName(TableName tableName)

 private void
 HBaseAdmin.setTableRep(TableName tableName, boolean enableRep)
 Set the table's replication switch if the table's replication switch is already not set.

 void
 Admin.snapshot(byte[] snapshotName, TableName tableName)
 Create a timestamp consistent snapshot for the given table.

 void
 HBaseAdmin.snapshot(byte[] snapshotName, TableName tableName)

 CompletableFuture<Void>
 AsyncHBaseAdmin.snapshot(String snapshotName, TableName tableName)

 void
 Admin.snapshot(String snapshotName, TableName tableName)
 Take a snapshot for the given table.

 void
 HBaseAdmin.snapshot(String snapshotName, TableName tableName)

 CompletableFuture<Void>
 AsyncAdmin.snapshot(String snapshotName, TableName tableName)
 Take a snapshot for the given table.

 CompletableFuture<Void>
 title="class or interface in java.lang">Void
 

[42/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/org/apache/hadoop/hbase/HConstants.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/HConstants.html 
b/devapidocs/org/apache/hadoop/hbase/HConstants.html
index 57810f4..21f6fcd 100644
--- a/devapidocs/org/apache/hadoop/hbase/HConstants.html
+++ b/devapidocs/org/apache/hadoop/hbase/HConstants.html
@@ -622,300 +622,304 @@

+static String
+DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME
+
 static boolean
 DEFAULT_SNAPSHOT_RESTORE_TAKE_FAILSAFE_SNAPSHOT

 static String
 DEFAULT_STATUS_MULTICAST_ADDRESS

 static String
 DEFAULT_STATUS_MULTICAST_BIND_ADDRESS

 static int
 DEFAULT_STATUS_MULTICAST_PORT

 static String
 DEFAULT_TEMPORARY_HDFS_DIRECTORY

 static int
 DEFAULT_THREAD_WAKE_FREQUENCY
 Default value for thread wake frequency

 static boolean
 DEFAULT_USE_META_REPLICAS

 static int
 DEFAULT_VERSION_FILE_WRITE_ATTEMPTS
 Parameter name for how often we should try to write a version file, before failing

 static String
 DEFAULT_WAL_STORAGE_POLICY

 static int
 DEFAULT_ZK_SESSION_TIMEOUT
 Default value for ZooKeeper session timeout

 static String
 DEFAULT_ZOOKEEPER_ZNODE_PARENT

 static int
 DEFAULT_ZOOKEPER_CLIENT_PORT
 Default client port that the zookeeper listens on

 static int
 DEFAULT_ZOOKEPER_MAX_CLIENT_CNXNS
 Default limit on concurrent client-side zookeeper connections

 static long
 DEFAULT_ZOOKEPER_RECOVERABLE_WAITIME
 Default wait time for the recoverable zookeeper

 static int
 DELIMITER
 delimiter used between portions of a region name

 static String
 DISALLOW_WRITES_IN_RECOVERING

 static String
 DISTRIBUTED_LOG_REPLAY_KEY
 Conf key that enables unflushed WAL edits directly being replayed to region servers

 static byte[]
 EMPTY_BYTE_ARRAY
 An empty instance.

 static ByteBuffer
 EMPTY_BYTE_BUFFER

 static byte[]
 EMPTY_END_ROW
 Last row in a table.

 static byte[]
 EMPTY_START_ROW
 Used by scanners, etc when they want to start at the beginning of a region

 static String
 ENABLE_CLIENT_BACKPRESSURE
 Config key for if the server should send backpressure and if the client should listen to
 that backpressure from the server

 static String
 ENABLE_DATA_FILE_UMASK
 Enable file permission modification from standard hbase

 static String
 ENABLE_WAL_COMPRESSION
 Configuration name of WAL Compression

 static String
 ENABLE_WAL_ENCRYPTION
 Configuration key for enabling WAL encryption, a boolean

 static TableName
 ENSEMBLE_TABLE_NAME
 The name of the ensemble table

 static String
 FILE_SYSTEM_VERSION
 Current version of file system.

 static int
 FOREVER
 Unlimited time-to-live.

 static String
 HBASE_BALANCER_MAX_BALANCING
 Config for the max balancing time

 static String

[44/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/constant-values.html
--
diff --git a/devapidocs/constant-values.html b/devapidocs/constant-values.html
index 0bc28de..d82576b 100644
--- a/devapidocs/constant-values.html
+++ b/devapidocs/constant-values.html
@@ -1302,1223 +1302,1237 @@
 16020

+public static final String DEFAULT_SNAPSHOT_RESTORE_FAILSAFE_NAME
+"hbase-failsafe-{snapshot.name}-{restore.timestamp}"

[remainder of the generated constant-values.html diff elided: anchor-id
renumbering of the existing HConstants rows, from
DEFAULT_SNAPSHOT_RESTORE_TAKE_FAILSAFE_SNAPSHOT through
HBASE_CANARY_WRITE_PERSERVER_REGIONS_LOWERLIMIT_KEY; the constant values
themselves are unchanged]

[03/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
index a665139..3fedd0b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
@@ -879,1201 +879,1221 @@
[generated src-html diff elided: the javadoc source rendering of
HFileBlock.java (Writer inner class, source lines ~871-2090) was republished.
The removed rendering covers the Writer fields (onDiskBlockBytesWithHeader,
the in-memory bytes destined for the file system as header + data + checksums;
onDiskChecksum; startOffset; prevOffsetByType; fileContext) and methods (the
constructor, startWriting(), write()); operators such as && and < were lost
along with the HTML entities in this archive]
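
The doc comments in the elided rendering describe the writer's on-disk layout:
a block is header + data + checksums, with one 4-byte CRC per bytesPerChecksum
window over the header and data. A toy sketch of that arithmetic (all sizes
illustrative, not read from HConstants):

public class BlockLayoutSketch {
  public static void main(String[] args) {
    int headerSize = 33;          // illustrative header size
    int dataSize = 65536;         // payload written after the header
    int bytesPerChecksum = 16384; // checksum window over header + data
    // One 4-byte checksum per window covering header + data.
    int chunks = (headerSize + dataSize + bytesPerChecksum - 1) / bytesPerChecksum;
    int checksumSize = chunks * 4;
    System.out.println("on-disk block = "
        + (headerSize + dataSize + checksumSize) + " bytes");
  }
}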

[02/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
index a665139..3fedd0b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
[diff body identical to the HFileBlock source rendering elided under [03/51] above]

[07/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html
index a665139..3fedd0b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html
[diff body identical to the HFileBlock source rendering elided under [03/51] above]

[09/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
index a665139..3fedd0b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
[diff body identical to the HFileBlock source rendering elided under [03/51] above]

[04/51] [partial] hbase-site git commit: Published site at 82d554e3783372cc6b05489452c815b57c06f6cd.

2017-04-28 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6f2e75f2/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
index a665139..3fedd0b 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
[diff body identical to the HFileBlock source rendering elided under [03/51] above]

hbase git commit: HBASE-17972 Remove mergePool from CompactSplitThread (Guangxu Cheng)

2017-04-28 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/master b401a35fd -> 5411d3ecb


HBASE-17972 Remove mergePool from CompactSplitThread (Guangxu Cheng)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5411d3ec
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5411d3ec
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5411d3ec

Branch: refs/heads/master
Commit: 5411d3ecb156a5128b9045bdb4e58850a10968fb
Parents: b401a35
Author: tedyu 
Authored: Fri Apr 28 06:52:10 2017 -0700
Committer: tedyu 
Committed: Fri Apr 28 06:52:10 2017 -0700

--
 .../hbase/regionserver/CompactSplitThread.java  | 52 +---
 .../regionserver/TestCompactSplitThread.java|  6 ---
 2 files changed, 1 insertion(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5411d3ec/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
index eba984a..cddfccb 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
@@ -72,10 +72,6 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfigurationObserver
   // Configuration key for split threads
   public final static String SPLIT_THREADS = "hbase.regionserver.thread.split";
   public final static int SPLIT_THREADS_DEFAULT = 1;
-  
-  // Configuration keys for merge threads
-  public final static String MERGE_THREADS = "hbase.regionserver.thread.merge";
-  public final static int MERGE_THREADS_DEFAULT = 1;
 
   public static final String REGION_SERVER_REGION_SPLIT_LIMIT =
   "hbase.regionserver.regionSplitLimit";
@@ -87,7 +83,6 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfigurationObserver
   private final ThreadPoolExecutor longCompactions;
   private final ThreadPoolExecutor shortCompactions;
   private final ThreadPoolExecutor splits;
-  private final ThreadPoolExecutor mergePool;
 
   private volatile ThroughputController compactionThroughputController;
 
@@ -150,15 +145,6 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfigurationObserver
 return new Thread(r, name);
   }
   });
-int mergeThreads = conf.getInt(MERGE_THREADS, MERGE_THREADS_DEFAULT);
-this.mergePool = (ThreadPoolExecutor) Executors.newFixedThreadPool(
-mergeThreads, new ThreadFactory() {
-  @Override
-  public Thread newThread(Runnable r) {
-String name = n + "-merges-" + System.currentTimeMillis();
-return new Thread(r, name);
-  }
-});
 
 // compaction throughput controller
 this.compactionThroughputController =
@@ -170,8 +156,7 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfigurationObserver
 return "compaction_queue=("
 + longCompactions.getQueue().size() + ":"
 + shortCompactions.getQueue().size() + ")"
-+ ", split_queue=" + splits.getQueue().size()
-+ ", merge_queue=" + mergePool.getQueue().size();
++ ", split_queue=" + splits.getQueue().size();
   }
   
   public String dumpQueue() {
@@ -205,15 +190,6 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfigurationObserver
   queueLists.append("\n");
 }
 
-queueLists.append("\n");
-queueLists.append("  Region Merge Queue:\n");
-lq = mergePool.getQueue();
-it = lq.iterator();
-while (it.hasNext()) {
-  queueLists.append("" + it.next().toString());
-  queueLists.append("\n");
-}
-
 return queueLists.toString();
   }
 
@@ -372,7 +348,6 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfigurationObserver
*/
   void interruptIfNecessary() {
 splits.shutdown();
-mergePool.shutdown();
 longCompactions.shutdown();
 shortCompactions.shutdown();
   }
@@ -394,7 +369,6 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfigurationObserver
 
   void join() {
 waitFor(splits, "Split Thread");
-waitFor(mergePool, "Merge Thread");
 waitFor(longCompactions, "Large Compaction Thread");
 waitFor(shortCompactions, "Small Compaction Thread");
   }
@@ -641,21 +615,6 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfigurationObserver
   }
 }
 
-int mergeThreads = newConf.getInt(MERGE_THREADS,
-MERGE_THREADS_DEFAULT);
-if (this.mergePool.getCorePoolSize() != mergeThreads) {
-  
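
For reference, the removed merge pool was built with the same idiom as the
split pool that remains: Executors.newFixedThreadPool with a ThreadFactory
that stamps a role and a timestamp into each thread name. A minimal,
self-contained sketch of that idiom (the "example-server" prefix is
illustrative, not HBase's surviving code):

import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;

public class NamedPoolSketch {
  public static void main(String[] args) {
    final String n = "example-server";
    // Fixed pool of one thread, mirroring MERGE_THREADS_DEFAULT above.
    ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(
        1, new ThreadFactory() {
          @Override
          public Thread newThread(Runnable r) {
            // One descriptive name per thread, as in the removed block.
            return new Thread(r, n + "-merges-" + System.currentTimeMillis());
          }
        });
    pool.execute(() -> System.out.println(Thread.currentThread().getName()));
    pool.shutdown();
  }
}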

hbase git commit: HBASE-17950 Write the chunkId also as Int instead of long into the first byte of the chunk (Ram)

2017-04-28 Thread ramkrishna
Repository: hbase
Updated Branches:
  refs/heads/master 68b2e0f7d -> b401a35fd


HBASE-17950 Write the chunkId also as Int instead of long into the first
byte of the chunk (Ram)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b401a35f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b401a35f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b401a35f

Branch: refs/heads/master
Commit: b401a35fdc883c74847bc41131e5900939558dab
Parents: 68b2e0f
Author: Ramkrishna 
Authored: Fri Apr 28 14:43:19 2017 +0530
Committer: Ramkrishna 
Committed: Fri Apr 28 14:44:46 2017 +0530

--
 .../org/apache/hadoop/hbase/regionserver/Chunk.java |  4 ++--
 .../apache/hadoop/hbase/regionserver/OffheapChunk.java  |  2 +-
 .../apache/hadoop/hbase/regionserver/OnheapChunk.java   |  2 +-
 .../hadoop/hbase/regionserver/TestDefaultMemStore.java  |  4 ++--
 .../hbase/regionserver/TestMemStoreChunkPool.java   |  2 +-
 .../hadoop/hbase/regionserver/TestMemStoreLAB.java  | 12 ++--
 .../hbase/regionserver/TestMemstoreLABWithoutPool.java  |  8 
 7 files changed, 17 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b401a35f/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Chunk.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Chunk.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Chunk.java
index fc4aa0b..a45d801 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Chunk.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Chunk.java
@@ -100,8 +100,8 @@ public abstract class Chunk {
   throw e;
 }
 // Mark that it's ready for use
-// Move 8 bytes since the first 8 bytes are having the chunkid in it
-boolean initted = nextFreeOffset.compareAndSet(UNINITIALIZED, 
Bytes.SIZEOF_LONG);
+// Move 4 bytes since the first 4 bytes are having the chunkid in it
+boolean initted = nextFreeOffset.compareAndSet(UNINITIALIZED, 
Bytes.SIZEOF_INT);
 // We should always succeed the above CAS since only one thread
 // calls init()!
 Preconditions.checkState(initted, "Multiple threads tried to init same 
chunk");

http://git-wip-us.apache.org/repos/asf/hbase/blob/b401a35f/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OffheapChunk.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OffheapChunk.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OffheapChunk.java
index e244a33..f5d4905 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OffheapChunk.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OffheapChunk.java
@@ -41,7 +41,7 @@ public class OffheapChunk extends Chunk {
   void allocateDataBuffer() {
 if (data == null) {
   data = ByteBuffer.allocateDirect(this.size);
-  data.putLong(0, this.getId());
+  data.putInt(0, this.getId());
 }
   }
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/b401a35f/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OnheapChunk.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OnheapChunk.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OnheapChunk.java
index da34e24..38001ea 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OnheapChunk.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OnheapChunk.java
@@ -39,7 +39,7 @@ public class OnheapChunk extends Chunk {
   void allocateDataBuffer() {
 if (data == null) {
   data = ByteBuffer.allocate(this.size);
-  data.putLong(0, this.getId());
+  data.putInt(0, this.getId());
 }
   }
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/b401a35f/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
index 41b304b..3acb48b 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
@@ -140,8 +140,8 @@ public class 

hbase git commit: HBASE-14925 Develop HBase shell command/tool to list table's region info through command line

2017-04-28 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 cdda1d030 -> 3765e7bed


HBASE-14925 Develop HBase shell command/tool to list table's region info 
through command line

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3765e7be
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3765e7be
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3765e7be

Branch: refs/heads/branch-1
Commit: 3765e7bedb937044c8e0a416a7b44d41165ee48c
Parents: cdda1d0
Author: Karan Mehta 
Authored: Fri Apr 28 14:08:04 2017 +0530
Committer: Ashish Singhi 
Committed: Fri Apr 28 14:08:04 2017 +0530

--
 hbase-shell/src/main/ruby/shell.rb  |  1 +
 .../main/ruby/shell/commands/list_regions.rb| 76 
 2 files changed, 77 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3765e7be/hbase-shell/src/main/ruby/shell.rb
--
diff --git a/hbase-shell/src/main/ruby/shell.rb 
b/hbase-shell/src/main/ruby/shell.rb
index 9576cc7..99adf73 100644
--- a/hbase-shell/src/main/ruby/shell.rb
+++ b/hbase-shell/src/main/ruby/shell.rb
@@ -272,6 +272,7 @@ Shell.load_command_group(
 alter_async
 get_table
 locate_region
+list_regions
   ],
   :aliases => {
 'describe' => ['desc']

http://git-wip-us.apache.org/repos/asf/hbase/blob/3765e7be/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
--
diff --git a/hbase-shell/src/main/ruby/shell/commands/list_regions.rb 
b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
new file mode 100644
index 000..527a6cb
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
@@ -0,0 +1,76 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+class ListRegions < Command
+  def help
+return <<-EOF
+List all regions for a particular table as an array and also filter them
+by server name (optional) as prefix.
+
+Examples:
+hbase> list_regions 'table_name'
+hbase> list_regions 'table_name', 'server_name'
+
+EOF
+return
+  end
+
+  def command(table_name, region_server_name = "")
+admin_instance = admin.instance_variable_get("@admin")
+conn_instance = admin_instance.getConnection()
+cluster_status = admin_instance.getClusterStatus()
+hregion_locator_instance = 
conn_instance.getRegionLocator(TableName.valueOf(table_name))
+hregion_locator_list = hregion_locator_instance.getAllRegionLocations()
+results = Array.new
+
+begin
+  hregion_locator_list.each do |hregion|
+hregion_info = hregion.getRegionInfo()
+server_name = hregion.getServerName()
+if hregion.getServerName().toString.start_with? region_server_name
+  startKey = Bytes.toString(hregion.getRegionInfo().getStartKey())
+  endKey = Bytes.toString(hregion.getRegionInfo().getEndKey())
+  region_load_map = 
cluster_status.getLoad(server_name).getRegionsLoad()
+  region_load = region_load_map.get(hregion_info.getRegionName())
+  region_store_file_size = region_load.getStorefileSizeMB()
+  region_requests = region_load.getRequestsCount()
+  results << { "server" => hregion.getServerName().toString(), 
"name" => hregion_info.getRegionNameAsString(), "startkey" => startKey, 
"endkey" => endKey, "size" => region_store_file_size, "requests" => 
region_requests }
+end
+  end
+ensure
+  hregion_locator_instance.close()
+end
+
+@end_time = Time.now
+
+printf("%-60s | %-60s | %-15s | %-15s | %-20s | %-20s", "SERVER_NAME", 
"REGION_NAME", "START_KEY", "END_KEY", "SIZE", "REQ");
+printf("\n")
+for result in results
+  printf("%-60s | %-60s | %-15s | %-15s | %-20s | %-20s", 
result["server"], result["name"], result["startkey"], result["endkey"], 
result["size"], result["requests"]);
+printf("\n")
+ 
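
For readers who want the same listing outside the shell, the command wraps the
plain Java client API. A minimal sketch of those calls (assumes a reachable
cluster, the HBase client on the classpath, and an existing table;
'table_name' is a placeholder):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ListRegionsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("table_name"))) {
      // Same source the shell command iterates: one location per region.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getServerName() + " | "
            + loc.getRegionInfo().getRegionNameAsString());
      }
    }
  }
}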

hbase git commit: HBASE-14925 Develop HBase shell command/tool to list table's region info through command line

2017-04-28 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master c4cbb419a -> 68b2e0f7d


HBASE-14925 Develop HBase shell command/tool to list table's region info 
through command line

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/68b2e0f7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/68b2e0f7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/68b2e0f7

Branch: refs/heads/master
Commit: 68b2e0f7d94c02aa82ac89f2ec2f052bdcd58704
Parents: c4cbb41
Author: Karan Mehta 
Authored: Fri Apr 28 14:06:03 2017 +0530
Committer: Ashish Singhi 
Committed: Fri Apr 28 14:06:03 2017 +0530

--
 hbase-shell/src/main/ruby/shell.rb  |  1 +
 .../main/ruby/shell/commands/list_regions.rb| 76 
 2 files changed, 77 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/68b2e0f7/hbase-shell/src/main/ruby/shell.rb
--
diff --git a/hbase-shell/src/main/ruby/shell.rb 
b/hbase-shell/src/main/ruby/shell.rb
index fc55f94..a6aba76 100644
--- a/hbase-shell/src/main/ruby/shell.rb
+++ b/hbase-shell/src/main/ruby/shell.rb
@@ -285,6 +285,7 @@ Shell.load_command_group(
 alter_async
 get_table
 locate_region
+list_regions
   ],
   :aliases => {
 'describe' => ['desc']

http://git-wip-us.apache.org/repos/asf/hbase/blob/68b2e0f7/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
--
diff --git a/hbase-shell/src/main/ruby/shell/commands/list_regions.rb 
b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
new file mode 100644
index 000..527a6cb
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
@@ -0,0 +1,76 @@
[new-file body identical to the list_regions.rb added in the branch-1 commit above; elided]