[hbase-operator-tools] branch master created (now c5effc5)

2018-08-02 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase-operator-tools.git.


  at c5effc5  First push: LICENSE, README, hbase-hbck2 module

This branch includes the following new commits:

 new c5effc5  First push: LICENSE, README, hbase-hbck2 module

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  Revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[hbase-operator-tools] 01/01: First push: LICENSE, README, hbase-hbck2 module

2018-08-02 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase-operator-tools.git

commit c5effc592e440d83fbe9e5481d1ba95eb4afb363
Author: Michael Stack 
AuthorDate: Thu Aug 2 17:11:28 2018 -0700

First push: LICENSE, README, hbase-hbck2 module
---
 LICENSE.txt| 202 +
 README.md  |   3 +
 hbase-hbck2/pom.xml| 172 +++
 hbase-hbck2/src/main/avro/HbaseKafkaEvent.avro |  30 ++
 pom.xml| 385 +
 5 files changed, 792 insertions(+)

diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100755
index 000..1db8e3c
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+   Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+  "License" shall mean the terms and conditions for use, reproduction,
+  and distribution as defined by Sections 1 through 9 of this document.
+
+  "Licensor" shall mean the copyright owner or entity authorized by
+  the copyright owner that is granting the License.
+
+  "Legal Entity" shall mean the union of the acting entity and all
+  other entities that control, are controlled by, or are under common
+  control with that entity. For the purposes of this definition,
+  "control" means (i) the power, direct or indirect, to cause the
+  direction or management of such entity, whether by contract or
+  otherwise, or (ii) ownership of fifty percent (50%) or more of the
+  outstanding shares, or (iii) beneficial ownership of such entity.
+
+  "You" (or "Your") shall mean an individual or Legal Entity
+  exercising permissions granted by this License.
+
+  "Source" form shall mean the preferred form for making modifications,
+  including but not limited to software source code, documentation
+  source, and configuration files.
+
+  "Object" form shall mean any form resulting from mechanical
+  transformation or translation of a Source form, including but
+  not limited to compiled object code, generated documentation,
+  and conversions to other media types.
+
+  "Work" shall mean the work of authorship, whether in Source or
+  Object form, made available under the License, as indicated by a
+  copyright notice that is included in or attached to the work
+  (an example is provided in the Appendix below).
+
+  "Derivative Works" shall mean any work, whether in Source or Object
+  form, that is based on (or derived from) the Work and for which the
+  editorial revisions, annotations, elaborations, or other modifications
+  represent, as a whole, an original work of authorship. For the purposes
+  of this License, Derivative Works shall not include works that remain
+  separable from, or merely link (or bind by name) to the interfaces of,
+  the Work and Derivative Works thereof.
+
+  "Contribution" shall mean any work of authorship, including
+  the original version of the Work and any modifications or additions
+  to that Work or Derivative Works thereof, that is intentionally
+  submitted to Licensor for inclusion in the Work by the copyright owner
+  or by an individual or Legal Entity authorized to submit on behalf of
+  the copyright owner. For the purposes of this definition, "submitted"
+  means any form of electronic, verbal, or written communication sent
+  to the Licensor or its representatives, including but not limited to
+  communication on electronic mailing lists, source code control systems,
+  and issue tracking systems that are managed by, or on behalf of, the
+  Licensor for the purpose of discussing and improving the Work, but
+  excluding communication that is conspicuously marked or otherwise
+  designated in writing by the copyright owner as "Not a Contribution."
+
+  "Contributor" shall mean Licensor and any individual or Legal Entity
+  on behalf of whom a Contribution has been received by Licensor and
+  subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+  this License, each Contributor hereby grants to You a perpetual,
+  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+  copyright license to reproduce, prepare Derivative Works of,
+  publicly display, publicly perform, sublicense, and distribute the
+  Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+  this License, each Contributor hereby grants to You a perpetual,
+  

[hbase-connectors] branch master updated: HBASE-20934 Create an hbase-connectors repository; commit new kafka connect here

2018-08-02 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase-connectors.git


The following commit(s) were added to refs/heads/master by this push:
 new 552167c  HBASE-20934 Create an hbase-connectors repository; commit new 
kafka connect here
552167c is described below

commit 552167cedbcd621ed2dda213059b673487412aed
Author: Michael Stack 
AuthorDate: Thu Aug 2 15:56:37 2018 -0700

HBASE-20934 Create an hbase-connectors repository; commit new kafka connect 
here

First cut. No bin to startup the proxy server. TODO.
---
 conf/kafka-route-rules.xml |  45 +++
 hbase-kafka-model/pom.xml  | 220 +++
 .../src/main/avro/HbaseKafkaEvent.avro |  30 ++
 hbase-kafka-proxy/.pom.xml.swp | Bin 0 -> 24576 bytes
 hbase-kafka-proxy/pom.xml  | 258 +
 .../org/apache/hadoop/hbase/kafka/DropRule.java|  29 ++
 .../hadoop/hbase/kafka/DumpToStringListener.java   | 112 ++
 .../hadoop/hbase/kafka/KafkaBridgeConnection.java  | 216 +++
 .../org/apache/hadoop/hbase/kafka/KafkaProxy.java  | 341 +
 .../hadoop/hbase/kafka/KafkaTableForBridge.java| 199 ++
 .../java/org/apache/hadoop/hbase/kafka/Rule.java   | 228 
 .../hadoop/hbase/kafka/TopicRoutingRules.java  | 237 
 .../org/apache/hadoop/hbase/kafka/TopicRule.java   |  41 +++
 .../hadoop/hbase/kafka/ProducerForTesting.java | 142 
 .../apache/hadoop/hbase/kafka/TestDropRule.java| 210 +++
 .../hadoop/hbase/kafka/TestProcessMutations.java   | 114 ++
 .../hadoop/hbase/kafka/TestQualifierMatching.java  |  73 
 .../apache/hadoop/hbase/kafka/TestRouteRules.java  | 218 +++
 pom.xml| 404 +
 19 files changed, 3117 insertions(+)

diff --git a/conf/kafka-route-rules.xml b/conf/kafka-route-rules.xml
new file mode 100644
index 000..4d31ee2
--- /dev/null
+++ b/conf/kafka-route-rules.xml
@@ -0,0 +1,45 @@
+
+
+
+
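The conf/kafka-route-rules.xml added above lost its body in this listing, so only the hunk header survives. Purely as a hypothetical illustration of what a routing-rules file for this proxy might contain, based on nothing more than the rule classes named in this commit (TopicRoutingRules, TopicRule, DropRule) and not on the shipped file, a sketch could look like:

<!-- Hypothetical sketch only: element and attribute names are assumed,
     not copied from the actual conf/kafka-route-rules.xml in this commit. -->
<rules>
  <!-- forward mutations on one table to a named Kafka topic -->
  <rule action="route" table="default:users" topic="hbase-users"/>
  <!-- silently drop mutations that should not be forwarded -->
  <rule action="drop" table="default:scratch"/>
</rules>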
diff --git a/hbase-kafka-model/pom.xml b/hbase-kafka-model/pom.xml
new file mode 100644
index 000..8c497b1
--- /dev/null
+++ b/hbase-kafka-model/pom.xml
@@ -0,0 +1,220 @@
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  
+  4.0.0
+  
+org.apache.hbase
+hbase-connectors
+1.0.0-SNAPSHOT
+..
+  
+
+  hbase-kafka-model
+  Apache HBase - Model Objects for Kafka Proxy
+  Model objects that represent HBase mutations
+
+  
+
+  org.apache.avro
+  avro
+
+  
+
+  
+${project.basedir}/target/java
+
+  
+src/main/resources/
+
+  hbase-default.xml
+
+  
+
+
+  
+src/test/resources/META-INF/
+META-INF/
+
+  NOTICE
+
+true
+  
+
+
+  
+org.apache.avro
+avro-maven-plugin
+${avro.version}
+
+  
+generate-sources
+
+  schema
+
+
+  
${project.basedir}/src/main/avro/
+  
${project.basedir}/target/java/
+  
+**/*.avro
+  
+
+  
+
+  
+
+
+  
+org.apache.maven.plugins
+maven-remote-resources-plugin
+  
+  
+org.apache.maven.plugins
+maven-site-plugin
+
+  true
+
+  
+  
+
+maven-assembly-plugin
+
+  true
+
+  
+  
+maven-surefire-plugin
+
+  
+
+  listener
+  
org.apache.hadoop.hbase.ResourceCheckerJUnitListener
+
+  
+
+  
+  
+  
+org.apache.maven.plugins
+maven-source-plugin
+
+  
+hbase-default.xml
+  
+
+  
+
+
+  
+
+
+  org.eclipse.m2e
+  lifecycle-mapping
+  1.0.0
+  
+
+  
+
+  
+org.apache.maven.plugins
+maven-antrun-plugin
+[${maven.antrun.version}]
+
+  run
+
+  
+  
+
+  
+
+
+  
+org.apache.maven.plugins
+maven-dependency-plugin
+[2.8,)
+
+  build-classpath
+
+  
+  
+
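The hbase-kafka-model pom above wires the avro-maven-plugin into the generate-sources phase with the schema goal, reading *.avro files from ${project.basedir}/src/main/avro/ and writing generated Java under ${project.basedir}/target/java/. Because the XML tags were stripped in the listing, here is a hedged reconstruction of roughly what that plugin block looks like; the tag layout is inferred from the visible values rather than copied verbatim from the original file:

<!-- Approximate reconstruction; layout inferred, values taken from the listing above. -->
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>${avro.version}</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
      <configuration>
        <sourceDirectory>${project.basedir}/src/main/avro/</sourceDirectory>
        <outputDirectory>${project.basedir}/target/java/</outputDirectory>
        <includes>
          <include>**/*.avro</include>
        </includes>
      </configuration>
    </execution>
  </executions>
</plugin>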
+   

[16/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.StatisticsThread.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.StatisticsThread.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.StatisticsThread.html
index bd3c59e..21e240a 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.StatisticsThread.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.StatisticsThread.html
@@ -33,62 +33,62 @@
 025import java.io.FileNotFoundException;
 026import java.io.FileOutputStream;
 027import java.io.IOException;
-028import java.io.ObjectInputStream;
-029import java.io.ObjectOutputStream;
-030import java.io.Serializable;
-031import java.nio.ByteBuffer;
-032import java.util.ArrayList;
-033import java.util.Comparator;
-034import java.util.HashSet;
-035import java.util.Iterator;
-036import java.util.List;
-037import java.util.Map;
-038import java.util.NavigableSet;
-039import java.util.PriorityQueue;
-040import java.util.Set;
-041import 
java.util.concurrent.ArrayBlockingQueue;
-042import 
java.util.concurrent.BlockingQueue;
-043import 
java.util.concurrent.ConcurrentHashMap;
-044import 
java.util.concurrent.ConcurrentMap;
-045import 
java.util.concurrent.ConcurrentSkipListSet;
-046import java.util.concurrent.Executors;
-047import 
java.util.concurrent.ScheduledExecutorService;
-048import java.util.concurrent.TimeUnit;
-049import 
java.util.concurrent.atomic.AtomicInteger;
-050import 
java.util.concurrent.atomic.AtomicLong;
-051import 
java.util.concurrent.atomic.LongAdder;
-052import java.util.concurrent.locks.Lock;
-053import 
java.util.concurrent.locks.ReentrantLock;
-054import 
java.util.concurrent.locks.ReentrantReadWriteLock;
-055import 
org.apache.hadoop.conf.Configuration;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.io.HeapSize;
-058import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-059import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-060import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;
-061import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;
-062import 
org.apache.hadoop.hbase.io.hfile.BlockType;
-063import 
org.apache.hadoop.hbase.io.hfile.CacheStats;
-064import 
org.apache.hadoop.hbase.io.hfile.Cacheable;
-065import 
org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
-066import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
-067import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
-068import 
org.apache.hadoop.hbase.io.hfile.CachedBlock;
-069import 
org.apache.hadoop.hbase.io.hfile.HFileBlock;
-070import 
org.apache.hadoop.hbase.nio.ByteBuff;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.HasThread;
-073import 
org.apache.hadoop.hbase.util.IdReadWriteLock;
-074import 
org.apache.hadoop.hbase.util.IdReadWriteLock.ReferenceType;
-075import 
org.apache.hadoop.hbase.util.UnsafeAvailChecker;
-076import 
org.apache.hadoop.util.StringUtils;
-077import 
org.apache.yetus.audience.InterfaceAudience;
-078import org.slf4j.Logger;
-079import org.slf4j.LoggerFactory;
-080
-081import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-082import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-083import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+028import java.io.Serializable;
+029import java.nio.ByteBuffer;
+030import java.util.ArrayList;
+031import java.util.Comparator;
+032import java.util.HashSet;
+033import java.util.Iterator;
+034import java.util.List;
+035import java.util.Map;
+036import java.util.NavigableSet;
+037import java.util.PriorityQueue;
+038import java.util.Set;
+039import 
java.util.concurrent.ArrayBlockingQueue;
+040import 
java.util.concurrent.BlockingQueue;
+041import 
java.util.concurrent.ConcurrentHashMap;
+042import 
java.util.concurrent.ConcurrentMap;
+043import 
java.util.concurrent.ConcurrentSkipListSet;
+044import java.util.concurrent.Executors;
+045import 
java.util.concurrent.ScheduledExecutorService;
+046import java.util.concurrent.TimeUnit;
+047import 
java.util.concurrent.atomic.AtomicInteger;
+048import 
java.util.concurrent.atomic.AtomicLong;
+049import 
java.util.concurrent.atomic.LongAdder;
+050import java.util.concurrent.locks.Lock;
+051import 
java.util.concurrent.locks.ReentrantLock;
+052import 
java.util.concurrent.locks.ReentrantReadWriteLock;
+053import 
org.apache.hadoop.conf.Configuration;
+054import 
org.apache.hadoop.hbase.HBaseConfiguration;
+055import 
org.apache.hadoop.hbase.io.HeapSize;
+056import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
+057import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+058import 

[36/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/master/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
index 2d9fc38..a607492 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
@@ -348,11 +348,11 @@
 
 java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true;
 title="class or interface in java.lang">EnumE (implements java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true;
 title="class or interface in java.lang">ComparableT, java.io.https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true;
 title="class or interface in java.io">Serializable)
 
-org.apache.hadoop.hbase.master.MetricsMasterSourceFactoryImpl.FactoryStorage
-org.apache.hadoop.hbase.master.RegionState.State
+org.apache.hadoop.hbase.master.MasterRpcServices.BalanceSwitchMode
 org.apache.hadoop.hbase.master.SplitLogManager.TerminationStatus
+org.apache.hadoop.hbase.master.RegionState.State
 org.apache.hadoop.hbase.master.SplitLogManager.ResubmitDirective
-org.apache.hadoop.hbase.master.MasterRpcServices.BalanceSwitchMode
+org.apache.hadoop.hbase.master.MetricsMasterSourceFactoryImpl.FactoryStorage
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
 
b/devapidocs/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
index c08b2dc..0f18bc4 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
@@ -763,7 +763,7 @@ extends 
 
 setTableStateToDisabled
-protected static void setTableStateToDisabled(MasterProcedureEnv env,
+protected static void setTableStateToDisabled(MasterProcedureEnv env,
                                               TableName tableName)
                                        throws IOException
 Mark table state to Disabled.
@@ -781,7 +781,7 @@ extends 
 
 postDisable
-protected void postDisable(MasterProcedureEnv env,
+protected void postDisable(MasterProcedureEnv env,
     org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.DisableTableState state)
     throws IOException,
            InterruptedException
@@ -802,7 +802,7 @@ extends 
 
 isTraceEnabled
-private Boolean isTraceEnabled()
+private Boolean isTraceEnabled()
 The procedure could be restarted from a different machine. If the variable is null, we need to
 retrieve it.
 
@@ -817,7 +817,7 @@ extends 
 
 runCoprocessorAction
-private void runCoprocessorAction(MasterProcedureEnv env,
+private void runCoprocessorAction(MasterProcedureEnv env,
     org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.DisableTableState state)
     throws IOException,
            InterruptedException

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/master/procedure/package-tree.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/procedure/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/master/procedure/package-tree.html
index 71e02ff..ddea7b8 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/procedure/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/procedure/package-tree.html
@@ -216,10 +216,10 @@
 
 java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true;
 title="class or 

[46/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
index f06b2c1..3212827 100644
--- a/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
+++ b/devapidocs/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
@@ -114,7 +114,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public final class BackupSystemTable
+public final class BackupSystemTable
 extends Object
 implements Closeable
 This class provides an API to access the backup system table.
@@ -973,7 +973,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 LOG
-private static finalorg.slf4j.Logger LOG
+private static finalorg.slf4j.Logger LOG
 
 
 
@@ -982,7 +982,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 tableName
-privateTableName tableName
+privateTableName tableName
 Backup system table (main) name
 
 
@@ -992,7 +992,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 bulkLoadTableName
-private TableName bulkLoadTableName
+private TableName bulkLoadTableName
 Backup system table name for bulk-loaded files. We keep all bulk-loaded file references in a
 separate table because we have to isolate general backup operations (create, merge, etc.) from
 the activity of RegionObserver, which controls the bulk-loading process.
@@ -1005,7 +1005,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 SESSIONS_FAMILY
-static finalbyte[] SESSIONS_FAMILY
+static finalbyte[] SESSIONS_FAMILY
 Stores backup sessions (contexts)
 
 
@@ -1015,7 +1015,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 META_FAMILY
-static finalbyte[] META_FAMILY
+static finalbyte[] META_FAMILY
 Stores other meta
 
 
@@ -1025,7 +1025,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 BULK_LOAD_FAMILY
-static finalbyte[] BULK_LOAD_FAMILY
+static finalbyte[] BULK_LOAD_FAMILY
 
 
 
@@ -1034,7 +1034,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 connection
-private finalConnection connection
+private finalConnection connection
 Connection to HBase cluster, shared among all 
instances
 
 
@@ -1044,7 +1044,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 BACKUP_INFO_PREFIX
-private static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String BACKUP_INFO_PREFIX
+private static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String BACKUP_INFO_PREFIX
 
 See Also:
 Constant
 Field Values
@@ -1057,7 +1057,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 START_CODE_ROW
-private static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String START_CODE_ROW
+private static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String START_CODE_ROW
 
 See Also:
 Constant
 Field Values
@@ -1070,7 +1070,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 ACTIVE_SESSION_ROW
-private static finalbyte[] ACTIVE_SESSION_ROW
+private static finalbyte[] ACTIVE_SESSION_ROW
 
 
 
@@ -1079,7 +1079,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 ACTIVE_SESSION_COL
-private static finalbyte[] ACTIVE_SESSION_COL
+private static finalbyte[] ACTIVE_SESSION_COL
 
 
 
@@ -1088,7 +1088,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 ACTIVE_SESSION_YES
-private static finalbyte[] ACTIVE_SESSION_YES
+private static finalbyte[] ACTIVE_SESSION_YES
 
 
 
@@ -1097,7 +1097,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 ACTIVE_SESSION_NO
-private static finalbyte[] ACTIVE_SESSION_NO
+private static finalbyte[] ACTIVE_SESSION_NO
 
 
 
@@ -1106,7 +1106,7 @@ implements https://docs.oracle.com/javase/8/docs/api/java/io/Closeable.
 
 
 INCR_BACKUP_SET
-private static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String INCR_BACKUP_SET
+private static finalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class 

[48/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/checkstyle-aggregate.html
--
diff --git a/checkstyle-aggregate.html b/checkstyle-aggregate.html
index 0b79690..09486bc 100644
--- a/checkstyle-aggregate.html
+++ b/checkstyle-aggregate.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Checkstyle Results
 
@@ -271,7 +271,7 @@
   
 
 Checkstyle Results
-The following document contains the results of Checkstyle 8.2 (http://checkstyle.sourceforge.net/) with the hbase/checkstyle.xml ruleset.
+The following document contains the results of Checkstyle 8.11 (http://checkstyle.sourceforge.net/) with the hbase/checkstyle.xml ruleset.
 
 Summary
 
@@ -281,10 +281,10 @@
 Warnings
 Errors
 
-3697
+3698
 0
 0
-15626
+15578
 
 Files
 
@@ -434,570 +434,580 @@
 0
 6
 
+org/apache/hadoop/hbase/HBaseIOException.java
+0
+0
+2
+
 org/apache/hadoop/hbase/HBaseTestCase.java
 0
 0
 25
-
+
 org/apache/hadoop/hbase/HBaseTestingUtility.java
 0
 0
-276
-
+275
+
 org/apache/hadoop/hbase/HColumnDescriptor.java
 0
 0
 40
-
+
 org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
 0
 0
 15
-
+
 org/apache/hadoop/hbase/HRegionInfo.java
 0
 0
 59
-
+
 org/apache/hadoop/hbase/HRegionLocation.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/HTableDescriptor.java
 0
 0
 38
-
+
 org/apache/hadoop/hbase/HTestConst.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/HealthChecker.java
 0
 0
 16
-
+
 org/apache/hadoop/hbase/IndividualBytesFieldCell.java
 0
 0
 9
-
+
 org/apache/hadoop/hbase/IntegrationTestBackupRestore.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/IntegrationTestDDLMasterFailover.java
 0
 0
 52
-
+
 org/apache/hadoop/hbase/IntegrationTestIngest.java
 0
 0
 10
-
+
 org/apache/hadoop/hbase/IntegrationTestIngestWithACL.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/IntegrationTestIngestWithEncryption.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/IntegrationTestIngestWithMOB.java
 0
 0
 5
-
+
 org/apache/hadoop/hbase/IntegrationTestIngestWithVisibilityLabels.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/IntegrationTestManyRegions.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/IntegrationTestMetaReplicas.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/IntegrationTestRegionReplicaPerf.java
 0
 0
 11
-
+
 org/apache/hadoop/hbase/IntegrationTestingUtility.java
 0
 0
 5
-
+
 org/apache/hadoop/hbase/KeyValue.java
 0
 0
 117
-
+
 org/apache/hadoop/hbase/KeyValueTestUtil.java
 0
 0
 8
-
+
 org/apache/hadoop/hbase/KeyValueUtil.java
 0
 0
 29
-
+
 org/apache/hadoop/hbase/LocalHBaseCluster.java
 0
 0
 32
-
+
 org/apache/hadoop/hbase/MetaMockingUtil.java
 0
 0
 4
-
+
 org/apache/hadoop/hbase/MetaMutationAnnotation.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/MetaTableAccessor.java
 0
 0
 66
-
+
 org/apache/hadoop/hbase/MiniHBaseCluster.java
 0
 0
 25
-
+
 org/apache/hadoop/hbase/MockRegionServerServices.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/MultithreadedTestUtil.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/NamespaceDescriptor.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/NotAllMetaRegionsOnlineException.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/NotServingRegionException.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/PerformanceEvaluation.java
 0
 0
 39
-
+
 org/apache/hadoop/hbase/PerformanceEvaluationCommons.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/PrivateCellUtil.java
 0
 0
 67
-
+
 org/apache/hadoop/hbase/QosTestHelper.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/RESTApiClusterManager.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/RegionLoad.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/RegionLocations.java
 0
 0
 11
-
+
 org/apache/hadoop/hbase/RegionStateListener.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/ResourceChecker.java
 0
 0
 4
-
+
 org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
 0
 0
 12
-
+
 org/apache/hadoop/hbase/ScheduledChore.java
 0
 0
 5
-
+
 org/apache/hadoop/hbase/Server.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/ServerLoad.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/ServerName.java
 0
 0
 25
-
+
 org/apache/hadoop/hbase/SplitLogCounters.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/SplitLogTask.java
 0
 0
 4
-
+
 org/apache/hadoop/hbase/StripeCompactionsPerformanceEvaluation.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/TableDescriptors.java
 0
 0
 12
-
+
 org/apache/hadoop/hbase/TableInfoMissingException.java
 0
 0
 6
-
+
 org/apache/hadoop/hbase/TableName.java
 0
 0
 17
-
+
 org/apache/hadoop/hbase/TableNotDisabledException.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/TableNotEnabledException.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/TableNotFoundException.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/TagType.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/TestCellUtil.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/TestCheckTestClasses.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/TestClassFinder.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/TestClientClusterStatus.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/TestClientOperationTimeout.java
 0

[37/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/master/MasterRpcServices.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/master/MasterRpcServices.html 
b/devapidocs/org/apache/hadoop/hbase/master/MasterRpcServices.html
index c26e1d4..1ddba78 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/MasterRpcServices.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/MasterRpcServices.html
@@ -1132,7 +1132,7 @@ implements 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.Master
 
 
 deleteColumn
-publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnResponsedeleteColumn(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,
+publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnResponsedeleteColumn(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,

 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnRequestreq)

  throws 
org.apache.hbase.thirdparty.com.google.protobuf.ServiceException
 
@@ -1149,7 +1149,7 @@ implements 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.Master
 
 
 deleteNamespace
-publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceResponsedeleteNamespace(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,
+publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceResponsedeleteNamespace(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,

   
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceRequestrequest)

throws 
org.apache.hbase.thirdparty.com.google.protobuf.ServiceException
 
@@ -1166,7 +1166,7 @@ implements 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.Master
 
 
 deleteSnapshot
-publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteSnapshotResponsedeleteSnapshot(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,
+publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteSnapshotResponsedeleteSnapshot(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,

 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteSnapshotRequestrequest)

  throws 
org.apache.hbase.thirdparty.com.google.protobuf.ServiceException
 Execute Delete Snapshot operation.
@@ -1188,7 +1188,7 @@ implements 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.Master
 
 
 deleteTable
-publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteTableResponsedeleteTable(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,
+publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteTableResponsedeleteTable(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,

   
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteTableRequestrequest)

throws 
org.apache.hbase.thirdparty.com.google.protobuf.ServiceException
 
@@ -1205,7 +1205,7 @@ implements 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.Master
 
 
 truncateTable
-publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableResponsetruncateTable(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,
+publicorg.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableResponsetruncateTable(org.apache.hbase.thirdparty.com.google.protobuf.RpcControllercontroller,

   
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableRequestrequest)

throws 
org.apache.hbase.thirdparty.com.google.protobuf.ServiceException
 
@@ -1222,7 +1222,7 @@ implements 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.Master
 
 
 disableTable

hbase-site git commit: INFRA-10751 Empty commit

2018-08-02 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site 7cf6034ba -> 6dd5afc28


INFRA-10751 Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/6dd5afc2
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/6dd5afc2
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/6dd5afc2

Branch: refs/heads/asf-site
Commit: 6dd5afc2803960b8437b0ef294c767703de7abda
Parents: 7cf6034
Author: jenkins 
Authored: Thu Aug 2 19:51:50 2018 +
Committer: jenkins 
Committed: Thu Aug 2 19:51:50 2018 +

--

--




[41/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html
index dccbeab..7b93965 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":9,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":9,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10};
 var tabs = {65535:["t0","All Methods"],1:["t1","Static 
Methods"],2:["t2","Instance Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -297,134 +297,130 @@ implements DEFAULT_WRITER_THREADS
 
 
-private UniqueIndexMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
-deserialiserMap
-
-
 (package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 EXTRA_FREE_FACTOR_CONFIG_NAME
 
-
+
 private float
 extraFreeFactor
 Free this floating point factor of extra blocks when 
evicting.
 
 
-
+
 private boolean
 freeInProgress
 Volatile boolean to track if free space is in process or 
not
 
 
-
+
 private https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/Lock.html?is-external=true;
 title="class or interface in java.util.concurrent.locks">Lock
 freeSpaceLock
 
-
+
 private https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/LongAdder.html?is-external=true;
 title="class or interface in 
java.util.concurrent.atomic">LongAdder
 heapSize
 
-
+
 (package private) IOEngine
 ioEngine
 
-
+
 private long
 ioErrorStartTime
 
-
+
 private int
 ioErrorsTolerationDuration
 Duration of IO errors tolerated before we disable cache, 1 
min as default
 
 
-
+
 private static org.slf4j.Logger
 LOG
 
-
+
 (package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 MEMORY_FACTOR_CONFIG_NAME
 
-
+
 private float
 memoryFactor
 In-memory bucket size
 
 
-
+
 (package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 MIN_FACTOR_CONFIG_NAME
 
-
+
 private float
 minFactor
 Minimum threshold of cache (when evicting, evict until size 
< min)
 
 
-
+
 (package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 MULTI_FACTOR_CONFIG_NAME
 
-
+
 private float
 multiFactor
 Multiple access bucket size
 
 
-
+
 (package private) IdReadWriteLock
 offsetLock
 A ReentrantReadWriteLock to lock on a particular block 
identified by offset.
 
 
-
+
 private https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 persistencePath
 
-
+
 (package private) https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true;
 title="class or interface in java.util.concurrent">ConcurrentMapBlockCacheKey,BucketCache.RAMQueueEntry
 ramCache
 
-
+
 private https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/LongAdder.html?is-external=true;
 title="class or interface in 
java.util.concurrent.atomic">LongAdder
 realCacheSize
 
-
+
 private https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ScheduledExecutorService.html?is-external=true;
 title="class or interface in 
java.util.concurrent">ScheduledExecutorService
 scheduleThreadPool
 Statistics thread schedule pool (for heavy debugging, could 
remove)
 
 
-
+
 (package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 

[39/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/class-use/UniqueIndexMap.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/class-use/UniqueIndexMap.html
 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/class-use/UniqueIndexMap.html
deleted file mode 100644
index f576e0c..000
--- 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/class-use/UniqueIndexMap.html
+++ /dev/null
@@ -1,193 +0,0 @@
-http://www.w3.org/TR/html4/loose.dtd;>
-
-
-
-
-
-Uses of Class org.apache.hadoop.hbase.io.hfile.bucket.UniqueIndexMap 
(Apache HBase 3.0.0-SNAPSHOT API)
-
-
-
-
-
-
-
-JavaScript is disabled on your browser.
-
-
-
-
-
-Skip navigation links
-
-
-
-
-Overview
-Package
-Class
-Use
-Tree
-Deprecated
-Index
-Help
-
-
-
-
-Prev
-Next
-
-
-Frames
-NoFrames
-
-
-AllClasses
-
-
-
-
-
-
-
-
-
-
-Uses of 
Classorg.apache.hadoop.hbase.io.hfile.bucket.UniqueIndexMap
-
-
-
-
-
-Packages that use UniqueIndexMap
-
-Package
-Description
-
-
-
-org.apache.hadoop.hbase.io.hfile.bucket
-
-Provides BucketCache, an 
implementation of
- BlockCache.
-
-
-
-
-
-
-
-
-
-
-Uses of UniqueIndexMap in 
org.apache.hadoop.hbase.io.hfile.bucket
-
-Fields in org.apache.hadoop.hbase.io.hfile.bucket
 declared as UniqueIndexMap
-
-Modifier and Type
-Field and Description
-
-
-
-private UniqueIndexMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
-BucketCache.deserialiserMap
-
-
-
-
-Methods in org.apache.hadoop.hbase.io.hfile.bucket
 with parameters of type UniqueIndexMap
-
-Modifier and Type
-Method and Description
-
-
-
-protected CacheableDeserializerCacheable
-BucketCache.BucketEntry.deserializerReference(UniqueIndexMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in 
java.lang">IntegerdeserialiserMap)
-
-
-protected void
-BucketCache.BucketEntry.setDeserialiserReference(CacheableDeserializerCacheabledeserializer,
-UniqueIndexMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in 
java.lang">IntegerdeserialiserMap)
-
-
-BucketCache.BucketEntry
-BucketCache.RAMQueueEntry.writeToCache(IOEngineioEngine,
-BucketAllocatorbucketAllocator,
-UniqueIndexMaphttps://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">IntegerdeserialiserMap,
-https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/LongAdder.html?is-external=true;
 title="class or interface in 
java.util.concurrent.atomic">LongAdderrealCacheSize)
-
-
-
-
-
-
-
-
-
-
-
-
-Skip navigation links
-
-
-
-
-Overview
-Package
-Class
-Use
-Tree
-Deprecated
-Index
-Help
-
-
-
-
-Prev
-Next
-
-
-Frames
-NoFrames
-
-
-AllClasses
-
-
-
-
-
-
-
-
-
-Copyright  20072018 https://www.apache.org/;>The Apache Software Foundation. All rights 
reserved.
-
-

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/package-frame.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/package-frame.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/package-frame.html
index 877954c..6518b6c 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/package-frame.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/package-frame.html
@@ -27,13 +27,13 @@
 BucketCache.SharedMemoryBucketEntry
 BucketCache.StatisticsThread
 BucketCacheStats
+BucketProtoUtils
 ByteBufferIOEngine
 CachedEntryQueue
 FileIOEngine
 FileIOEngine.FileReadAccessor
 FileIOEngine.FileWriteAccessor
 FileMmapEngine
-UniqueIndexMap
 UnsafeSharedMemoryBucketEntry
 
 Exceptions

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/package-summary.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/package-summary.html 

[47/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/checkstyle.rss
--
diff --git a/checkstyle.rss b/checkstyle.rss
index 217f86d..e2959b5 100644
--- a/checkstyle.rss
+++ b/checkstyle.rss
@@ -25,8 +25,8 @@ under the License.
 en-us
 2007 - 2018 The Apache Software Foundation
 
-  File: 3697,
- Errors: 15626,
+  File: 3698,
+ Errors: 15578,
  Warnings: 0,
  Infos: 0
   
@@ -3023,7 +3023,7 @@ under the License.
   0
 
 
-  8
+  10
 
   
   
@@ -5137,7 +5137,7 @@ under the License.
   0
 
 
-  12
+  13
 
   
   
@@ -5305,7 +5305,7 @@ under the License.
   0
 
 
-  0
+  1
 
   
   
@@ -6887,7 +6887,7 @@ under the License.
   0
 
 
-  23
+  22
 
   
   
@@ -7965,7 +7965,7 @@ under the License.
   0
 
 
-  15
+  10
 
   
   
@@ -8175,7 +8175,7 @@ under the License.
   0
 
 
-  14
+  13
 
   
   
@@ -8348,20 +8348,6 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.io.hfile.bucket.UniqueIndexMap.java;>org/apache/hadoop/hbase/io/hfile/bucket/UniqueIndexMap.java
-
-
-  0
-
-
-  0
-
-
-  1
-
-  
-  
-
   http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.master.TestRollingRestart.java;>org/apache/hadoop/hbase/master/TestRollingRestart.java
 
 
@@ -9314,6 +9300,20 @@ under the License.
   
   
 
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.io.hfile.bucket.BucketProtoUtils.java;>org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java
+
+
+  0
+
+
+  0
+
+
+  0
+
+  
+  
+
   http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.coprocessor.CoprocessorServiceBackwardCompatiblity.java;>org/apache/hadoop/hbase/coprocessor/CoprocessorServiceBackwardCompatiblity.java
 
 
@@ -9729,7 +9729,7 @@ under the License.
   0
 
 
-  13
+  14
 
   
   
@@ -14522,6 +14522,20 @@ under the License.
   
   
 
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.chaos.actions.RestartActiveNameNodeAction.java;>org/apache/hadoop/hbase/chaos/actions/RestartActiveNameNodeAction.java
+
+
+  0
+
+
+  0
+
+
+  0
+
+  
+  
+
   http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.replication.regionserver.TestSourceFSConfigurationProvider.java;>org/apache/hadoop/hbase/replication/regionserver/TestSourceFSConfigurationProvider.java
 
 
@@ -14923,7 +14937,7 @@ under the License.
   0
 
 
-  5
+  4
 
   
   
@@ -16141,7 +16155,7 @@ under the License.
   0
 
 
-  44
+  43
 
   
   
@@ -17177,7 +17191,7 @@ under the License.
   0
 
 
-  10
+  9
 
   
   
@@ -17471,7 +17485,7 @@ under the License.
   0
 

[49/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/book.html
--
diff --git a/book.html b/book.html
index fc5ffdb..4c27d13 100644
--- a/book.html
+++ b/book.html
@@ -14672,6 +14672,121 @@ See hbase:meta for 
more information on the meta
 
 
 
+
+69.5. MasterProcWAL
+
+HMaster records administrative operations and their running states, such as
+the handling of a crashed server, table creation, and other DDLs, into its own
+WAL file. The WALs are stored under the MasterProcWALs directory. The Master
+WALs are not like RegionServer WALs. Keeping up the Master WAL allows us to
+run a state machine that is resilient across Master failures. For example, if
+an HMaster that was in the middle of creating a table encounters an issue and
+fails, the next active HMaster can take up where the previous one left off and
+carry the operation to completion. Since hbase-2.0.0, a new AssignmentManager
+(a.k.a. AMv2) was introduced, and the HMaster handles region assignment
+operations, server crash processing, balancing, etc., all via AMv2, persisting
+all state and transitions into MasterProcWALs rather than up into ZooKeeper,
+as was done in hbase-1.x.
+
+
+See AMv2 Description for Devs (and the Procedure Framework (Pv2),
+https://issues.apache.org/jira/browse/HBASE-12439, for its basis) if you would
+like to learn more about the new AssignmentManager.
+
+
+69.5.1. Configurations for MasterProcWAL
+
+Here is the list of configurations that affect MasterProcWAL operation.
+You should not have to change the defaults.
+
+
+
+hbase.procedure.store.wal.periodic.roll.msec
+
+
+Description
+Frequency of generating a new WAL
+
+
+Default
+1h (3600000 in msec)
+
+
+
+
+
+
+hbase.procedure.store.wal.roll.threshold
+
+
+Description
+Threshold in size before the WAL rolls. Every time the WAL reaches this size,
+or the above period (1 hour) passes since the last log roll, the HMaster will
+generate a new WAL.
+
+
+Default
+32MB (33554432 in byte)
+
+
+
+
+
+
+hbase.procedure.store.wal.warn.threshold
+
+
+Description
+If the number of WALs goes beyond this threshold, the following message 
should appear in the HMaster log with WARN level when rolling.
+
+
+
+procedure WALs count=xx above the warning threshold 64. check running 
procedures to see if something is stuck.
+
+
+
+Default
+64
+
+
+
+
+
+
+hbase.procedure.store.wal.max.retries.before.roll
+
+
+Description
+Max number of retries when syncing slots (records) to the underlying storage,
+such as HDFS. On every attempt, the following message should appear in the
+HMaster log.
+
+
+
+unable to sync slots, retry=xx
+
+
+
+Default
+3
+
+
+
+
+
+
+hbase.procedure.store.wal.sync.failure.roll.max
+
+
+Description
+After the above 3 retries, the log is rolled and the retry count is reset to 0,
+whereupon a new set of retries starts. This configuration controls the maximum
+number of log-roll attempts upon sync failure. That is, the HMaster is allowed
+to fail to sync 9 times in total. Once that limit is exceeded, the following
+log should appear in the HMaster log.
+
+
+
+Sync slots after log roll failed, abort.
+
+
+
+Default
+3
+
+
+
+
+
+
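To make the property/value pairs above concrete, a minimal hbase-site.xml fragment that simply restates the documented defaults would look like the following (you should not normally need to set any of these):

<!-- Defaults restated from the section above; normally left untouched. -->
<property>
  <name>hbase.procedure.store.wal.periodic.roll.msec</name>
  <value>3600000</value>   <!-- roll a new WAL every hour -->
</property>
<property>
  <name>hbase.procedure.store.wal.roll.threshold</name>
  <value>33554432</value>  <!-- ...or once the current WAL reaches 32MB -->
</property>
<property>
  <name>hbase.procedure.store.wal.warn.threshold</name>
  <value>64</value>        <!-- WARN in the HMaster log above this WAL count -->
</property>
<property>
  <name>hbase.procedure.store.wal.max.retries.before.roll</name>
  <value>3</value>         <!-- sync retries before the log is rolled -->
</property>
<property>
  <name>hbase.procedure.store.wal.sync.failure.roll.max</name>
  <value>3</value>         <!-- rolls allowed on sync failure before abort -->
</property>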
 
 
 
@@ -15320,7 +15435,8 @@ You will likely find references to the HLog in 
documentation tailored to these o
by a short name label (that unfortunately is not always descriptive). You set
the provider in hbase-site.xml, passing the WAL provider short-name as the
value of the hbase.wal.provider property (set the provider for hbase:meta
using the
-hbase.wal.meta_provider property).
+hbase.wal.meta_provider property; otherwise it uses the same provider
+configured by hbase.wal.provider).
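As a short illustration of the paragraph above, selecting a WAL provider (and optionally a separate one for hbase:meta) in hbase-site.xml looks roughly like this; the short-name values shown are examples chosen for illustration, so substitute whichever provider label applies to your deployment:

<property>
  <name>hbase.wal.provider</name>
  <value>asyncfs</value>      <!-- example short-name label -->
</property>
<!-- Optional; if unset, hbase:meta uses the provider from hbase.wal.provider. -->
<property>
  <name>hbase.wal.meta_provider</name>
  <value>filesystem</value>   <!-- example short-name label -->
</property>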
 
 
 
@@ -40976,7 +41092,7 @@ 
org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/
 
 
 Version 3.0.0-SNAPSHOT
-Last updated 2018-08-01 14:29:55 UTC
+Last updated 2018-08-02 19:32:10 UTC
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/bulk-loads.html
--
diff --git a/bulk-loads.html b/bulk-loads.html
index 9b3e7dd..e292b41 100644
--- a/bulk-loads.html
+++ b/bulk-loads.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase   
   Bulk Loads in Apache HBase (TM)
@@ -306,7 +306,7 @@ under the License. -->
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-08-01
+  Last Published: 
2018-08-02
 
 
 



[43/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
index 2f0eda0..10fd671 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
@@ -50,7 +50,7 @@ var activeTableTab = "activeTableTab";
 
 
 PrevClass
-NextClass
+NextClass
 
 
 Frames
@@ -186,40 +186,44 @@ implements Class and Description
 
 
+static class
+HFileBlock.BlockDeserializer
+
+
 (package private) static interface
 HFileBlock.BlockIterator
 Iterator for HFileBlocks.
 
 
-
+
 (package private) static interface
 HFileBlock.BlockWritable
 Something that can be written into a block.
 
 
-
+
 (package private) static interface
 HFileBlock.FSReader
 An HFile block reader with iteration ability.
 
 
-
+
 (package private) static class
 HFileBlock.FSReaderImpl
 Reads version 2 HFile blocks from the filesystem.
 
 
-
+
 (package private) static class
 HFileBlock.Header
 
-
+
 private static class
 HFileBlock.PrefetchedHeader
 Data-structure to use caching the header of the NEXT 
block.
 
 
-
+
 (package private) static class
 HFileBlock.Writer
 Unified version 2 HFile block 
writer.
@@ -248,7 +252,7 @@ implements Field and Description
 
 
-(package private) static CacheableDeserializerCacheable
+static CacheableDeserializerCacheable
 BLOCK_DESERIALIZER
 Used deserializing blocks from Cache.
 
@@ -968,7 +972,7 @@ implements 
 
 BLOCK_DESERIALIZER
-static finalCacheableDeserializerCacheable BLOCK_DESERIALIZER
+public static finalCacheableDeserializerCacheable BLOCK_DESERIALIZER
 Used deserializing blocks from Cache.
 
  
@@ -982,7 +986,7 @@ implements See Also:
-#serialize(ByteBuffer)
+serialize(ByteBuffer,
 boolean)
 
 
 
@@ -992,7 +996,7 @@ implements 
 
 DESERIALIZER_IDENTIFIER
-private static finalint DESERIALIZER_IDENTIFIER
+private static finalint DESERIALIZER_IDENTIFIER
 
 
 
@@ -1009,7 +1013,7 @@ implements 
 
 HFileBlock
-privateHFileBlock(HFileBlockthat)
+privateHFileBlock(HFileBlockthat)
 Copy constructor. Creates a shallow copy of 
that's buffer.
 
 
@@ -1019,7 +1023,7 @@ implements 
 
 HFileBlock
-privateHFileBlock(HFileBlockthat,
+privateHFileBlock(HFileBlockthat,
booleanbufCopy)
 Copy constructor. Creates a shallow/deep copy of 
that's buffer as per the boolean
  param.
@@ -1031,7 +1035,7 @@ implements 
 
 HFileBlock
-publicHFileBlock(BlockTypeblockType,
+publicHFileBlock(BlockTypeblockType,
   intonDiskSizeWithoutHeader,
   intuncompressedSizeWithoutHeader,
   longprevBlockOffset,
@@ -1068,7 +1072,7 @@ implements 
 
 HFileBlock
-HFileBlock(ByteBuffbuf,
+HFileBlock(ByteBuffbuf,
booleanusesHBaseChecksum,
Cacheable.MemoryTypememType,
longoffset,
@@ -1101,7 +1105,7 @@ implements 
 
 init
-privatevoidinit(BlockTypeblockType,
+privatevoidinit(BlockTypeblockType,
   intonDiskSizeWithoutHeader,
   intuncompressedSizeWithoutHeader,
   longprevBlockOffset,
@@ -1118,7 +1122,7 @@ implements 
 
 getOnDiskSizeWithHeader
-private staticintgetOnDiskSizeWithHeader(https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBufferheaderBuf,
+private staticintgetOnDiskSizeWithHeader(https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBufferheaderBuf,
booleanverifyChecksum)
 Parse total on disk size including header and 
checksum.
 
@@ -1136,7 +1140,7 @@ implements 
 
 getNextBlockOnDiskSize
-intgetNextBlockOnDiskSize()
+intgetNextBlockOnDiskSize()
 
 Returns:
 the on-disk size of the next block (including the header size and any 
checksums if
@@ -1151,7 +1155,7 @@ implements 
 
 getBlockType
-publicBlockTypegetBlockType()
+publicBlockTypegetBlockType()
 
 Specified by:
 getBlockTypein
 interfaceCacheable
@@ -1166,7 +1170,7 @@ implements 
 
 getDataBlockEncodingId
-shortgetDataBlockEncodingId()
+shortgetDataBlockEncodingId()
 
 Returns:
 get data block encoding id that was used to encode this block
@@ -1179,7 +1183,7 @@ implements 
 
 getOnDiskSizeWithHeader
-publicintgetOnDiskSizeWithHeader()
+publicintgetOnDiskSizeWithHeader()
 
 Returns:
 the on-disk size of header + data part + checksum.
@@ -1192,7 +1196,7 @@ implements 
 
 getOnDiskSizeWithoutHeader
-intgetOnDiskSizeWithoutHeader()
+intgetOnDiskSizeWithoutHeader()
 
 Returns:
 the on-disk size of the data part + checksum (header excluded).
@@ -1205,7 +1209,7 @@ implements 
 
 getUncompressedSizeWithoutHeader
-intgetUncompressedSizeWithoutHeader()
+intgetUncompressedSizeWithoutHeader()
 
 Returns:
 the 

[04/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testdevapidocs/allclasses-frame.html
--
diff --git a/testdevapidocs/allclasses-frame.html 
b/testdevapidocs/allclasses-frame.html
index 1b24e50..b9a38d0 100644
--- a/testdevapidocs/allclasses-frame.html
+++ b/testdevapidocs/allclasses-frame.html
@@ -490,6 +490,7 @@
 RESTApiClusterManager.Service
 RestartActionBaseAction
 RestartActiveMasterAction
+RestartActiveNameNodeAction
 RestartMetaTest
 RestartRandomDataNodeAction
 RestartRandomRsAction

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testdevapidocs/allclasses-noframe.html
--
diff --git a/testdevapidocs/allclasses-noframe.html 
b/testdevapidocs/allclasses-noframe.html
index 7b96d5d..8092c7f 100644
--- a/testdevapidocs/allclasses-noframe.html
+++ b/testdevapidocs/allclasses-noframe.html
@@ -490,6 +490,7 @@
 RESTApiClusterManager.Service
 RestartActionBaseAction
 RestartActiveMasterAction
+RestartActiveNameNodeAction
 RestartMetaTest
 RestartRandomDataNodeAction
 RestartRandomRsAction

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testdevapidocs/constant-values.html
--
diff --git a/testdevapidocs/constant-values.html 
b/testdevapidocs/constant-values.html
index 92fdfe5..6491347 100644
--- a/testdevapidocs/constant-values.html
+++ b/testdevapidocs/constant-values.html
@@ -2220,6 +2220,20 @@
 "hbase.chaosmonkey.action.killmastertimeout"
 
 
+
+
+protectedstaticfinallong
+KILL_NAMENODE_TIMEOUT_DEFAULT
+6L
+
+
+
+
+publicstaticfinalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+KILL_NAMENODE_TIMEOUT_KEY
+"hbase.chaosmonkey.action.killnamenodetimeout"
+
+
 
 
 protectedstaticfinallong
@@ -2276,6 +2290,20 @@
 "hbase.chaosmonkey.action.startmastertimeout"
 
 
+
+
+protectedstaticfinallong
+START_NAMENODE_TIMEOUT_DEFAULT
+6L
+
+
+
+
+publicstaticfinalhttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+START_NAMENODE_TIMEOUT_KEY
+"hbase.chaosmonkey.action.startnamenodetimeout"
+
+
 
 
 protectedstaticfinallong
@@ -2327,6 +2355,39 @@
 
 
 
+org.apache.hadoop.hbase.chaos.actions.RestartActiveNameNodeAction
+
+Modifier and Type
+Constant Field
+Value
+
+
+
+
+
+private static final String
+ACTIVE_NN_LOCK_NAME
+"ActiveStandbyElectorLock"
+
+
+
+
+private static final String
+ZK_PARENT_ZNODE_DEFAULT
+"/hadoop-ha"
+
+
+
+
+private static final String
+ZK_PARENT_ZNODE_KEY
+"ha.zookeeper.parent-znode"
+
+
+
+
+
+
 org.apache.hadoop.hbase.chaos.actions.SplitAllRegionOfTableAction
 
 Modifier and Type

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testdevapidocs/index-all.html
--
diff --git a/testdevapidocs/index-all.html b/testdevapidocs/index-all.html
index 8c8b1c3..41763ca 100644
--- a/testdevapidocs/index-all.html
+++ b/testdevapidocs/index-all.html
@@ -498,6 +498,8 @@
 
 activateFailure
 - Static variable in class org.apache.hadoop.hbase.regionserver.wal.InstrumentedLogWriter
 
+ACTIVE_NN_LOCK_NAME
 - Static variable in class org.apache.hadoop.hbase.chaos.actions.RestartActiveNameNodeAction
+
 activeMasterManager
 - Variable in class org.apache.hadoop.hbase.master.TestActiveMasterManager.DummyMaster
 
 ACTOR_PATTERN
 - Static variable in class org.apache.hadoop.hbase.mapred.TestTableMapReduceUtil
@@ -22257,6 +22259,10 @@
 
 KILL_MASTER_TIMEOUT_KEY
 - Static variable in class org.apache.hadoop.hbase.chaos.actions.Action
 
+KILL_NAMENODE_TIMEOUT_DEFAULT
 - Static variable in class org.apache.hadoop.hbase.chaos.actions.Action
+
+KILL_NAMENODE_TIMEOUT_KEY
 - Static variable in class org.apache.hadoop.hbase.chaos.actions.Action
+
 KILL_RS_TIMEOUT_DEFAULT
 - Static variable in class org.apache.hadoop.hbase.chaos.actions.Action
 
 KILL_RS_TIMEOUT_KEY
 - Static variable in class org.apache.hadoop.hbase.chaos.actions.Action
@@ -22319,6 +22325,19 @@
 
 killMetaRs
 - Variable in class org.apache.hadoop.hbase.chaos.factories.UnbalanceMonkeyFactory
 
+killNameNode(ServerName)
 - Method in class org.apache.hadoop.hbase.chaos.actions.Action
+
+killNameNode(ServerName)
 - Method in class org.apache.hadoop.hbase.DistributedHBaseCluster
+
+killNameNode(ServerName)
 - Method in class org.apache.hadoop.hbase.HBaseCluster
+
+Kills the namenode process if this is a distributed cluster, otherwise, this causes master to exit doing basic clean up only.
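
The two new chaos-monkey timeout keys and the RestartActiveNameNodeAction constants listed above are plain configuration strings. A hedged sketch of setting them programmatically before assembling a chaos monkey (the numeric values below are arbitrary examples, not the shipped defaults, which live in Action.java):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch only: uses the key strings from the constant-values table above.
public final class NameNodeChaosConfigSketch {
  public static Configuration example() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.chaosmonkey.action.killnamenodetimeout", 60_000L);
    conf.setLong("hbase.chaosmonkey.action.startnamenodetimeout", 60_000L);
    // RestartActiveNameNodeAction locates the active NN under this HA parent znode.
    conf.set("ha.zookeeper.parent-znode", "/hadoop-ha");
    return conf;
  }
}
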

[08/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.html
--
diff --git a/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.html
index d2d8da1..5bbbf0c 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.html
@@ -90,391 +90,392 @@
 082  static final String 
DEFAULT_WAL_PROVIDER = Providers.defaultProvider.name();
 083
 084  public static final String 
META_WAL_PROVIDER = "hbase.wal.meta_provider";
-085  static final String 
DEFAULT_META_WAL_PROVIDER = Providers.defaultProvider.name();
-086
-087  final String factoryId;
-088  private final WALProvider provider;
-089  // The meta updates are written to a 
different wal. If this
-090  // regionserver holds meta regions, 
then this ref will be non-null.
-091  // lazily intialized; most 
RegionServers don't deal with META
-092  private final 
AtomicReferenceWALProvider metaProvider = new 
AtomicReference();
-093
-094  /**
-095   * Configuration-specified WAL Reader 
used when a custom reader is requested
-096   */
-097  private final Class? extends 
AbstractFSWALProvider.Reader logReaderClass;
-098
-099  /**
-100   * How long to attempt opening 
in-recovery wals
-101   */
-102  private final int timeoutMillis;
-103
-104  private final Configuration conf;
-105
-106  // Used for the singleton WALFactory, 
see below.
-107  private WALFactory(Configuration conf) 
{
-108// this code is duplicated here so we 
can keep our members final.
-109// until we've moved reader/writer 
construction down into providers, this initialization must
-110// happen prior to provider 
initialization, in case they need to instantiate a reader/writer.
-111timeoutMillis = 
conf.getInt("hbase.hlog.open.timeout", 30);
-112/* TODO Both of these are probably 
specific to the fs wal provider */
-113logReaderClass = 
conf.getClass("hbase.regionserver.hlog.reader.impl", ProtobufLogReader.class,
-114  
AbstractFSWALProvider.Reader.class);
-115this.conf = conf;
-116// end required early 
initialization
-117
-118// this instance can't create wals, 
just reader/writers.
-119provider = null;
-120factoryId = SINGLETON_ID;
-121  }
-122
-123  @VisibleForTesting
-124  public Class? extends 
WALProvider getProviderClass(String key, String defaultValue) {
-125try {
-126  Providers provider = 
Providers.valueOf(conf.get(key, defaultValue));
-127  if (provider != 
Providers.defaultProvider) {
-128// User gives a wal provider 
explicitly, just use that one
-129return provider.clazz;
-130  }
-131  // AsyncFSWAL has better 
performance in most cases, and also uses less resources, we will try
-132  // to use it if possible. But it 
deeply hacks into the internal of DFSClient so will be easily
-133  // broken when upgrading hadoop. If 
it is broken, then we fall back to use FSHLog.
-134  if (AsyncFSWALProvider.load()) {
-135return 
AsyncFSWALProvider.class;
-136  } else {
-137return FSHLogProvider.class;
-138  }
-139} catch (IllegalArgumentException 
exception) {
-140  // Fall back to them specifying a 
class name
-141  // Note that the passed default 
class shouldn't actually be used, since the above only fails
-142  // when there is a config value 
present.
-143  return conf.getClass(key, 
Providers.defaultProvider.clazz, WALProvider.class);
-144}
-145  }
-146
-147  static WALProvider 
createProvider(Class? extends WALProvider clazz) throws IOException {
-148LOG.info("Instantiating WALProvider 
of type {}", clazz);
-149try {
-150  return 
clazz.getDeclaredConstructor().newInstance();
-151} catch (Exception e) {
-152  LOG.error("couldn't set up 
WALProvider, the configured class is " + clazz);
-153  LOG.debug("Exception details for 
failure to load WALProvider.", e);
-154  throw new IOException("couldn't set 
up WALProvider", e);
-155}
-156  }
-157
-158  /**
-159   * @param conf must not be null, will 
keep a reference to read params in later reader/writer
-160   *  instances.
-161   * @param factoryId a unique identifier 
for this factory. used i.e. by filesystem implementations
-162   *  to make a directory
-163   */
-164  public WALFactory(Configuration conf, 
String factoryId) throws IOException {
-165// default 
enableSyncReplicationWALProvider is true, only disable 
SyncReplicationWALProvider
-166// for HMaster or HRegionServer which 
take system table only. See HBASE-1
-167this(conf, factoryId, true);
-168  }
-169
-170  /**
-171   * @param conf must not be null, will 
keep a reference to read params in later reader/writer
-172   *  instances.
-173   * @param factoryId a unique identifier 
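
The getProviderClass() logic above prefers an explicitly configured provider, otherwise tries AsyncFSWALProvider when it loads, and finally falls back to FSHLogProvider. A hedged sketch of pinning the provider explicitly; the provider name "filesystem" is an assumption to check against the Providers enum:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.wal.WALFactory;

// Sketch only: with an explicit provider the "user gives a wal provider" branch
// in getProviderClass() is taken and AsyncFSWALProvider.load() is never probed.
public final class PinWalProviderSketch {
  public static Configuration example() {
    Configuration conf = HBaseConfiguration.create();
    conf.set(WALFactory.WAL_PROVIDER, "filesystem");       // assumed enum name for the FSHLog provider
    conf.set(WALFactory.META_WAL_PROVIDER, "filesystem");  // the meta WAL provider can be pinned separately
    return conf;
  }
}
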

[19/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntryGroup.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntryGroup.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntryGroup.html
index bd3c59e..21e240a 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntryGroup.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntryGroup.html
@@ -33,62 +33,62 @@
 025import java.io.FileNotFoundException;
 026import java.io.FileOutputStream;
 027import java.io.IOException;
-028import java.io.ObjectInputStream;
-029import java.io.ObjectOutputStream;
-030import java.io.Serializable;
-031import java.nio.ByteBuffer;
-032import java.util.ArrayList;
-033import java.util.Comparator;
-034import java.util.HashSet;
-035import java.util.Iterator;
-036import java.util.List;
-037import java.util.Map;
-038import java.util.NavigableSet;
-039import java.util.PriorityQueue;
-040import java.util.Set;
-041import 
java.util.concurrent.ArrayBlockingQueue;
-042import 
java.util.concurrent.BlockingQueue;
-043import 
java.util.concurrent.ConcurrentHashMap;
-044import 
java.util.concurrent.ConcurrentMap;
-045import 
java.util.concurrent.ConcurrentSkipListSet;
-046import java.util.concurrent.Executors;
-047import 
java.util.concurrent.ScheduledExecutorService;
-048import java.util.concurrent.TimeUnit;
-049import 
java.util.concurrent.atomic.AtomicInteger;
-050import 
java.util.concurrent.atomic.AtomicLong;
-051import 
java.util.concurrent.atomic.LongAdder;
-052import java.util.concurrent.locks.Lock;
-053import 
java.util.concurrent.locks.ReentrantLock;
-054import 
java.util.concurrent.locks.ReentrantReadWriteLock;
-055import 
org.apache.hadoop.conf.Configuration;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.io.HeapSize;
-058import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-059import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-060import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;
-061import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;
-062import 
org.apache.hadoop.hbase.io.hfile.BlockType;
-063import 
org.apache.hadoop.hbase.io.hfile.CacheStats;
-064import 
org.apache.hadoop.hbase.io.hfile.Cacheable;
-065import 
org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
-066import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
-067import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
-068import 
org.apache.hadoop.hbase.io.hfile.CachedBlock;
-069import 
org.apache.hadoop.hbase.io.hfile.HFileBlock;
-070import 
org.apache.hadoop.hbase.nio.ByteBuff;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.HasThread;
-073import 
org.apache.hadoop.hbase.util.IdReadWriteLock;
-074import 
org.apache.hadoop.hbase.util.IdReadWriteLock.ReferenceType;
-075import 
org.apache.hadoop.hbase.util.UnsafeAvailChecker;
-076import 
org.apache.hadoop.util.StringUtils;
-077import 
org.apache.yetus.audience.InterfaceAudience;
-078import org.slf4j.Logger;
-079import org.slf4j.LoggerFactory;
-080
-081import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-082import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-083import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+028import java.io.Serializable;
+029import java.nio.ByteBuffer;
+030import java.util.ArrayList;
+031import java.util.Comparator;
+032import java.util.HashSet;
+033import java.util.Iterator;
+034import java.util.List;
+035import java.util.Map;
+036import java.util.NavigableSet;
+037import java.util.PriorityQueue;
+038import java.util.Set;
+039import 
java.util.concurrent.ArrayBlockingQueue;
+040import 
java.util.concurrent.BlockingQueue;
+041import 
java.util.concurrent.ConcurrentHashMap;
+042import 
java.util.concurrent.ConcurrentMap;
+043import 
java.util.concurrent.ConcurrentSkipListSet;
+044import java.util.concurrent.Executors;
+045import 
java.util.concurrent.ScheduledExecutorService;
+046import java.util.concurrent.TimeUnit;
+047import 
java.util.concurrent.atomic.AtomicInteger;
+048import 
java.util.concurrent.atomic.AtomicLong;
+049import 
java.util.concurrent.atomic.LongAdder;
+050import java.util.concurrent.locks.Lock;
+051import 
java.util.concurrent.locks.ReentrantLock;
+052import 
java.util.concurrent.locks.ReentrantReadWriteLock;
+053import 
org.apache.hadoop.conf.Configuration;
+054import 
org.apache.hadoop.hbase.HBaseConfiguration;
+055import 
org.apache.hadoop.hbase.io.HeapSize;
+056import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
+057import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+058import 

[27/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html
index b7b4236..3d1edb3 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReader.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata!  + = See note on 
BLOCK_METADATA_SPACE above.
 252   * ++
 253   * /code
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, 
boolean)
 255   */
-256  static final 
CacheableDeserializerCacheable BLOCK_DESERIALIZER =
-257  new 
CacheableDeserializerCacheable() {
-258@Override
-259public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260throws IOException {
-261  // The buf has the file block 
followed by block metadata.
-262  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
-263  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
-264  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
-265  ByteBuff newByteBuff;
-266  if (reuse) {
-267newByteBuff = buf.slice();
-268  } else {
-269int len = buf.limit();
-270newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
-271newByteBuff.put(0, buf, 
buf.position(), len);
-272  }
-273  // Read out the 
BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274  buf.position(buf.limit());
-275  buf.limit(buf.limit() + 
HFileBlock.BLOCK_METADATA_SPACE);
-276  boolean usesChecksum = buf.get() == 
(byte) 1;
-277  long offset = buf.getLong();
-278  int nextBlockOnDiskSize = 
buf.getInt();
-279  HFileBlock hFileBlock =
-280  new HFileBlock(newByteBuff, 
usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281  return hFileBlock;
-282}
-283
-284@Override
-285public int 
getDeserialiserIdentifier() {
-286  return DESERIALIZER_IDENTIFIER;
-287}
-288
-289@Override
-290public HFileBlock 
deserialize(ByteBuff b) throws IOException {
-291  // Used only in tests
-292  return deserialize(b, false, 
MemoryType.EXCLUSIVE);
-293}
-294  };
-295
-296  private static final int 
DESERIALIZER_IDENTIFIER;
-297  static {
-298DESERIALIZER_IDENTIFIER =
-299
CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow 
copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a 
shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, 
boolean bufCopy) {
-314init(that.blockType, 
that.onDiskSizeWithoutHeader,
-315
that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316that.offset, 
that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317if (bufCopy) {
-318  this.buf = new 
SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit(;
-319} else {
-320  this.buf = that.buf.duplicate();
-321}
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block 
from the given fields. This constructor
-326   * is used only while writing blocks 
and caching,
-327   * and is sitting in a byte buffer and 
we want to stuff the block into cache.
-328   *
-329   * pTODO: The caller presumes 
no checksumming
-330   * required of this block instance 
since going into cache; checksum already verified on
-331   * underlying block data pulled in from 
filesystem. Is that correct? What if cache is SSD?
+256  public static final 
CacheableDeserializerCacheable BLOCK_DESERIALIZER = new 
BlockDeserializer();
+257
+258  public static final class 
BlockDeserializer implements CacheableDeserializerCacheable {
+259private BlockDeserializer() {
+260}
+261
+262@Override
+263public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264throws IOException {
+265  // The buf has the file block 
followed by block metadata.
+266  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
+267  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
+268  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
+269  ByteBuff newByteBuff;
+270  if (reuse) {
+271newByteBuff = buf.slice();
+272  } else {
+273int len = buf.limit();
+274newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
+275newByteBuff.put(0, buf, 
buf.position(), len);
+276  }
+277  
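
The refactor above replaces the anonymous deserializer with the named public BlockDeserializer class, still published through HFileBlock.BLOCK_DESERIALIZER. A rough caller-side sketch, assuming buf already holds a serialized block followed by its BLOCK_METADATA_SPACE metadata as described in the comments above:

import java.io.IOException;
import org.apache.hadoop.hbase.io.hfile.Cacheable;
import org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
import org.apache.hadoop.hbase.io.hfile.HFileBlock;
import org.apache.hadoop.hbase.nio.ByteBuff;

// Sketch only: callers keep using the shared BLOCK_DESERIALIZER instance through
// the CacheableDeserializer interface; only its concrete class changed.
final class BlockDeserializeSketch {
  static HFileBlock fromCacheBuffer(ByteBuff buf) throws IOException {
    CacheableDeserializer<Cacheable> deserializer = HFileBlock.BLOCK_DESERIALIZER;
    // 'false' means do not reuse the passed buffer; EXCLUSIVE matches the test-only path above.
    return (HFileBlock) deserializer.deserialize(buf, false, MemoryType.EXCLUSIVE);
  }
}
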

[06/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html
--
diff --git a/testapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html 
b/testapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html
index 00c8bf0..1e87652 100644
--- a/testapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html
+++ b/testapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":42,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":10,"i61":10,"i62":10,"i63":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":42,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":10,"i61":10,"i62":10,"i63":10,"i64":10,"i65":10,"i66":10,"i67":10,"i68":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"],32:["t6","Deprecated Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -119,7 +119,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public class MiniHBaseCluster
+public class MiniHBaseCluster
 extends HBaseCluster
 This class creates a single process HBase cluster.
  each server.  The master uses the 'default' FileSystem.  The RegionServers,
@@ -416,38 +416,45 @@ extends 
 void
+killNameNode(ServerName serverName)
+Kills the namenode process if this is a distributed 
cluster, otherwise, this causes master to
+ exit doing basic clean up only.
+
+
+
+void
 killRegionServer(ServerNameserverName)
 Kills the region server process if this is a distributed 
cluster, otherwise
  this causes the region server to exit doing basic clean up only.
 
 
-
+
 void
 killZkNode(ServerNameserverName)
 Kills the zookeeper node process if this is a distributed 
cluster, otherwise,
  this causes master to exit doing basic clean up only.
 
 
-
+
 void
 shutdown()
 Shut down the mini HBase cluster
 
 
-
+
 void
 startDataNode(ServerNameserverName)
 Starts a new datanode on the given hostname or if this is a 
mini/local cluster,
  silently logs warning message.
 
 
-
+
 JVMClusterUtil.MasterThread
 startMaster()
 Starts a master thread running
 
 
-
+
 void
 startMaster(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
intport)
@@ -455,13 +462,20 @@ extends 
+
+void
+startNameNode(ServerName serverName)
+Starts a new namenode on the given hostname or if this is a 
mini/local cluster, silently logs
+ warning message.
+
+
+
 JVMClusterUtil.RegionServerThread
 startRegionServer()
 Starts a region server thread running
 
 
-
+
 void
 startRegionServer(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
  intport)
@@ -469,13 +483,13 @@ extends 
+
 JVMClusterUtil.RegionServerThread
 startRegionServerAndWait(longtimeout)
 Starts a region server thread and waits until its processed 
by master.
 
 
-
+
 void
 startZkNode(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
intport)
@@ -483,120 +497,140 @@ extends 
+
 void
 stopDataNode(ServerNameserverName)
 Stops the datanode if this is a distributed cluster, 
otherwise
  silently logs warning message.
 
 
-
+
 JVMClusterUtil.MasterThread
 stopMaster(intserverNumber)
 Shut down the specified master cleanly
 
 
-
+
 JVMClusterUtil.MasterThread
 stopMaster(intserverNumber,
   booleanshutdownFS)
 Shut down the specified master cleanly
 
 
-
+
 void
 stopMaster(ServerNameserverName)
 Stops the given master, by attempting a gradual stop.
 
 
-
+
+void
+stopNameNode(ServerName serverName)
+Stops the namenode if this is a distributed cluster, 
otherwise silently logs warning message.
+
+
+
 JVMClusterUtil.RegionServerThread
 stopRegionServer(intserverNumber)
 Shut down the specified region server cleanly
 
 
-
+
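
A short, hedged sketch of the new namenode hooks summarized above; nn is a placeholder ServerName, and on a mini/local cluster both calls only log a warning per the javadoc, while on a distributed cluster they act on the real namenode process:

import java.io.IOException;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.ServerName;

// Sketch only: bounce a namenode through the cluster abstraction shown above.
public final class NameNodeBounceSketch {
  public static void bounce(MiniHBaseCluster cluster, ServerName nn) throws IOException {
    cluster.killNameNode(nn);   // kill, or warn-and-skip on mini clusters
    cluster.startNameNode(nn);  // start it again on the same host
  }
}
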
 

[32/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.NewestLogFilter.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.NewestLogFilter.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.NewestLogFilter.html
index ef680de..f919922 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.NewestLogFilter.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.NewestLogFilter.html
@@ -46,120 +46,120 @@
 038import 
org.apache.hadoop.hbase.backup.util.BackupUtils;
 039import 
org.apache.hadoop.hbase.client.Admin;
 040import 
org.apache.hadoop.hbase.client.Connection;
-041import 
org.apache.hadoop.hbase.util.FSUtils;
-042import 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider;
-043import 
org.apache.yetus.audience.InterfaceAudience;
-044import org.slf4j.Logger;
-045import org.slf4j.LoggerFactory;
-046
-047/**
-048 * After a full backup was created, the 
incremental backup will only store the changes made after
-049 * the last full or incremental backup. 
Creating the backup copies the logfiles in .logs and
-050 * .oldlogs since the last backup 
timestamp.
-051 */
-052@InterfaceAudience.Private
-053public class IncrementalBackupManager 
extends BackupManager {
-054  public static final Logger LOG = 
LoggerFactory.getLogger(IncrementalBackupManager.class);
-055
-056  public 
IncrementalBackupManager(Connection conn, Configuration conf) throws 
IOException {
-057super(conn, conf);
-058  }
-059
-060  /**
-061   * Obtain the list of logs that need to 
be copied out for this incremental backup. The list is set
-062   * in BackupInfo.
-063   * @return The new HashMap of RS log 
time stamps after the log roll for this incremental backup.
-064   * @throws IOException exception
-065   */
-066  public HashMapString, Long 
getIncrBackupLogFileMap() throws IOException {
-067ListString logList;
-068HashMapString, Long 
newTimestamps;
-069HashMapString, Long 
previousTimestampMins;
-070
-071String savedStartCode = 
readBackupStartCode();
-072
-073// key: tableName
-074// value: 
RegionServer,PreviousTimeStamp
-075HashMapTableName, 
HashMapString, Long previousTimestampMap = readLogTimestampMap();
-076
-077previousTimestampMins = 
BackupUtils.getRSLogTimestampMins(previousTimestampMap);
-078
-079if (LOG.isDebugEnabled()) {
-080  LOG.debug("StartCode " + 
savedStartCode + "for backupID " + backupInfo.getBackupId());
-081}
-082// get all new log files from .logs 
and .oldlogs after last TS and before new timestamp
-083if (savedStartCode == null || 
previousTimestampMins == null
-084|| 
previousTimestampMins.isEmpty()) {
-085  throw new IOException(
-086  "Cannot read any previous back 
up timestamps from backup system table. "
-087  + "In order to create an 
incremental backup, at least one full backup is needed.");
-088}
-089
-090LOG.info("Execute roll log procedure 
for incremental backup ...");
-091HashMapString, String props = 
new HashMap();
-092props.put("backupRoot", 
backupInfo.getBackupRootDir());
-093
-094try (Admin admin = conn.getAdmin()) 
{
-095  
admin.execProcedure(LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_SIGNATURE,
-096
LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_NAME, props);
-097}
-098newTimestamps = 
readRegionServerLastLogRollResult();
-099
-100logList = 
getLogFilesForNewBackup(previousTimestampMins, newTimestamps, conf, 
savedStartCode);
-101ListWALItem 
logFromSystemTable =
-102
getLogFilesFromBackupSystem(previousTimestampMins, newTimestamps, 
getBackupInfo()
-103.getBackupRootDir());
-104logList = 
excludeAlreadyBackedUpWALs(logList, logFromSystemTable);
-105
backupInfo.setIncrBackupFileList(logList);
-106
-107return newTimestamps;
-108  }
-109
-110  /**
-111   * Get list of WAL files eligible for 
incremental backup.
-112   *
-113   * @return list of WAL files
-114   * @throws IOException if getting the 
list of WAL files fails
-115   */
-116  public ListString 
getIncrBackupLogFileList() throws IOException {
-117ListString logList;
-118HashMapString, Long 
newTimestamps;
-119HashMapString, Long 
previousTimestampMins;
-120
-121String savedStartCode = 
readBackupStartCode();
-122
-123// key: tableName
-124// value: 
RegionServer,PreviousTimeStamp
-125HashMapTableName, 
HashMapString, Long previousTimestampMap = readLogTimestampMap();
-126
-127previousTimestampMins = 
BackupUtils.getRSLogTimestampMins(previousTimestampMap);
-128
-129if (LOG.isDebugEnabled()) {
-130  LOG.debug("StartCode " + 
savedStartCode + "for backupID " + backupInfo.getBackupId());
-131}
-132  
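
A very rough sketch of driving the log-collection path shown above. It assumes a running cluster whose backup system table already contains at least one full backup, and that the surrounding backup client has initialized the backup info; otherwise getIncrBackupLogFileMap() fails as the code above describes.

import java.util.HashMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.backup.impl.IncrementalBackupManager;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch only: rolls the RS logs and collects WAL timestamps for an incremental backup.
public final class IncrementalLogMapSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      IncrementalBackupManager manager = new IncrementalBackupManager(conn, conf);
      HashMap<String, Long> rsLogRollTimestamps = manager.getIncrBackupLogFileMap();
      System.out.println("RS log roll timestamps: " + rsLogRollTimestamps);
    }
  }
}
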

[51/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/7cf6034b
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/7cf6034b
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/7cf6034b

Branch: refs/heads/asf-site
Commit: 7cf6034ba8af6b95a6dda060a311d501f809196c
Parents: a44d796
Author: jenkins 
Authored: Thu Aug 2 19:51:25 2018 +
Committer: jenkins 
Committed: Thu Aug 2 19:51:25 2018 +

--
 acid-semantics.html | 4 +-
 apache_hbase_reference_guide.pdf| 28731 +
 book.html   |   120 +-
 bulk-loads.html | 4 +-
 checkstyle-aggregate.html   | 20154 ++--
 checkstyle.rss  |   118 +-
 coc.html| 4 +-
 dependencies.html   | 4 +-
 dependency-convergence.html | 4 +-
 dependency-info.html| 4 +-
 dependency-management.html  | 4 +-
 devapidocs/allclasses-frame.html| 3 +-
 devapidocs/allclasses-noframe.html  | 3 +-
 devapidocs/constant-values.html |25 +-
 devapidocs/index-all.html   |91 +-
 .../backup/impl/BackupSystemTable.WALItem.html  |18 +-
 .../hbase/backup/impl/BackupSystemTable.html|   290 +-
 .../backup/impl/IncrementalBackupManager.html   |22 +-
 .../class-use/BackupSystemTable.WALItem.html| 4 +-
 .../hbase/backup/master/BackupLogCleaner.html   | 6 +-
 .../hadoop/hbase/backup/package-tree.html   | 2 +-
 .../hadoop/hbase/client/package-tree.html   |26 +-
 .../hadoop/hbase/coprocessor/package-tree.html  | 2 +-
 .../hadoop/hbase/filter/package-tree.html   |10 +-
 .../hbase/io/hfile/CacheableDeserializer.html   | 4 +
 .../hfile/CacheableDeserializerIdManager.html   |65 +-
 .../io/hfile/HFileBlock.BlockDeserializer.html  |   349 +
 .../io/hfile/HFileBlock.BlockIterator.html  |10 +-
 .../io/hfile/HFileBlock.BlockWritable.html  | 6 +-
 .../hbase/io/hfile/HFileBlock.FSReader.html |18 +-
 .../hbase/io/hfile/HFileBlock.FSReaderImpl.html |58 +-
 .../io/hfile/HFileBlock.PrefetchedHeader.html   |12 +-
 .../hbase/io/hfile/HFileBlock.Writer.State.html |12 +-
 .../hbase/io/hfile/HFileBlock.Writer.html   |80 +-
 .../hadoop/hbase/io/hfile/HFileBlock.html   |   130 +-
 .../hfile/bucket/BucketCache.BucketEntry.html   |97 +-
 .../bucket/BucketCache.BucketEntryGroup.html|18 +-
 .../hfile/bucket/BucketCache.RAMQueueEntry.html |26 +-
 .../BucketCache.SharedMemoryBucketEntry.html|22 +-
 .../bucket/BucketCache.StatisticsThread.html| 8 +-
 .../hfile/bucket/BucketCache.WriterThread.html  |16 +-
 .../hbase/io/hfile/bucket/BucketCache.html  |   383 +-
 .../hbase/io/hfile/bucket/BucketCacheStats.html | 4 +-
 .../hbase/io/hfile/bucket/BucketProtoUtils.html |   373 +
 .../io/hfile/bucket/ByteBufferIOEngine.html | 4 +-
 .../hadoop/hbase/io/hfile/bucket/IOEngine.html  | 4 +-
 .../hbase/io/hfile/bucket/UniqueIndexMap.html   |   378 -
 .../bucket/UnsafeSharedMemoryBucketEntry.html   | 6 +-
 .../hfile/bucket/class-use/BucketAllocator.html | 3 +-
 .../class-use/BucketAllocatorException.html | 7 +-
 .../class-use/BucketCache.BucketEntry.html  |18 +-
 .../io/hfile/bucket/class-use/BucketCache.html  |13 +
 .../bucket/class-use/BucketProtoUtils.html  |   125 +
 .../bucket/class-use/CacheFullException.html| 3 +-
 .../io/hfile/bucket/class-use/IOEngine.html | 3 +-
 .../hfile/bucket/class-use/UniqueIndexMap.html  |   193 -
 .../hbase/io/hfile/bucket/package-frame.html| 2 +-
 .../hbase/io/hfile/bucket/package-summary.html  |20 +-
 .../hbase/io/hfile/bucket/package-tree.html | 2 +-
 .../hbase/io/hfile/bucket/package-use.html  | 5 -
 .../hbase/io/hfile/class-use/BlockCacheKey.html |15 +-
 .../hbase/io/hfile/class-use/BlockPriority.html |13 +
 .../hbase/io/hfile/class-use/BlockType.html |38 +
 .../hfile/class-use/Cacheable.MemoryType.html   | 6 +
 .../hbase/io/hfile/class-use/Cacheable.html |14 +-
 .../hfile/class-use/CacheableDeserializer.html  |27 +-
 .../class-use/HFileBlock.BlockDeserializer.html |   125 +
 .../hbase/io/hfile/class-use/HFileBlock.html|10 +
 .../hadoop/hbase/io/hfile/package-frame.html| 1 +
 .../hadoop/hbase/io/hfile/package-summary.html  |71 +-
 .../hadoop/hbase/io/hfile/package-tree.html | 7 +-
 .../hadoop/hbase/io/hfile/package-use.html  |11 +-
 

[35/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/wal/WALFactory.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/wal/WALFactory.html 
b/devapidocs/org/apache/hadoop/hbase/wal/WALFactory.html
index 6f63a2a..7a02d08 100644
--- a/devapidocs/org/apache/hadoop/hbase/wal/WALFactory.html
+++ b/devapidocs/org/apache/hadoop/hbase/wal/WALFactory.html
@@ -175,53 +175,49 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 (package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
-DEFAULT_META_WAL_PROVIDER
-
-
-(package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 DEFAULT_WAL_PROVIDER
 
-
+
 (package private) https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 factoryId
 
-
+
 private static org.slf4j.Logger
 LOG
 
-
+
 private https://docs.oracle.com/javase/8/docs/api/java/lang/Class.html?is-external=true;
 title="class or interface in java.lang">Class? extends AbstractFSWALProvider.Reader
 logReaderClass
 Configuration-specified WAL Reader used when a custom 
reader is requested
 
 
-
+
 static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 META_WAL_PROVIDER
 
-
+
 private https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicReference.html?is-external=true;
 title="class or interface in 
java.util.concurrent.atomic">AtomicReferenceWALProvider
 metaProvider
 
-
+
 private WALProvider
 provider
 
-
+
 private static https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicReference.html?is-external=true;
 title="class or interface in 
java.util.concurrent.atomic">AtomicReferenceWALFactory
 singleton
 
-
+
 private static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SINGLETON_ID
 
-
+
 private int
 timeoutMillis
 How long to attempt opening in-recovery wals
 
 
-
+
 static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 WAL_PROVIDER
 
@@ -360,7 +356,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 getInstance(org.apache.hadoop.conf.Configurationconfiguration)
 
 
-private WALProvider
+(package private) WALProvider
 getMetaProvider()
 
 
@@ -456,22 +452,13 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 
-
-
-
-
-
-DEFAULT_META_WAL_PROVIDER
-static final String DEFAULT_META_WAL_PROVIDER
-
-
 
 
 
 
 
 factoryId
-final String factoryId
+final String factoryId
 
 
 
@@ -480,7 +467,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 provider
-private final WALProvider provider
+private final WALProvider provider
 
 
 
@@ -489,7 +476,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 metaProvider
-private final AtomicReference<WALProvider> metaProvider
+private final AtomicReference<WALProvider> metaProvider
 
 
 
@@ -498,7 +485,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 logReaderClass
-private final Class<? extends AbstractFSWALProvider.Reader> logReaderClass
+private final Class<? extends AbstractFSWALProvider.Reader> logReaderClass
 Configuration-specified WAL Reader used when a custom 
reader is requested
 
 
@@ -508,7 +495,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 timeoutMillis
-private final int timeoutMillis
+private final int timeoutMillis
 How long to attempt opening in-recovery wals
 
 
@@ -518,7 +505,7 @@ extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html
 
 
 conf

[25/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html
index b7b4236..3d1edb3 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Header.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata!  + = See note on 
BLOCK_METADATA_SPACE above.
 252   * ++
 253   * /code
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, 
boolean)
 255   */
-256  static final 
CacheableDeserializerCacheable BLOCK_DESERIALIZER =
-257  new 
CacheableDeserializerCacheable() {
-258@Override
-259public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260throws IOException {
-261  // The buf has the file block 
followed by block metadata.
-262  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
-263  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
-264  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
-265  ByteBuff newByteBuff;
-266  if (reuse) {
-267newByteBuff = buf.slice();
-268  } else {
-269int len = buf.limit();
-270newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
-271newByteBuff.put(0, buf, 
buf.position(), len);
-272  }
-273  // Read out the 
BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274  buf.position(buf.limit());
-275  buf.limit(buf.limit() + 
HFileBlock.BLOCK_METADATA_SPACE);
-276  boolean usesChecksum = buf.get() == 
(byte) 1;
-277  long offset = buf.getLong();
-278  int nextBlockOnDiskSize = 
buf.getInt();
-279  HFileBlock hFileBlock =
-280  new HFileBlock(newByteBuff, 
usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281  return hFileBlock;
-282}
-283
-284@Override
-285public int 
getDeserialiserIdentifier() {
-286  return DESERIALIZER_IDENTIFIER;
-287}
-288
-289@Override
-290public HFileBlock 
deserialize(ByteBuff b) throws IOException {
-291  // Used only in tests
-292  return deserialize(b, false, 
MemoryType.EXCLUSIVE);
-293}
-294  };
-295
-296  private static final int 
DESERIALIZER_IDENTIFIER;
-297  static {
-298DESERIALIZER_IDENTIFIER =
-299
CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow 
copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a 
shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, 
boolean bufCopy) {
-314init(that.blockType, 
that.onDiskSizeWithoutHeader,
-315
that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316that.offset, 
that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317if (bufCopy) {
-318  this.buf = new 
SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit(;
-319} else {
-320  this.buf = that.buf.duplicate();
-321}
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block 
from the given fields. This constructor
-326   * is used only while writing blocks 
and caching,
-327   * and is sitting in a byte buffer and 
we want to stuff the block into cache.
-328   *
-329   * pTODO: The caller presumes 
no checksumming
-330   * required of this block instance 
since going into cache; checksum already verified on
-331   * underlying block data pulled in from 
filesystem. Is that correct? What if cache is SSD?
+256  public static final 
CacheableDeserializerCacheable BLOCK_DESERIALIZER = new 
BlockDeserializer();
+257
+258  public static final class 
BlockDeserializer implements CacheableDeserializerCacheable {
+259private BlockDeserializer() {
+260}
+261
+262@Override
+263public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264throws IOException {
+265  // The buf has the file block 
followed by block metadata.
+266  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
+267  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
+268  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
+269  ByteBuff newByteBuff;
+270  if (reuse) {
+271newByteBuff = buf.slice();
+272  } else {
+273int len = buf.limit();
+274newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
+275newByteBuff.put(0, buf, 
buf.position(), len);
+276  }
+277  // Read 

[17/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
index bd3c59e..21e240a 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
@@ -33,62 +33,62 @@
 025import java.io.FileNotFoundException;
 026import java.io.FileOutputStream;
 027import java.io.IOException;
-028import java.io.ObjectInputStream;
-029import java.io.ObjectOutputStream;
-030import java.io.Serializable;
-031import java.nio.ByteBuffer;
-032import java.util.ArrayList;
-033import java.util.Comparator;
-034import java.util.HashSet;
-035import java.util.Iterator;
-036import java.util.List;
-037import java.util.Map;
-038import java.util.NavigableSet;
-039import java.util.PriorityQueue;
-040import java.util.Set;
-041import 
java.util.concurrent.ArrayBlockingQueue;
-042import 
java.util.concurrent.BlockingQueue;
-043import 
java.util.concurrent.ConcurrentHashMap;
-044import 
java.util.concurrent.ConcurrentMap;
-045import 
java.util.concurrent.ConcurrentSkipListSet;
-046import java.util.concurrent.Executors;
-047import 
java.util.concurrent.ScheduledExecutorService;
-048import java.util.concurrent.TimeUnit;
-049import 
java.util.concurrent.atomic.AtomicInteger;
-050import 
java.util.concurrent.atomic.AtomicLong;
-051import 
java.util.concurrent.atomic.LongAdder;
-052import java.util.concurrent.locks.Lock;
-053import 
java.util.concurrent.locks.ReentrantLock;
-054import 
java.util.concurrent.locks.ReentrantReadWriteLock;
-055import 
org.apache.hadoop.conf.Configuration;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.io.HeapSize;
-058import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-059import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-060import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;
-061import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;
-062import 
org.apache.hadoop.hbase.io.hfile.BlockType;
-063import 
org.apache.hadoop.hbase.io.hfile.CacheStats;
-064import 
org.apache.hadoop.hbase.io.hfile.Cacheable;
-065import 
org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
-066import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
-067import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
-068import 
org.apache.hadoop.hbase.io.hfile.CachedBlock;
-069import 
org.apache.hadoop.hbase.io.hfile.HFileBlock;
-070import 
org.apache.hadoop.hbase.nio.ByteBuff;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.HasThread;
-073import 
org.apache.hadoop.hbase.util.IdReadWriteLock;
-074import 
org.apache.hadoop.hbase.util.IdReadWriteLock.ReferenceType;
-075import 
org.apache.hadoop.hbase.util.UnsafeAvailChecker;
-076import 
org.apache.hadoop.util.StringUtils;
-077import 
org.apache.yetus.audience.InterfaceAudience;
-078import org.slf4j.Logger;
-079import org.slf4j.LoggerFactory;
-080
-081import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-082import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-083import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+028import java.io.Serializable;
+029import java.nio.ByteBuffer;
+030import java.util.ArrayList;
+031import java.util.Comparator;
+032import java.util.HashSet;
+033import java.util.Iterator;
+034import java.util.List;
+035import java.util.Map;
+036import java.util.NavigableSet;
+037import java.util.PriorityQueue;
+038import java.util.Set;
+039import 
java.util.concurrent.ArrayBlockingQueue;
+040import 
java.util.concurrent.BlockingQueue;
+041import 
java.util.concurrent.ConcurrentHashMap;
+042import 
java.util.concurrent.ConcurrentMap;
+043import 
java.util.concurrent.ConcurrentSkipListSet;
+044import java.util.concurrent.Executors;
+045import 
java.util.concurrent.ScheduledExecutorService;
+046import java.util.concurrent.TimeUnit;
+047import 
java.util.concurrent.atomic.AtomicInteger;
+048import 
java.util.concurrent.atomic.AtomicLong;
+049import 
java.util.concurrent.atomic.LongAdder;
+050import java.util.concurrent.locks.Lock;
+051import 
java.util.concurrent.locks.ReentrantLock;
+052import 
java.util.concurrent.locks.ReentrantReadWriteLock;
+053import 
org.apache.hadoop.conf.Configuration;
+054import 
org.apache.hadoop.hbase.HBaseConfiguration;
+055import 
org.apache.hadoop.hbase.io.HeapSize;
+056import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
+057import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;

[01/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site a44d79699 -> 7cf6034ba


http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testdevapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html
--
diff --git a/testdevapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html 
b/testdevapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html
index 88d8c36..996b13a 100644
--- a/testdevapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html
+++ b/testdevapidocs/org/apache/hadoop/hbase/MiniHBaseCluster.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":42,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":10,"i61":10,"i62":10,"i63":10,"i64":10,"i65":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":42,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":10,"i61":10,"i62":10,"i63":10,"i64":10,"i65":10,"i66":10,"i67":10,"i68":10,"i69":10,"i70":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"],32:["t6","Deprecated Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -119,7 +119,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public class MiniHBaseCluster
+public class MiniHBaseCluster
 extends HBaseCluster
 This class creates a single process HBase cluster.
  each server.  The master uses the 'default' FileSystem.  The RegionServers,
@@ -463,38 +463,45 @@ extends 
 void
+killNameNode(org.apache.hadoop.hbase.ServerName serverName)
+Kills the namenode process if this is a distributed 
cluster, otherwise, this causes master to
+ exit doing basic clean up only.
+
+
+
+void
 killRegionServer(org.apache.hadoop.hbase.ServerNameserverName)
 Kills the region server process if this is a distributed 
cluster, otherwise
  this causes the region server to exit doing basic clean up only.
 
 
-
+
 void
 killZkNode(org.apache.hadoop.hbase.ServerNameserverName)
 Kills the zookeeper node process if this is a distributed 
cluster, otherwise,
  this causes master to exit doing basic clean up only.
 
 
-
+
 void
 shutdown()
 Shut down the mini HBase cluster
 
 
-
+
 void
 startDataNode(org.apache.hadoop.hbase.ServerNameserverName)
 Starts a new datanode on the given hostname or if this is a 
mini/local cluster,
  silently logs warning message.
 
 
-
+
 org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread
 startMaster()
 Starts a master thread running
 
 
-
+
 void
 startMaster(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
intport)
@@ -502,13 +509,20 @@ extends 
+
+void
+startNameNode(org.apache.hadoop.hbase.ServerName serverName)
+Starts a new namenode on the given hostname or if this is a 
mini/local cluster, silently logs
+ warning message.
+
+
+
 org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread
 startRegionServer()
 Starts a region server thread running
 
 
-
+
 void
 startRegionServer(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
  intport)
@@ -516,13 +530,13 @@ extends 
+
 org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread
 startRegionServerAndWait(longtimeout)
 Starts a region server thread and waits until its processed 
by master.
 
 
-
+
 void
 startZkNode(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
intport)
@@ -530,120 +544,140 @@ extends 
+
 void
 stopDataNode(org.apache.hadoop.hbase.ServerNameserverName)
 Stops the datanode if this is a distributed cluster, 
otherwise
  silently logs warning message.
 
 
-
+
 org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread
 stopMaster(intserverNumber)
 Shut down the specified master cleanly
 
 
-
+
 org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread
stopMaster(int serverNumber,
   boolean shutdownFS)
Shut down the specified master cleanly

[45/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.html 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.html
index 6fcccaf..f3b483f 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.html
@@ -119,7 +119,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class IncrementalBackupManager
+public class IncrementalBackupManager
 extends BackupManager
 After a full backup was created, the incremental backup 
will only store the changes made after
  the last full or incremental backup. Creating the backup copies the logfiles 
in .logs and
@@ -213,8 +213,8 @@ extends 
private List<String>
-excludeAlreadyBackedUpWALs(List<String> logList,
-  List<BackupSystemTable.WALItem> logFromSystemTable)
+excludeAlreadyBackedUpAndProcV2WALs(List<String> logList,
+   List<BackupSystemTable.WALItem> logFromSystemTable)
 
 
List<String>
@@ -286,7 +286,7 @@ extends 
 
 LOG
-public static final org.slf4j.Logger LOG
+public static final org.slf4j.Logger LOG
 
 
 
@@ -303,7 +303,7 @@ extends 
 
 IncrementalBackupManager
-public IncrementalBackupManager(Connection conn,
+public IncrementalBackupManager(Connection conn,
 org.apache.hadoop.conf.Configuration conf)
  throws IOException
 
@@ -326,7 +326,7 @@ extends 
 
 getIncrBackupLogFileMap
-public HashMap<String, Long> getIncrBackupLogFileMap()
+public HashMap<String, Long> getIncrBackupLogFileMap()
  throws IOException
 Obtain the list of logs that need to be copied out for this 
incremental backup. The list is set
  in BackupInfo.
@@ -344,7 +344,7 @@ extends 
 
 getIncrBackupLogFileList
-public List<String> getIncrBackupLogFileList()
+public List<String> getIncrBackupLogFileList()
   throws IOException

[40/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCacheStats.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCacheStats.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCacheStats.html
index f1836f9..909e074 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCacheStats.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCacheStats.html
@@ -50,7 +50,7 @@ var activeTableTab = "activeTableTab";
 
 
 PrevClass
-NextClass
+NextClass
 
 
 Frames
@@ -372,7 +372,7 @@ extends 
 
 PrevClass
-NextClass
+NextClass
 
 
 Frames

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.html
new file mode 100644
index 000..65769c4
--- /dev/null
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.html
@@ -0,0 +1,373 @@
+BucketProtoUtils (Apache HBase 3.0.0-SNAPSHOT API)
+org.apache.hadoop.hbase.io.hfile.bucket
+Class BucketProtoUtils
+
+java.lang.Object
+  org.apache.hadoop.hbase.io.hfile.bucket.BucketProtoUtils
+
+@InterfaceAudience.Private
+final class BucketProtoUtils
+extends Object
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Modifier   Constructor and Description
+private    BucketProtoUtils()
+
+Method Summary
+
+All Methods  Static Methods  Concrete Methods
+
+Modifier and Type / Method and Description
+
+private static BlockType
+fromPb(org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BlockType blockType)
+
+(package private) static ConcurrentHashMap<BlockCacheKey,BucketCache.BucketEntry>
+fromPB(Map<Integer,String> deserializers,
+       org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BackingMap backingMap)
+
+private static org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BlockCacheKey
+toPB(BlockCacheKey key)
+
+private static org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BlockPriority
+toPB(BlockPriority p)
+
+private static org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BlockType
+toPB(BlockType blockType)
+
+private static org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BucketEntry
+toPB(BucketCache.BucketEntry entry)
+
+(package private) static org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BucketCacheEntry
+toPB(BucketCache cache)
+
+private static org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos.BackingMap
+toPB(Map<BlockCacheKey,BucketCache.BucketEntry> backingMap)
+
+
+
+
+
+
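
Since the summary above only lists signatures, a hedged sketch of the round trip these protobuf helpers enable may help. The real call sites are package-private inside BucketCache, so only the generic protobuf write/parse steps are shown; the persist/load method names and file handling here are illustrative, not the actual BucketCache code:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos;

final class BucketCachePersistenceSketch {
  // Writing: BucketProtoUtils.toPB(cache) builds a BucketCacheProtos.BucketCacheEntry
  // message describing the backing map, which protobuf can stream straight to disk.
  static void persist(BucketCacheProtos.BucketCacheEntry entry, String path) throws Exception {
    try (FileOutputStream out = new FileOutputStream(path)) {
      entry.writeTo(out);
    }
  }

  // Reading: parse the message back; BucketProtoUtils.fromPB(...) then rebuilds the
  // in-memory ConcurrentHashMap<BlockCacheKey, BucketEntry> from it.
  static BucketCacheProtos.BucketCacheEntry load(String path) throws Exception {
    try (FileInputStream in = new FileInputStream(path)) {
      return BucketCacheProtos.BucketCacheEntry.parseFrom(in);
    }
  }
}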

[14/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html
index bd3c59e..21e240a 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.html
@@ -33,62 +33,62 @@
 025import java.io.FileNotFoundException;
 026import java.io.FileOutputStream;
 027import java.io.IOException;
-028import java.io.ObjectInputStream;
-029import java.io.ObjectOutputStream;
-030import java.io.Serializable;
-031import java.nio.ByteBuffer;
-032import java.util.ArrayList;
-033import java.util.Comparator;
-034import java.util.HashSet;
-035import java.util.Iterator;
-036import java.util.List;
-037import java.util.Map;
-038import java.util.NavigableSet;
-039import java.util.PriorityQueue;
-040import java.util.Set;
-041import 
java.util.concurrent.ArrayBlockingQueue;
-042import 
java.util.concurrent.BlockingQueue;
-043import 
java.util.concurrent.ConcurrentHashMap;
-044import 
java.util.concurrent.ConcurrentMap;
-045import 
java.util.concurrent.ConcurrentSkipListSet;
-046import java.util.concurrent.Executors;
-047import 
java.util.concurrent.ScheduledExecutorService;
-048import java.util.concurrent.TimeUnit;
-049import 
java.util.concurrent.atomic.AtomicInteger;
-050import 
java.util.concurrent.atomic.AtomicLong;
-051import 
java.util.concurrent.atomic.LongAdder;
-052import java.util.concurrent.locks.Lock;
-053import 
java.util.concurrent.locks.ReentrantLock;
-054import 
java.util.concurrent.locks.ReentrantReadWriteLock;
-055import 
org.apache.hadoop.conf.Configuration;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.io.HeapSize;
-058import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-059import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-060import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;
-061import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;
-062import 
org.apache.hadoop.hbase.io.hfile.BlockType;
-063import 
org.apache.hadoop.hbase.io.hfile.CacheStats;
-064import 
org.apache.hadoop.hbase.io.hfile.Cacheable;
-065import 
org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
-066import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
-067import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
-068import 
org.apache.hadoop.hbase.io.hfile.CachedBlock;
-069import 
org.apache.hadoop.hbase.io.hfile.HFileBlock;
-070import 
org.apache.hadoop.hbase.nio.ByteBuff;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.HasThread;
-073import 
org.apache.hadoop.hbase.util.IdReadWriteLock;
-074import 
org.apache.hadoop.hbase.util.IdReadWriteLock.ReferenceType;
-075import 
org.apache.hadoop.hbase.util.UnsafeAvailChecker;
-076import 
org.apache.hadoop.util.StringUtils;
-077import 
org.apache.yetus.audience.InterfaceAudience;
-078import org.slf4j.Logger;
-079import org.slf4j.LoggerFactory;
-080
-081import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-082import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-083import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+028import java.io.Serializable;
+029import java.nio.ByteBuffer;
+030import java.util.ArrayList;
+031import java.util.Comparator;
+032import java.util.HashSet;
+033import java.util.Iterator;
+034import java.util.List;
+035import java.util.Map;
+036import java.util.NavigableSet;
+037import java.util.PriorityQueue;
+038import java.util.Set;
+039import 
java.util.concurrent.ArrayBlockingQueue;
+040import 
java.util.concurrent.BlockingQueue;
+041import 
java.util.concurrent.ConcurrentHashMap;
+042import 
java.util.concurrent.ConcurrentMap;
+043import 
java.util.concurrent.ConcurrentSkipListSet;
+044import java.util.concurrent.Executors;
+045import 
java.util.concurrent.ScheduledExecutorService;
+046import java.util.concurrent.TimeUnit;
+047import 
java.util.concurrent.atomic.AtomicInteger;
+048import 
java.util.concurrent.atomic.AtomicLong;
+049import 
java.util.concurrent.atomic.LongAdder;
+050import java.util.concurrent.locks.Lock;
+051import 
java.util.concurrent.locks.ReentrantLock;
+052import 
java.util.concurrent.locks.ReentrantReadWriteLock;
+053import 
org.apache.hadoop.conf.Configuration;
+054import 
org.apache.hadoop.hbase.HBaseConfiguration;
+055import 
org.apache.hadoop.hbase.io.HeapSize;
+056import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
+057import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+058import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;
+059import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;

[26/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
index b7b4236..3d1edb3 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata! + = See note on BLOCK_METADATA_SPACE above.
 252   * ++
 253   * </code>
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, boolean)
 255   */
-256  static final CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER =
-257      new CacheableDeserializer<Cacheable>() {
-258    @Override
-259    public HFileBlock deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260        throws IOException {
-261      // The buf has the file block followed by block metadata.
-262      // Set limit to just before the BLOCK_METADATA_SPACE then rewind.
-263      buf.limit(buf.limit() - BLOCK_METADATA_SPACE).rewind();
-264      // Get a new buffer to pass the HFileBlock for it to 'own'.
-265      ByteBuff newByteBuff;
-266      if (reuse) {
-267        newByteBuff = buf.slice();
-268      } else {
-269        int len = buf.limit();
-270        newByteBuff = new SingleByteBuff(ByteBuffer.allocate(len));
-271        newByteBuff.put(0, buf, buf.position(), len);
-272      }
-273      // Read out the BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274      buf.position(buf.limit());
-275      buf.limit(buf.limit() + HFileBlock.BLOCK_METADATA_SPACE);
-276      boolean usesChecksum = buf.get() == (byte) 1;
-277      long offset = buf.getLong();
-278      int nextBlockOnDiskSize = buf.getInt();
-279      HFileBlock hFileBlock =
-280          new HFileBlock(newByteBuff, usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281      return hFileBlock;
-282    }
-283
-284    @Override
-285    public int getDeserialiserIdentifier() {
-286      return DESERIALIZER_IDENTIFIER;
-287    }
-288
-289    @Override
-290    public HFileBlock deserialize(ByteBuff b) throws IOException {
-291      // Used only in tests
-292      return deserialize(b, false, MemoryType.EXCLUSIVE);
-293    }
-294  };
-295
-296  private static final int DESERIALIZER_IDENTIFIER;
-297  static {
-298    DESERIALIZER_IDENTIFIER =
-299        CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306    this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, boolean bufCopy) {
-314    init(that.blockType, that.onDiskSizeWithoutHeader,
-315        that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316        that.offset, that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317    if (bufCopy) {
-318      this.buf = new SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit())));
-319    } else {
-320      this.buf = that.buf.duplicate();
-321    }
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block from the given fields. This constructor
-326   * is used only while writing blocks and caching,
-327   * and is sitting in a byte buffer and we want to stuff the block into cache.
-328   *
-329   * <p>TODO: The caller presumes no checksumming
-330   * required of this block instance since going into cache; checksum already verified on
-331   * underlying block data pulled in from filesystem. Is that correct? What if cache is SSD?
+256  public static final CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER = new BlockDeserializer();
+257
+258  public static final class BlockDeserializer implements CacheableDeserializer<Cacheable> {
+259    private BlockDeserializer() {
+260    }
+261
+262    @Override
+263    public HFileBlock deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264        throws IOException {
+265      // The buf has the file block followed by block metadata.
+266      // Set limit to just before the BLOCK_METADATA_SPACE then rewind.
+267      buf.limit(buf.limit() - BLOCK_METADATA_SPACE).rewind();
+268      // Get a new buffer to pass the HFileBlock for it to 'own'.
+269      ByteBuff newByteBuff;
+270      if (reuse) {
+271        newByteBuff = buf.slice();
+272      } else {
+273        int len = buf.limit();
+274        newByteBuff = new SingleByteBuff(ByteBuffer.allocate(len));
+275        newByteBuff.put(0, buf, buf.position(), len);
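
A hedged sketch of the read-back pattern this refactor serves (not code from the commit): the identifier stored with a cached entry is resolved through CacheableDeserializerIdManager, and the returned deserializer, HFileBlock's BLOCK_DESERIALIZER in the common case, rebuilds the block from the raw buffer:

import java.io.IOException;
import org.apache.hadoop.hbase.io.hfile.Cacheable;
import org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
import org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
import org.apache.hadoop.hbase.nio.ByteBuff;

final class DeserializeSketch {
  static Cacheable readBack(ByteBuff buf, int deserializerId) throws IOException {
    // Look up whichever deserializer registered under this id and let it rebuild the block.
    CacheableDeserializer<Cacheable> d =
        CacheableDeserializerIdManager.getDeserializer(deserializerId);
    // reuse=false copies into a fresh buffer; EXCLUSIVE mirrors the test-only path above.
    return d.deserialize(buf, false, MemoryType.EXCLUSIVE);
  }
}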

[29/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
index b7b4236..3d1edb3 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockIterator.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata!  + = See note on 
BLOCK_METADATA_SPACE above.
 252   * ++
 253   * /code
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, 
boolean)
 255   */
-256  static final 
CacheableDeserializerCacheable BLOCK_DESERIALIZER =
-257  new 
CacheableDeserializerCacheable() {
-258@Override
-259public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260throws IOException {
-261  // The buf has the file block 
followed by block metadata.
-262  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
-263  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
-264  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
-265  ByteBuff newByteBuff;
-266  if (reuse) {
-267newByteBuff = buf.slice();
-268  } else {
-269int len = buf.limit();
-270newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
-271newByteBuff.put(0, buf, 
buf.position(), len);
-272  }
-273  // Read out the 
BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274  buf.position(buf.limit());
-275  buf.limit(buf.limit() + 
HFileBlock.BLOCK_METADATA_SPACE);
-276  boolean usesChecksum = buf.get() == 
(byte) 1;
-277  long offset = buf.getLong();
-278  int nextBlockOnDiskSize = 
buf.getInt();
-279  HFileBlock hFileBlock =
-280  new HFileBlock(newByteBuff, 
usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281  return hFileBlock;
-282}
-283
-284@Override
-285public int 
getDeserialiserIdentifier() {
-286  return DESERIALIZER_IDENTIFIER;
-287}
-288
-289@Override
-290public HFileBlock 
deserialize(ByteBuff b) throws IOException {
-291  // Used only in tests
-292  return deserialize(b, false, 
MemoryType.EXCLUSIVE);
-293}
-294  };
-295
-296  private static final int 
DESERIALIZER_IDENTIFIER;
-297  static {
-298DESERIALIZER_IDENTIFIER =
-299
CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow 
copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a 
shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, 
boolean bufCopy) {
-314init(that.blockType, 
that.onDiskSizeWithoutHeader,
-315
that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316that.offset, 
that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317if (bufCopy) {
-318  this.buf = new 
SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit(;
-319} else {
-320  this.buf = that.buf.duplicate();
-321}
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block 
from the given fields. This constructor
-326   * is used only while writing blocks 
and caching,
-327   * and is sitting in a byte buffer and 
we want to stuff the block into cache.
-328   *
-329   * pTODO: The caller presumes 
no checksumming
-330   * required of this block instance 
since going into cache; checksum already verified on
-331   * underlying block data pulled in from 
filesystem. Is that correct? What if cache is SSD?
+256  public static final 
CacheableDeserializerCacheable BLOCK_DESERIALIZER = new 
BlockDeserializer();
+257
+258  public static final class 
BlockDeserializer implements CacheableDeserializerCacheable {
+259private BlockDeserializer() {
+260}
+261
+262@Override
+263public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264throws IOException {
+265  // The buf has the file block 
followed by block metadata.
+266  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
+267  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
+268  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
+269  ByteBuff newByteBuff;
+270  if (reuse) {
+271newByteBuff = buf.slice();
+272  } else {
+273int len = buf.limit();
+274newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
+275newByteBuff.put(0, buf, 
buf.position(), 

[50/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/apache_hbase_reference_guide.pdf
--
diff --git a/apache_hbase_reference_guide.pdf b/apache_hbase_reference_guide.pdf
index 6dd953e..4a52551 100644
--- a/apache_hbase_reference_guide.pdf
+++ b/apache_hbase_reference_guide.pdf
@@ -5,16 +5,16 @@
 /Author (Apache HBase Team)
 /Creator (Asciidoctor PDF 1.5.0.alpha.15, based on Prawn 2.2.2)
 /Producer (Apache HBase Team)
-/ModDate (D:20180801142955+00'00')
-/CreationDate (D:20180801144546+00'00')
+/ModDate (D:20180802193210+00'00')
+/CreationDate (D:20180802194755+00'00')
 >>
 endobj
 2 0 obj
 << /Type /Catalog
 /Pages 3 0 R
 /Names 28 0 R
-/Outlines 4972 0 R
-/PageLabels 5223 0 R
+/Outlines 4987 0 R
+/PageLabels 5238 0 R
 /PageMode /UseOutlines
 /OpenAction [7 0 R /FitH 842.89]
 /ViewerPreferences << /DisplayDocTitle true
@@ -23,8 +23,8 @@ endobj
 endobj
 3 0 obj
 << /Type /Pages
-/Count 783
-/Kids [7 0 R 12 0 R 14 0 R ... (page tree object references elided) ...]

[20/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntry.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntry.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntry.html
index bd3c59e..21e240a 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntry.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.BucketEntry.html
@@ -33,62 +33,62 @@
 025import java.io.FileNotFoundException;
 026import java.io.FileOutputStream;
 027import java.io.IOException;
-028import java.io.ObjectInputStream;
-029import java.io.ObjectOutputStream;
-030import java.io.Serializable;
-031import java.nio.ByteBuffer;
-032import java.util.ArrayList;
-033import java.util.Comparator;
-034import java.util.HashSet;
-035import java.util.Iterator;
-036import java.util.List;
-037import java.util.Map;
-038import java.util.NavigableSet;
-039import java.util.PriorityQueue;
-040import java.util.Set;
-041import 
java.util.concurrent.ArrayBlockingQueue;
-042import 
java.util.concurrent.BlockingQueue;
-043import 
java.util.concurrent.ConcurrentHashMap;
-044import 
java.util.concurrent.ConcurrentMap;
-045import 
java.util.concurrent.ConcurrentSkipListSet;
-046import java.util.concurrent.Executors;
-047import 
java.util.concurrent.ScheduledExecutorService;
-048import java.util.concurrent.TimeUnit;
-049import 
java.util.concurrent.atomic.AtomicInteger;
-050import 
java.util.concurrent.atomic.AtomicLong;
-051import 
java.util.concurrent.atomic.LongAdder;
-052import java.util.concurrent.locks.Lock;
-053import 
java.util.concurrent.locks.ReentrantLock;
-054import 
java.util.concurrent.locks.ReentrantReadWriteLock;
-055import 
org.apache.hadoop.conf.Configuration;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.io.HeapSize;
-058import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-059import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-060import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;
-061import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;
-062import 
org.apache.hadoop.hbase.io.hfile.BlockType;
-063import 
org.apache.hadoop.hbase.io.hfile.CacheStats;
-064import 
org.apache.hadoop.hbase.io.hfile.Cacheable;
-065import 
org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
-066import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
-067import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
-068import 
org.apache.hadoop.hbase.io.hfile.CachedBlock;
-069import 
org.apache.hadoop.hbase.io.hfile.HFileBlock;
-070import 
org.apache.hadoop.hbase.nio.ByteBuff;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.HasThread;
-073import 
org.apache.hadoop.hbase.util.IdReadWriteLock;
-074import 
org.apache.hadoop.hbase.util.IdReadWriteLock.ReferenceType;
-075import 
org.apache.hadoop.hbase.util.UnsafeAvailChecker;
-076import 
org.apache.hadoop.util.StringUtils;
-077import 
org.apache.yetus.audience.InterfaceAudience;
-078import org.slf4j.Logger;
-079import org.slf4j.LoggerFactory;
-080
-081import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-082import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-083import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+028import java.io.Serializable;
+029import java.nio.ByteBuffer;
+030import java.util.ArrayList;
+031import java.util.Comparator;
+032import java.util.HashSet;
+033import java.util.Iterator;
+034import java.util.List;
+035import java.util.Map;
+036import java.util.NavigableSet;
+037import java.util.PriorityQueue;
+038import java.util.Set;
+039import 
java.util.concurrent.ArrayBlockingQueue;
+040import 
java.util.concurrent.BlockingQueue;
+041import 
java.util.concurrent.ConcurrentHashMap;
+042import 
java.util.concurrent.ConcurrentMap;
+043import 
java.util.concurrent.ConcurrentSkipListSet;
+044import java.util.concurrent.Executors;
+045import 
java.util.concurrent.ScheduledExecutorService;
+046import java.util.concurrent.TimeUnit;
+047import 
java.util.concurrent.atomic.AtomicInteger;
+048import 
java.util.concurrent.atomic.AtomicLong;
+049import 
java.util.concurrent.atomic.LongAdder;
+050import java.util.concurrent.locks.Lock;
+051import 
java.util.concurrent.locks.ReentrantLock;
+052import 
java.util.concurrent.locks.ReentrantReadWriteLock;
+053import 
org.apache.hadoop.conf.Configuration;
+054import 
org.apache.hadoop.hbase.HBaseConfiguration;
+055import 
org.apache.hadoop.hbase.io.HeapSize;
+056import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
+057import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+058import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;

[03/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testdevapidocs/org/apache/hadoop/hbase/HBaseCluster.html
--
diff --git a/testdevapidocs/org/apache/hadoop/hbase/HBaseCluster.html 
b/testdevapidocs/org/apache/hadoop/hbase/HBaseCluster.html
index d51b5ba..e6811bb 100644
--- a/testdevapidocs/org/apache/hadoop/hbase/HBaseCluster.html
+++ b/testdevapidocs/org/apache/hadoop/hbase/HBaseCluster.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":6,"i1":6,"i2":6,"i3":6,"i4":10,"i5":10,"i6":6,"i7":10,"i8":6,"i9":10,"i10":6,"i11":6,"i12":6,"i13":6,"i14":6,"i15":10,"i16":10,"i17":10,"i18":6,"i19":6,"i20":6,"i21":6,"i22":6,"i23":6,"i24":6,"i25":6,"i26":6,"i27":10,"i28":6,"i29":10,"i30":6,"i31":6,"i32":6,"i33":10,"i34":10,"i35":6,"i36":6,"i37":6,"i38":6};
+var methods = 
{"i0":6,"i1":6,"i2":6,"i3":6,"i4":10,"i5":10,"i6":6,"i7":10,"i8":6,"i9":10,"i10":6,"i11":6,"i12":6,"i13":6,"i14":6,"i15":6,"i16":10,"i17":10,"i18":10,"i19":6,"i20":6,"i21":6,"i22":6,"i23":6,"i24":6,"i25":6,"i26":6,"i27":6,"i28":6,"i29":6,"i30":10,"i31":6,"i32":10,"i33":6,"i34":6,"i35":6,"i36":10,"i37":6,"i38":6,"i39":10,"i40":6,"i41":6,"i42":6,"i43":6};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],4:["t3","Abstract Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -118,7 +118,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public abstract class HBaseCluster
+public abstract class HBaseCluster
 extends Object
 implements Closeable, org.apache.hadoop.conf.Configurable
 This class defines methods that can help with managing HBase clusters
@@ -290,50 +290,57 @@ implements Closeable
 
 
 abstract void
+killNameNode(org.apache.hadoop.hbase.ServerName serverName)
+Kills the namenode process if this is a distributed cluster, otherwise, this causes master to
+ exit doing basic clean up only.
+
+
+
+abstract void
 killRegionServer(org.apache.hadoop.hbase.ServerName serverName)
 Kills the region server process if this is a distributed cluster, otherwise
  this causes the region server to exit doing basic clean up only.
 
 
-
+
 abstract void
 killZkNode(org.apache.hadoop.hbase.ServerName serverName)
 Kills the zookeeper node process if this is a distributed cluster, otherwise,
  this causes master to exit doing basic clean up only.
 
 
-
+
 boolean
 restoreClusterMetrics(org.apache.hadoop.hbase.ClusterMetrics desiredStatus)
 Restores the cluster to given state if this is a real cluster,
  otherwise does nothing.
 
 
-
+
 boolean
 restoreInitialStatus()
 Restores the cluster to it's initial state if this is a real cluster,
  otherwise does nothing.
 
 
-
+
 void
 setConf(org.apache.hadoop.conf.Configuration conf)
 
-
+
 abstract void
 shutdown()
 Shut down the HBase cluster
 
 
-
+
 abstract void
 startDataNode(org.apache.hadoop.hbase.ServerName serverName)
 Starts a new datanode on the given hostname or if this is a mini/local cluster,
  silently logs warning message.
 
 
-
+
 abstract void
 startMaster(String hostname,
             int port)
@@ -341,7 +348,14 @@ implements Closeable
  starts a master locally.
 
 
-
+
+abstract void
+startNameNode(org.apache.hadoop.hbase.ServerName serverName)
+Starts a new namenode on the given hostname or if this is a mini/local cluster, silently logs
+ warning message.
+
+
+
 abstract void
 startRegionServer(String hostname,
                   int port)
@@ -349,7 +363,7 @@ implements Closeable
  starts a region server locally.
 
 
-
+
 abstract void
 startZkNode(String hostname,
             int port)
@@ -357,78 +371,98 @@ implements Closeable
  silently logs warning message.
 
 
-
+
 abstract void
 stopDataNode(org.apache.hadoop.hbase.ServerName serverName)
 Stops the datanode if this is a distributed cluster, otherwise
  silently logs warning message.
 
 
-
+
 abstract void
 stopMaster(org.apache.hadoop.hbase.ServerName serverName)
 Stops the given master, by attempting a gradual stop.
 
 
-
+
+abstract void
+stopNameNode(org.apache.hadoop.hbase.ServerName serverName)
+Stops the namenode if this is a distributed cluster,
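
A hedged sketch of how a chaos or integration test might drive the new namenode hooks (not part of the commit; the concrete HBaseCluster instance handed back by the test harness and the crude wait are assumptions):

import org.apache.hadoop.hbase.HBaseCluster;
import org.apache.hadoop.hbase.ServerName;

final class NameNodeBounceSketch {
  static void bounceNameNode(HBaseCluster cluster, ServerName nn) throws Exception {
    cluster.killNameNode(nn);   // hard-kill on a distributed cluster; warn-only on mini/local
    Thread.sleep(5000L);        // crude pause; a real action would poll HDFS health instead
    cluster.startNameNode(nn);  // restart on the same host, per the method summary above
  }
}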

[23/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
index b7b4236..3d1edb3 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.State.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata!  + = See note on 
BLOCK_METADATA_SPACE above.
 252   * ++
 253   * /code
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, 
boolean)
 255   */
-256  static final 
CacheableDeserializerCacheable BLOCK_DESERIALIZER =
-257  new 
CacheableDeserializerCacheable() {
-258@Override
-259public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260throws IOException {
-261  // The buf has the file block 
followed by block metadata.
-262  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
-263  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
-264  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
-265  ByteBuff newByteBuff;
-266  if (reuse) {
-267newByteBuff = buf.slice();
-268  } else {
-269int len = buf.limit();
-270newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
-271newByteBuff.put(0, buf, 
buf.position(), len);
-272  }
-273  // Read out the 
BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274  buf.position(buf.limit());
-275  buf.limit(buf.limit() + 
HFileBlock.BLOCK_METADATA_SPACE);
-276  boolean usesChecksum = buf.get() == 
(byte) 1;
-277  long offset = buf.getLong();
-278  int nextBlockOnDiskSize = 
buf.getInt();
-279  HFileBlock hFileBlock =
-280  new HFileBlock(newByteBuff, 
usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281  return hFileBlock;
-282}
-283
-284@Override
-285public int 
getDeserialiserIdentifier() {
-286  return DESERIALIZER_IDENTIFIER;
-287}
-288
-289@Override
-290public HFileBlock 
deserialize(ByteBuff b) throws IOException {
-291  // Used only in tests
-292  return deserialize(b, false, 
MemoryType.EXCLUSIVE);
-293}
-294  };
-295
-296  private static final int 
DESERIALIZER_IDENTIFIER;
-297  static {
-298DESERIALIZER_IDENTIFIER =
-299
CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow 
copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a 
shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, 
boolean bufCopy) {
-314init(that.blockType, 
that.onDiskSizeWithoutHeader,
-315
that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316that.offset, 
that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317if (bufCopy) {
-318  this.buf = new 
SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit(;
-319} else {
-320  this.buf = that.buf.duplicate();
-321}
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block 
from the given fields. This constructor
-326   * is used only while writing blocks 
and caching,
-327   * and is sitting in a byte buffer and 
we want to stuff the block into cache.
-328   *
-329   * pTODO: The caller presumes 
no checksumming
-330   * required of this block instance 
since going into cache; checksum already verified on
-331   * underlying block data pulled in from 
filesystem. Is that correct? What if cache is SSD?
+256  public static final 
CacheableDeserializerCacheable BLOCK_DESERIALIZER = new 
BlockDeserializer();
+257
+258  public static final class 
BlockDeserializer implements CacheableDeserializerCacheable {
+259private BlockDeserializer() {
+260}
+261
+262@Override
+263public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264throws IOException {
+265  // The buf has the file block 
followed by block metadata.
+266  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
+267  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
+268  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
+269  ByteBuff newByteBuff;
+270  if (reuse) {
+271newByteBuff = buf.slice();
+272  } else {
+273int len = buf.limit();
+274newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
+275newByteBuff.put(0, buf, 
buf.position(), len);

[30/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockDeserializer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockDeserializer.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockDeserializer.html
new file mode 100644
index 000..3d1edb3
--- /dev/null
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockDeserializer.html
@@ -0,0 +1,2186 @@
+Source code
+001/*
+002 * Licensed to the Apache Software 
Foundation (ASF) under one
+003 * or more contributor license 
agreements.  See the NOTICE file
+004 * distributed with this work for 
additional information
+005 * regarding copyright ownership.  The 
ASF licenses this file
+006 * to you under the Apache License, 
Version 2.0 (the
+007 * "License"); you may not use this file 
except in compliance
+008 * with the License.  You may obtain a 
copy of the License at
+009 *
+010 * 
http://www.apache.org/licenses/LICENSE-2.0
+011 *
+012 * Unless required by applicable law or 
agreed to in writing, software
+013 * distributed under the License is 
distributed on an "AS IS" BASIS,
+014 * WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express or implied.
+015 * See the License for the specific 
language governing permissions and
+016 * limitations under the License.
+017 */
+018package 
org.apache.hadoop.hbase.io.hfile;
+019
+020import java.io.DataInputStream;
+021import java.io.DataOutput;
+022import java.io.DataOutputStream;
+023import java.io.IOException;
+024import java.io.InputStream;
+025import java.nio.ByteBuffer;
+026import 
java.util.concurrent.atomic.AtomicReference;
+027import java.util.concurrent.locks.Lock;
+028import 
java.util.concurrent.locks.ReentrantLock;
+029
+030import 
org.apache.hadoop.fs.FSDataInputStream;
+031import 
org.apache.hadoop.fs.FSDataOutputStream;
+032import org.apache.hadoop.fs.Path;
+033import org.apache.hadoop.hbase.Cell;
+034import 
org.apache.hadoop.hbase.HConstants;
+035import 
org.apache.yetus.audience.InterfaceAudience;
+036import org.slf4j.Logger;
+037import org.slf4j.LoggerFactory;
+038import 
org.apache.hadoop.hbase.fs.HFileSystem;
+039import 
org.apache.hadoop.hbase.io.ByteArrayOutputStream;
+040import 
org.apache.hadoop.hbase.io.ByteBuffInputStream;
+041import 
org.apache.hadoop.hbase.io.ByteBufferWriterDataOutputStream;
+042import 
org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
+043import 
org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+044import 
org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext;
+045import 
org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultDecodingContext;
+046import 
org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultEncodingContext;
+047import 
org.apache.hadoop.hbase.io.encoding.HFileBlockEncodingContext;
+048import 
org.apache.hadoop.hbase.nio.ByteBuff;
+049import 
org.apache.hadoop.hbase.nio.MultiByteBuff;
+050import 
org.apache.hadoop.hbase.nio.SingleByteBuff;
+051import 
org.apache.hadoop.hbase.util.Bytes;
+052import 
org.apache.hadoop.hbase.util.ChecksumType;
+053import 
org.apache.hadoop.hbase.util.ClassSize;
+054import org.apache.hadoop.io.IOUtils;
+055
+056import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+057import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+058
+059/**
+060 * Cacheable Blocks of an {@link HFile} version 2 file.
+061 * Version 2 was introduced in hbase-0.92.0.
+062 *
+063 * <p>Version 1 was the original file block. Version 2 was introduced when we changed the hbase file
+064 * format to support multi-level block indexes and compound bloom filters (HBASE-3857). Support
+065 * for Version 1 was removed in hbase-1.3.0.
+066 *
+067 * <h3>HFileBlock: Version 2</h3>
+068 * In version 2, a block is structured as follows:
+069 * <ul>
+070 * <li><b>Header:</b> See Writer#putHeader() for where header is written; header total size is
+071 * HFILEBLOCK_HEADER_SIZE
+072 * <ul>
+073 * <li>0. blockType: Magic record identifying the {@link BlockType} (8 bytes):
+074 * e.g. <code>DATABLK*</code>
+075 * <li>1. onDiskSizeWithoutHeader: Compressed -- a.k.a 'on disk' -- block size, excluding header,
+076 * but including tailing checksum bytes (4 bytes)
+077 * <li>2. uncompressedSizeWithoutHeader: Uncompressed block size, excluding header, and excluding
+078 * checksum bytes (4 bytes)
+079 * <li>3. prevBlockOffset: The offset of the previous block of the same type (8 bytes). This is
+080 * used to navigate to the previous block without having to go to the block index
+081 * <li>4: For minorVersions >=1, the ordinal describing checksum type (1 byte)
+082 * <li>5: For minorVersions >=1, the number of data bytes/checksum chunk (4 bytes)
+083 * <li>6: onDiskDataSizeWithHeader: For minorVersions >=1, the size of data 'on
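
The field list above maps directly onto fixed-width reads. A hedged sketch (not in the commit) of walking a version-2 header with plain ByteBuffer calls, using the widths the javadoc gives; names mirror the javadoc fields:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

final class HFileV2HeaderSketch {
  static void dumpHeader(ByteBuffer b) {
    byte[] magic = new byte[8];
    b.get(magic);                                   // 0. blockType magic, e.g. "DATABLK*"
    int onDiskSizeWithoutHeader = b.getInt();       // 1. compressed size, excl. header, incl. checksums
    int uncompressedSizeWithoutHeader = b.getInt(); // 2. uncompressed size, excl. header and checksums
    long prevBlockOffset = b.getLong();             // 3. offset of previous block of the same type
    byte checksumType = b.get();                    // 4. checksum type ordinal (minorVersion >= 1)
    int bytesPerChecksum = b.getInt();              // 5. data bytes per checksum chunk
    int onDiskDataSizeWithHeader = b.getInt();      // 6. on-disk data size including header
    System.out.printf("%s disk=%d mem=%d prev=%d cksum=%d/%d dataWithHeader=%d%n",
        new String(magic, StandardCharsets.US_ASCII), onDiskSizeWithoutHeader,
        uncompressedSizeWithoutHeader, prevBlockOffset, checksumType,
        bytesPerChecksum, onDiskDataSizeWithHeader);
  }
}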

[11/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.html
index f2fd195..b293714 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.html
@@ -619,1696 +619,1698 @@
 611try {
 612  long procId =
 613  
master.createTable(tableDescriptor, splitKeys, req.getNonceGroup(), 
req.getNonce());
-614  return 
CreateTableResponse.newBuilder().setProcId(procId).build();
-615} catch (IOException ioe) {
-616  throw new ServiceException(ioe);
-617}
-618  }
-619
-620  @Override
-621  public DeleteColumnResponse 
deleteColumn(RpcController controller,
-622  DeleteColumnRequest req) throws 
ServiceException {
-623try {
-624  long procId = 
master.deleteColumn(
-625
ProtobufUtil.toTableName(req.getTableName()),
-626
req.getColumnName().toByteArray(),
-627req.getNonceGroup(),
-628req.getNonce());
-629  if (procId == -1) {
-630// This mean operation was not 
performed in server, so do not set any procId
-631return 
DeleteColumnResponse.newBuilder().build();
-632  } else {
-633return 
DeleteColumnResponse.newBuilder().setProcId(procId).build();
-634  }
-635} catch (IOException ioe) {
-636  throw new ServiceException(ioe);
-637}
-638  }
-639
-640  @Override
-641  public DeleteNamespaceResponse 
deleteNamespace(RpcController controller,
-642  DeleteNamespaceRequest request) 
throws ServiceException {
-643try {
-644  long procId = 
master.deleteNamespace(
-645request.getNamespaceName(),
-646request.getNonceGroup(),
-647request.getNonce());
-648  return 
DeleteNamespaceResponse.newBuilder().setProcId(procId).build();
-649} catch (IOException e) {
-650  throw new ServiceException(e);
-651}
-652  }
-653
-654  /**
-655   * Execute Delete Snapshot operation.
-656   * @return DeleteSnapshotResponse (a 
protobuf wrapped void) if the snapshot existed and was
-657   *deleted properly.
-658   * @throws ServiceException wrapping 
SnapshotDoesNotExistException if specified snapshot did not
-659   *exist.
-660   */
-661  @Override
-662  public DeleteSnapshotResponse 
deleteSnapshot(RpcController controller,
-663  DeleteSnapshotRequest request) 
throws ServiceException {
-664try {
-665  master.checkInitialized();
-666  
master.snapshotManager.checkSnapshotSupport();
-667
-668  
LOG.info(master.getClientIdAuditPrefix() + " delete " + 
request.getSnapshot());
-669  
master.snapshotManager.deleteSnapshot(request.getSnapshot());
-670  return 
DeleteSnapshotResponse.newBuilder().build();
-671} catch (IOException e) {
-672  throw new ServiceException(e);
-673}
-674  }
-675
-676  @Override
-677  public DeleteTableResponse 
deleteTable(RpcController controller,
-678  DeleteTableRequest request) throws 
ServiceException {
-679try {
-680  long procId = 
master.deleteTable(ProtobufUtil.toTableName(
-681  request.getTableName()), 
request.getNonceGroup(), request.getNonce());
-682  return 
DeleteTableResponse.newBuilder().setProcId(procId).build();
-683} catch (IOException ioe) {
-684  throw new ServiceException(ioe);
-685}
-686  }
-687
-688  @Override
-689  public TruncateTableResponse 
truncateTable(RpcController controller, TruncateTableRequest request)
-690  throws ServiceException {
-691try {
-692  long procId = 
master.truncateTable(
-693
ProtobufUtil.toTableName(request.getTableName()),
-694request.getPreserveSplits(),
-695request.getNonceGroup(),
-696request.getNonce());
-697  return 
TruncateTableResponse.newBuilder().setProcId(procId).build();
-698} catch (IOException ioe) {
-699  throw new ServiceException(ioe);
-700}
-701  }
-702
-703  @Override
-704  public DisableTableResponse 
disableTable(RpcController controller,
-705  DisableTableRequest request) throws 
ServiceException {
-706try {
-707  long procId = 
master.disableTable(
-708
ProtobufUtil.toTableName(request.getTableName()),
-709request.getNonceGroup(),
-710request.getNonce());
-711  return 
DisableTableResponse.newBuilder().setProcId(procId).build();
-712} catch (IOException ioe) {
-713  throw new ServiceException(ioe);
-714}
-715  }
-716
-717  @Override
-718  public EnableCatalogJanitorResponse 
enableCatalogJanitor(RpcController c,
-719  EnableCatalogJanitorRequest req) 
throws ServiceException {
-720
rpcPreCheck("enableCatalogJanitor");
-721return 

[31/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/CacheableDeserializerIdManager.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/CacheableDeserializerIdManager.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/CacheableDeserializerIdManager.html
index e50f682..7d5287a 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/CacheableDeserializerIdManager.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/CacheableDeserializerIdManager.html
@@ -33,37 +33,55 @@
 025import org.apache.yetus.audience.InterfaceAudience;
 026
 027/**
-028 * This class is used to manage the identifiers for
-029 * {@link CacheableDeserializer}
-030 */
-031@InterfaceAudience.Private
-032public class CacheableDeserializerIdManager {
-033  private static final Map<Integer, CacheableDeserializer<Cacheable>> registeredDeserializers = new HashMap<>();
-034  private static final AtomicInteger identifier = new AtomicInteger(0);
-035
-036  /**
-037   * Register the given cacheable deserializer and generate an unique identifier
-038   * id for it
-039   * @param cd
-040   * @return the identifier of given cacheable deserializer
-041   */
-042  public static int registerDeserializer(CacheableDeserializer<Cacheable> cd) {
-043    int idx = identifier.incrementAndGet();
-044    synchronized (registeredDeserializers) {
-045      registeredDeserializers.put(idx, cd);
-046    }
-047    return idx;
-048  }
-049
-050  /**
-051   * Get the cacheable deserializer as the given identifier Id
-052   * @param id
-053   * @return CacheableDeserializer
-054   */
-055  public static CacheableDeserializer<Cacheable> getDeserializer(int id) {
-056    return registeredDeserializers.get(id);
-057  }
-058}
+028 * This class is used to manage the identifiers for {@link CacheableDeserializer}.
+029 * All deserializers are registered with this Manager via the
+030 * {@link #registerDeserializer(CacheableDeserializer)}}. On registration, we return an
+031 * int *identifier* for this deserializer. The int identifier is passed to
+032 * {@link #getDeserializer(int)}} to obtain the registered deserializer instance.
+033 */
+034@InterfaceAudience.Private
+035public class CacheableDeserializerIdManager {
+036  private static final Map<Integer, CacheableDeserializer<Cacheable>> registeredDeserializers = new HashMap<>();
+037  private static final AtomicInteger identifier = new AtomicInteger(0);
+038
+039  /**
+040   * Register the given {@link Cacheable} -- usually an hfileblock instance, these implement
+041   * the Cacheable Interface -- deserializer and generate an unique identifier id for it and return
+042   * this as our result.
+043   * @return the identifier of given cacheable deserializer
+044   * @see #getDeserializer(int)
+045   */
+046  public static int registerDeserializer(CacheableDeserializer<Cacheable> cd) {
+047    int idx = identifier.incrementAndGet();
+048    synchronized (registeredDeserializers) {
+049      registeredDeserializers.put(idx, cd);
+050    }
+051    return idx;
+052  }
+053
+054  /**
+055   * Get the cacheable deserializer registered at the given identifier Id.
+056   * @see #registerDeserializer(CacheableDeserializer)
+057   */
+058  public static CacheableDeserializer<Cacheable> getDeserializer(int id) {
+059    return registeredDeserializers.get(id);
+060  }
+061
+062  /**
+063   * Snapshot a map of the current identifiers to class names for reconstruction on reading out
+064   * of a file.
+065   */
+066  public static Map<Integer,String> save() {
+067    Map<Integer, String> snapshot = new HashMap<>();
+068    synchronized (registeredDeserializers) {
+069      for (Map.Entry<Integer, CacheableDeserializer<Cacheable>> entry :
+070          registeredDeserializers.entrySet()) {
+071        snapshot.put(entry.getKey(), entry.getValue().getClass().getName());
+072      }
+073    }
+074    return snapshot;
+075  }
+076}
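
One plausible consumer of the new save() snapshot (a sketch under assumptions, not code from this commit): persist the id-to-class map alongside the cache file, then, after a restart has re-registered the deserializers (possibly under different ids), translate old ids to the new ones by class name:

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;

final class DeserializerIdRemapSketch {
  /** Build an old-id to new-id translation from a previously saved snapshot. */
  static Map<Integer, Integer> remap(Map<Integer, String> savedIdToClass) {
    // Invert the current registrations: class name -> current id.
    Map<String, Integer> currentClassToId = new HashMap<>();
    for (Map.Entry<Integer, String> e : CacheableDeserializerIdManager.save().entrySet()) {
      currentClassToId.put(e.getValue(), e.getKey());
    }
    // Map each persisted id to the id the same class holds now, if still registered.
    Map<Integer, Integer> oldToNew = new HashMap<>();
    for (Map.Entry<Integer, String> e : savedIdToClass.entrySet()) {
      Integer newId = currentClassToId.get(e.getValue());
      if (newId != null) {
        oldToNew.put(e.getKey(), newId);
      }
    }
    return oldToNew;
  }
}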
 
 
 



[42/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
index 7c7d1af..f93a683 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
@@ -113,7 +113,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-static class BucketCache.RAMQueueEntry
+static class BucketCache.RAMQueueEntry
 extends Object
 Block Entry stored in the memory with key, data and so on
 
@@ -199,9 +199,8 @@ extends Object
 
 
 BucketCache.BucketEntry
-writeToCache(IOEngine ioEngine,
+writeToCache(IOEngine ioEngine,
              BucketAllocator bucketAllocator,
-             UniqueIndexMap<Integer> deserialiserMap,
              LongAdder realCacheSize)
 
 
@@ -232,7 +231,7 @@ extends Object
 
 
 key
-private BlockCacheKey key
+private BlockCacheKey key
 
 
 
@@ -241,7 +240,7 @@ extends Object
 
 
 data
-private Cacheable data
+private Cacheable data
 
 
 
@@ -250,7 +249,7 @@ extends Object
 
 
 accessCounter
-private long accessCounter
+private long accessCounter
 
 
 
@@ -259,7 +258,7 @@ extends Object
 
 
 inMemory
-private boolean inMemory
+private boolean inMemory
 
 
 
@@ -276,7 +275,7 @@ extends Object
 
 
 RAMQueueEntry
-public RAMQueueEntry(BlockCacheKey bck,
+public RAMQueueEntry(BlockCacheKey bck,
                      Cacheable data,
                      long accessCounter,
                      boolean inMemory)
@@ -296,7 +295,7 @@ extends Object
 
 
 getData
-public Cacheable getData()
+public Cacheable getData()
 
 
 
@@ -305,7 +304,7 @@ extends Object
 
 
 getKey
-public BlockCacheKey getKey()
+public BlockCacheKey getKey()
 
 
 
@@ -314,18 +313,17 @@ extends Object
 
 
 access
-public void access(long accessCounter)
+public void access(long accessCounter)
 
 
-
+
 
 
 
 
 writeToCache
-public BucketCache.BucketEntry writeToCache(IOEngine ioEngine,
+public BucketCache.BucketEntry writeToCache(IOEngine ioEngine,
                                             BucketAllocator bucketAllocator,
-                                            UniqueIndexMap<Integer> deserialiserMap,
                                             LongAdder realCacheSize)
                                      throws CacheFullException,
                                             IOException,

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
index 1772d99..ddadbd6 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.SharedMemoryBucketEntry.html
@@ -122,7 +122,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-static class BucketCache.SharedMemoryBucketEntry
+static class BucketCache.SharedMemoryBucketEntry
 extends BucketCache.BucketEntry
 
 
@@ -222,7 +222,7 @@ extends BucketCache.BucketEntry
-access,
 deserializerReference,
 getCachedTime,
 getLength,
 getPriority,
 offset,
 setDeserialiserReference
+access,
 deserializerReference,
 

[34/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.WALItem.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.WALItem.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.WALItem.html
index 1e0659a..981ebcd 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.WALItem.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.WALItem.html
@@ -73,1969 +73,1975 @@
 065import 
org.apache.hadoop.hbase.client.Table;
 066import 
org.apache.hadoop.hbase.client.TableDescriptor;
 067import 
org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-068import 
org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos;
-069import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-070import 
org.apache.hadoop.hbase.util.Bytes;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.Pair;
-073import 
org.apache.yetus.audience.InterfaceAudience;
-074import org.slf4j.Logger;
-075import org.slf4j.LoggerFactory;
-076
-077/**
-078 * This class provides API to access 
 backup system table<br>
-079 * Backup system table 
 schema:<br>
-080 * <p>
-081 * <ul>
-082 * <li>1. Backup sessions rowkey= 
 "session:"+backupId; value =serialized BackupInfo</li>
-083 * <li>2. Backup start code rowkey 
 = "startcode:"+backupRoot; value = startcode</li>
-084 * <li>3. Incremental backup set 
 rowkey="incrbackupset:"+backupRoot; value=[list of tables]</li>
-085 * <li>4. Table-RS-timestamp map 
 rowkey="trslm:"+backupRoot+table_name; value = map[RS-> last WAL
-086 * timestamp]</li>
-087 * <li>5. RS - WAL ts map 
 rowkey="rslogts:"+backupRoot +server; value = last WAL timestamp</li>
-088 * <li>6. WALs recorded 
 rowkey="wals:"+WAL unique file name; value = backupId and full WAL file
-089 * name</li>
-090 * </ul>
-091 * </p>
-092 */
-093@InterfaceAudience.Private
-094public final class BackupSystemTable 
implements Closeable {
-095
-096  private static final Logger LOG = 
LoggerFactory.getLogger(BackupSystemTable.class);
-097
-098  static class WALItem {
-099String backupId;
-100String walFile;
-101String backupRoot;
-102
-103WALItem(String backupId, String 
walFile, String backupRoot) {
-104  this.backupId = backupId;
-105  this.walFile = walFile;
-106  this.backupRoot = backupRoot;
-107}
-108
-109public String getBackupId() {
-110  return backupId;
-111}
-112
-113public String getWalFile() {
-114  return walFile;
-115}
-116
-117public String getBackupRoot() {
-118  return backupRoot;
-119}
-120
-121@Override
-122public String toString() {
-123  return Path.SEPARATOR + backupRoot 
+ Path.SEPARATOR + backupId + Path.SEPARATOR + walFile;
-124}
-125  }
-126
-127  /**
-128   * Backup system table (main) name
-129   */
-130  private TableName tableName;
-131
-132  /**
-133   * Backup System table name for bulk 
loaded files. We keep all bulk loaded file references in a
-134   * separate table because we have to 
isolate general backup operations: create, merge etc from
-135   * activity of RegionObserver, which 
controls process of a bulk loading
-136   * {@link 
org.apache.hadoop.hbase.backup.BackupObserver}
-137   */
-138  private TableName bulkLoadTableName;
-139
-140  /**
-141   * Stores backup sessions (contexts)
-142   */
-143  final static byte[] SESSIONS_FAMILY = 
"session".getBytes();
-144  /**
-145   * Stores other meta
-146   */
-147  final static byte[] META_FAMILY = 
"meta".getBytes();
-148  final static byte[] BULK_LOAD_FAMILY = 
"bulk".getBytes();
-149  /**
-150   * Connection to HBase cluster, shared 
among all instances
-151   */
-152  private final Connection connection;
-153
-154  private final static String 
BACKUP_INFO_PREFIX = "session:";
-155  private final static String 
START_CODE_ROW = "startcode:";
-156  private final static byte[] 
ACTIVE_SESSION_ROW = "activesession:".getBytes();
-157  private final static byte[] 
ACTIVE_SESSION_COL = "c".getBytes();
-158
-159  private final static byte[] 
ACTIVE_SESSION_YES = "yes".getBytes();
-160  private final static byte[] 
ACTIVE_SESSION_NO = "no".getBytes();
-161
-162  private final static String 
INCR_BACKUP_SET = "incrbackupset:";
-163  private final static String 
TABLE_RS_LOG_MAP_PREFIX = "trslm:";
-164  private final static String 
RS_LOG_TS_PREFIX = "rslogts:";
-165
-166  private final static String 
BULK_LOAD_PREFIX = "bulk:";
-167  private final static byte[] 
BULK_LOAD_PREFIX_BYTES = BULK_LOAD_PREFIX.getBytes();
-168  private final static byte[] 
DELETE_OP_ROW = "delete_op_row".getBytes();
-169  private final static byte[] 
MERGE_OP_ROW = "merge_op_row".getBytes();
-170
-171  final static byte[] TBL_COL = 
Bytes.toBytes("tbl");
-172  final static byte[] FAM_COL = 

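The javadoc in the hunk above spells out the row-key layout of the backup system table. A short hedged sketch of how those keys compose (the helper class below is hypothetical; the real BackupSystemTable builds its keys internally from the constants visible in the diff, e.g. BACKUP_INFO_PREFIX = "session:"):

    import org.apache.hadoop.hbase.util.Bytes;

    final class BackupRowKeySketch {
      // Prefixes copied from the constants shown in the diff above.
      static byte[] sessionRow(String backupId)                 { return Bytes.toBytes("session:" + backupId); }
      static byte[] startCodeRow(String backupRoot)             { return Bytes.toBytes("startcode:" + backupRoot); }
      static byte[] incrBackupSetRow(String backupRoot)         { return Bytes.toBytes("incrbackupset:" + backupRoot); }
      static byte[] tableRsLogMapRow(String root, String table) { return Bytes.toBytes("trslm:" + root + table); }
      static byte[] rsLogTsRow(String root, String server)      { return Bytes.toBytes("rslogts:" + root + server); }
      static byte[] walRow(String walFileName)                  { return Bytes.toBytes("wals:" + walFileName); }
    }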
[33/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
index 1e0659a..981ebcd 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.html
@@ -73,1969 +73,1975 @@
 065import 
org.apache.hadoop.hbase.client.Table;
 066import 
org.apache.hadoop.hbase.client.TableDescriptor;
 067import 
org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-068import 
org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos;
-069import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-070import 
org.apache.hadoop.hbase.util.Bytes;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.Pair;
-073import 
org.apache.yetus.audience.InterfaceAudience;
-074import org.slf4j.Logger;
-075import org.slf4j.LoggerFactory;
-076
-077/**
-078 * This class provides API to access 
 backup system table<br>
-079 * Backup system table 
 schema:<br>
-080 * <p>
-081 * <ul>
-082 * <li>1. Backup sessions rowkey= 
 "session:"+backupId; value =serialized BackupInfo</li>
-083 * <li>2. Backup start code rowkey 
 = "startcode:"+backupRoot; value = startcode</li>
-084 * <li>3. Incremental backup set 
 rowkey="incrbackupset:"+backupRoot; value=[list of tables]</li>
-085 * <li>4. Table-RS-timestamp map 
 rowkey="trslm:"+backupRoot+table_name; value = map[RS-> last WAL
-086 * timestamp]</li>
-087 * <li>5. RS - WAL ts map 
 rowkey="rslogts:"+backupRoot +server; value = last WAL timestamp</li>
-088 * <li>6. WALs recorded 
 rowkey="wals:"+WAL unique file name; value = backupId and full WAL file
-089 * name</li>
-090 * </ul>
-091 * </p>
-092 */
-093@InterfaceAudience.Private
-094public final class BackupSystemTable 
implements Closeable {
-095
-096  private static final Logger LOG = 
LoggerFactory.getLogger(BackupSystemTable.class);
-097
-098  static class WALItem {
-099String backupId;
-100String walFile;
-101String backupRoot;
-102
-103WALItem(String backupId, String 
walFile, String backupRoot) {
-104  this.backupId = backupId;
-105  this.walFile = walFile;
-106  this.backupRoot = backupRoot;
-107}
-108
-109public String getBackupId() {
-110  return backupId;
-111}
-112
-113public String getWalFile() {
-114  return walFile;
-115}
-116
-117public String getBackupRoot() {
-118  return backupRoot;
-119}
-120
-121@Override
-122public String toString() {
-123  return Path.SEPARATOR + backupRoot 
+ Path.SEPARATOR + backupId + Path.SEPARATOR + walFile;
-124}
-125  }
-126
-127  /**
-128   * Backup system table (main) name
-129   */
-130  private TableName tableName;
-131
-132  /**
-133   * Backup System table name for bulk 
loaded files. We keep all bulk loaded file references in a
-134   * separate table because we have to 
isolate general backup operations: create, merge etc from
-135   * activity of RegionObserver, which 
controls process of a bulk loading
-136   * {@link 
org.apache.hadoop.hbase.backup.BackupObserver}
-137   */
-138  private TableName bulkLoadTableName;
-139
-140  /**
-141   * Stores backup sessions (contexts)
-142   */
-143  final static byte[] SESSIONS_FAMILY = 
"session".getBytes();
-144  /**
-145   * Stores other meta
-146   */
-147  final static byte[] META_FAMILY = 
"meta".getBytes();
-148  final static byte[] BULK_LOAD_FAMILY = 
"bulk".getBytes();
-149  /**
-150   * Connection to HBase cluster, shared 
among all instances
-151   */
-152  private final Connection connection;
-153
-154  private final static String 
BACKUP_INFO_PREFIX = "session:";
-155  private final static String 
START_CODE_ROW = "startcode:";
-156  private final static byte[] 
ACTIVE_SESSION_ROW = "activesession:".getBytes();
-157  private final static byte[] 
ACTIVE_SESSION_COL = "c".getBytes();
-158
-159  private final static byte[] 
ACTIVE_SESSION_YES = "yes".getBytes();
-160  private final static byte[] 
ACTIVE_SESSION_NO = "no".getBytes();
-161
-162  private final static String 
INCR_BACKUP_SET = "incrbackupset:";
-163  private final static String 
TABLE_RS_LOG_MAP_PREFIX = "trslm:";
-164  private final static String 
RS_LOG_TS_PREFIX = "rslogts:";
-165
-166  private final static String 
BULK_LOAD_PREFIX = "bulk:";
-167  private final static byte[] 
BULK_LOAD_PREFIX_BYTES = BULK_LOAD_PREFIX.getBytes();
-168  private final static byte[] 
DELETE_OP_ROW = "delete_op_row".getBytes();
-169  private final static byte[] 
MERGE_OP_ROW = "merge_op_row".getBytes();
-170
-171  final static byte[] TBL_COL = 
Bytes.toBytes("tbl");
-172  final static byte[] FAM_COL = 
Bytes.toBytes("fam");
-173  final static byte[] PATH_COL = 

[10/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/AbstractStateMachineTableProcedure.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/AbstractStateMachineTableProcedure.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/AbstractStateMachineTableProcedure.html
index 69db023..59daaeb 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/AbstractStateMachineTableProcedure.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/AbstractStateMachineTableProcedure.html
@@ -196,7 +196,7 @@
 188  throw new 
UnknownRegionException("No RegionState found for " + ri.getEncodedName());
 189}
 190if (!rs.isOpened()) {
-191  throw new 
DoNotRetryRegionException(ri.getEncodedName() + " is not OPEN");
+191  throw new 
DoNotRetryRegionException(ri.getEncodedName() + " is not OPEN; regionState=" + 
rs);
 192}
 193if (ri.isSplitParent()) {
 194  throw new 
DoNotRetryRegionException(ri.getEncodedName() +

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.html
index 32d662d..e5a5866 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.html
@@ -102,7 +102,7 @@
 094  }
 095
 096  // TODO: Move out... in the 
acquireLock()
-097  LOG.debug("Waiting for '" + 
getTableName() + "' regions in transition");
+097  LOG.debug("Waiting for RIT for 
{}", this);
 098  regions = 
env.getAssignmentManager().getRegionStates().getRegionsOfTable(getTableName());
 099  assert regions != null 
 && !regions.isEmpty() : "unexpected 0 regions";
 100  
ProcedureSyncWait.waitRegionInTransition(env, regions);
@@ -113,29 +113,29 @@
 105  
setNextState(DeleteTableState.DELETE_TABLE_REMOVE_FROM_META);
 106  break;
 107case 
DELETE_TABLE_REMOVE_FROM_META:
-108  LOG.debug("delete '" + 
getTableName() + "' regions from META");
+108  LOG.debug("Deleting regions 
from META for {}", this);
 109  
DeleteTableProcedure.deleteFromMeta(env, getTableName(), regions);
 110  
setNextState(DeleteTableState.DELETE_TABLE_CLEAR_FS_LAYOUT);
 111  break;
 112case 
DELETE_TABLE_CLEAR_FS_LAYOUT:
-113  LOG.debug("delete '" + 
getTableName() + "' from filesystem");
+113  LOG.debug("Deleting regions 
from filesystem for {}", this);
 114  
DeleteTableProcedure.deleteFromFs(env, getTableName(), regions, true);
 115  
setNextState(DeleteTableState.DELETE_TABLE_UPDATE_DESC_CACHE);
 116  regions = null;
 117  break;
 118case 
DELETE_TABLE_UPDATE_DESC_CACHE:
-119  LOG.debug("delete '" + 
getTableName() + "' descriptor");
+119  LOG.debug("Deleting descriptor 
for {}", this);
 120  
DeleteTableProcedure.deleteTableDescriptorCache(env, getTableName());
 121  
setNextState(DeleteTableState.DELETE_TABLE_UNASSIGN_REGIONS);
 122  break;
 123case 
DELETE_TABLE_UNASSIGN_REGIONS:
-124  LOG.debug("delete '" + 
getTableName() + "' assignment state");
+124  LOG.debug("Deleting assignment 
state for {}", this);
 125  
DeleteTableProcedure.deleteAssignmentState(env, getTableName());
 126  
setNextState(DeleteTableState.DELETE_TABLE_POST_OPERATION);
 127  break;
 128case 
DELETE_TABLE_POST_OPERATION:
 129  postDelete(env);
-130  LOG.debug("delete '" + 
getTableName() + "' completed");
+130  LOG.debug("Finished {}", 
this);
 131  return Flow.NO_MORE_STATE;
 132default:
 133  throw new 
UnsupportedOperationException("unhandled state=" + state);

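The hunk above swaps string concatenation for SLF4J parameterized logging. A generic before/after sketch of the idiom (class and variable names here are illustrative, not the HBase procedure classes):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    final class LoggingSketch {
      private static final Logger LOG = LoggerFactory.getLogger(LoggingSketch.class);

      void onStep(Object procedure) {
        // Before: the message string is assembled even when DEBUG is disabled.
        // LOG.debug("delete '" + procedure + "' regions from META");
        // After: formatting is deferred until the level is enabled, and the
        // procedure's toString() already carries the table name and proc id.
        LOG.debug("Deleting regions from META for {}", procedure);
      }
    }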
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
index 2f83467..3e6a53e 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/procedure/DisableTableProcedure.html
@@ -150,7 +150,7 @@
 142  if (isRollbackSupported(state)) {
 143  

[13/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.html
new file mode 100644
index 000..80852ec
--- /dev/null
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.html
@@ -0,0 +1,263 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+Source code
+
+
+
+
+001/*
+002 * Copyright The Apache Software 
Foundation
+003 *
+004 * Licensed to the Apache Software 
Foundation (ASF) under one
+005 * or more contributor license 
agreements.  See the NOTICE file
+006 * distributed with this work for 
additional information
+007 * regarding copyright ownership.  The 
ASF licenses this file
+008 * to you under the Apache License, 
Version 2.0 (the
+009 * "License"); you may not use this file 
except in compliance
+010 * with the License.  You may obtain a 
copy of the License at
+011 *
+012 * 
http://www.apache.org/licenses/LICENSE-2.0
+013 *
+014 * Unless required by applicable law or 
agreed to in writing, software
+015
+016 * distributed under the License is 
distributed on an "AS IS" BASIS,
+017 * WITHOUT WARRANTIES OR CONDITIONS OF 
ANY KIND, either express or implied.
+018 * See the License for the specific 
language governing permissions and
+019 * limitations under the License.
+020 */
+021package 
org.apache.hadoop.hbase.io.hfile.bucket;
+022
+023import java.io.IOException;
+024import java.util.Map;
+025import 
java.util.concurrent.ConcurrentHashMap;
+026
+027import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+028import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;
+029import 
org.apache.hadoop.hbase.io.hfile.BlockType;
+030import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
+031import 
org.apache.hadoop.hbase.io.hfile.HFileBlock;
+032import 
org.apache.yetus.audience.InterfaceAudience;
+033
+034import 
org.apache.hadoop.hbase.shaded.protobuf.generated.BucketCacheProtos;
+035
+036@InterfaceAudience.Private
+037final class BucketProtoUtils {
+038  private BucketProtoUtils() {
+039
+040  }
+041
+042  static 
BucketCacheProtos.BucketCacheEntry toPB(BucketCache cache) {
+043return 
BucketCacheProtos.BucketCacheEntry.newBuilder()
+044
.setCacheCapacity(cache.getMaxSize())
+045
.setIoClass(cache.ioEngine.getClass().getName())
+046
.setMapClass(cache.backingMap.getClass().getName())
+047
.putAllDeserializers(CacheableDeserializerIdManager.save())
+048
.setBackingMap(BucketProtoUtils.toPB(cache.backingMap))
+049.build();
+050  }
+051
+052  private static 
BucketCacheProtos.BackingMap toPB(
+053  Map<BlockCacheKey, 
BucketCache.BucketEntry> backingMap) {
+054BucketCacheProtos.BackingMap.Builder 
builder = BucketCacheProtos.BackingMap.newBuilder();
+055for (Map.Entry<BlockCacheKey, 
BucketCache.BucketEntry> entry : backingMap.entrySet()) {
+056  
builder.addEntry(BucketCacheProtos.BackingMapEntry.newBuilder()
+057  .setKey(toPB(entry.getKey()))
+058  
.setValue(toPB(entry.getValue()))
+059  .build());
+060}
+061return builder.build();
+062  }
+063
+064  private static 
BucketCacheProtos.BlockCacheKey toPB(BlockCacheKey key) {
+065return 
BucketCacheProtos.BlockCacheKey.newBuilder()
+066
.setHfilename(key.getHfileName())
+067.setOffset(key.getOffset())
+068
.setPrimaryReplicaBlock(key.isPrimary())
+069
.setBlockType(toPB(key.getBlockType()))
+070.build();
+071  }
+072
+073  private static 
BucketCacheProtos.BlockType toPB(BlockType blockType) {
+074switch(blockType) {
+075  case DATA:
+076return 
BucketCacheProtos.BlockType.data;
+077  case META:
+078return 
BucketCacheProtos.BlockType.meta;
+079  case TRAILER:
+080return 
BucketCacheProtos.BlockType.trailer;
+081  case INDEX_V1:
+082return 
BucketCacheProtos.BlockType.index_v1;
+083  case FILE_INFO:
+084return 
BucketCacheProtos.BlockType.file_info;
+085  case LEAF_INDEX:
+086return 
BucketCacheProtos.BlockType.leaf_index;
+087  case ROOT_INDEX:
+088return 
BucketCacheProtos.BlockType.root_index;
+089  case BLOOM_CHUNK:
+090return 
BucketCacheProtos.BlockType.bloom_chunk;
+091  case ENCODED_DATA:
+092return 
BucketCacheProtos.BlockType.encoded_data;
+093  case GENERAL_BLOOM_META:
+094return 
BucketCacheProtos.BlockType.general_bloom_meta;
+095  case INTERMEDIATE_INDEX:
+096return 
BucketCacheProtos.BlockType.intermediate_index;
+097  case DELETE_FAMILY_BLOOM_META:
+098return 
BucketCacheProtos.BlockType.delete_family_bloom_meta;
+099  default:
+100throw new Error("Unrecognized 

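The switch above maps HFileBlock BlockType constants onto the BucketCacheProtos enum when the backing map is persisted. A hedged sketch of the reverse direction, abridged to a few cases; fromPB is a hypothetical name and the real restore path may organize this differently.

    // Assumes the same imports as BucketProtoUtils above.
    static BlockType fromPB(BucketCacheProtos.BlockType blockType) {
      switch (blockType) {
        case data:         return BlockType.DATA;
        case meta:         return BlockType.META;
        case encoded_data: return BlockType.ENCODED_DATA;
        // ... the remaining constants map one-to-one, mirroring the switch above ...
        default:
          throw new IllegalArgumentException("Unrecognized BlockType " + blockType);
      }
    }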
[38/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/ipc/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/ipc/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/ipc/package-tree.html
index 6a2998c..7600159 100644
--- a/devapidocs/org/apache/hadoop/hbase/ipc/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/ipc/package-tree.html
@@ -349,9 +349,9 @@
 
 java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable)
 
+org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceFactoryImpl.SourceStorage
 org.apache.hadoop.hbase.ipc.CallEvent.Type
 org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.BufferCallAction
-org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceFactoryImpl.SourceStorage
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/mapreduce/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/mapreduce/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/mapreduce/package-tree.html
index 4335db6..ff8a2f3 100644
--- a/devapidocs/org/apache/hadoop/hbase/mapreduce/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/mapreduce/package-tree.html
@@ -293,9 +293,9 @@
 
 java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable)
 
+org.apache.hadoop.hbase.mapreduce.RowCounter.RowCounterMapper.Counters
 org.apache.hadoop.hbase.mapreduce.CellCounter.CellCounterMapper.Counters
 org.apache.hadoop.hbase.mapreduce.TableSplit.Version
-org.apache.hadoop.hbase.mapreduce.RowCounter.RowCounterMapper.Counters
 org.apache.hadoop.hbase.mapreduce.SyncTable.SyncMapper.Counter
 
 



[21/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
index b7b4236..3d1edb3 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata!  + = See note on 
BLOCK_METADATA_SPACE above.
 252   * ++
 253   * </code>
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, 
boolean)
 255   */
-256  static final 
CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER =
-257  new 
CacheableDeserializer<Cacheable>() {
-258@Override
-259public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260throws IOException {
-261  // The buf has the file block 
followed by block metadata.
-262  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
-263  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
-264  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
-265  ByteBuff newByteBuff;
-266  if (reuse) {
-267newByteBuff = buf.slice();
-268  } else {
-269int len = buf.limit();
-270newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
-271newByteBuff.put(0, buf, 
buf.position(), len);
-272  }
-273  // Read out the 
BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274  buf.position(buf.limit());
-275  buf.limit(buf.limit() + 
HFileBlock.BLOCK_METADATA_SPACE);
-276  boolean usesChecksum = buf.get() == 
(byte) 1;
-277  long offset = buf.getLong();
-278  int nextBlockOnDiskSize = 
buf.getInt();
-279  HFileBlock hFileBlock =
-280  new HFileBlock(newByteBuff, 
usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281  return hFileBlock;
-282}
-283
-284@Override
-285public int 
getDeserialiserIdentifier() {
-286  return DESERIALIZER_IDENTIFIER;
-287}
-288
-289@Override
-290public HFileBlock 
deserialize(ByteBuff b) throws IOException {
-291  // Used only in tests
-292  return deserialize(b, false, 
MemoryType.EXCLUSIVE);
-293}
-294  };
-295
-296  private static final int 
DESERIALIZER_IDENTIFIER;
-297  static {
-298DESERIALIZER_IDENTIFIER =
-299
CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow 
copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a 
shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, 
boolean bufCopy) {
-314init(that.blockType, 
that.onDiskSizeWithoutHeader,
-315
that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316that.offset, 
that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317if (bufCopy) {
-318  this.buf = new 
SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit(;
-319} else {
-320  this.buf = that.buf.duplicate();
-321}
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block 
from the given fields. This constructor
-326   * is used only while writing blocks 
and caching,
-327   * and is sitting in a byte buffer and 
we want to stuff the block into cache.
-328   *
-329   * <p>TODO: The caller presumes 
no checksumming
-330   * required of this block instance 
since going into cache; checksum already verified on
-331   * underlying block data pulled in from 
filesystem. Is that correct? What if cache is SSD?
+256  public static final 
CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER = new 
BlockDeserializer();
+257
+258  public static final class 
BlockDeserializer implements CacheableDeserializer<Cacheable> {
+259private BlockDeserializer() {
+260}
+261
+262@Override
+263public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264throws IOException {
+265  // The buf has the file block 
followed by block metadata.
+266  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
+267  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
+268  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
+269  ByteBuff newByteBuff;
+270  if (reuse) {
+271newByteBuff = buf.slice();
+272  } else {
+273int len = buf.limit();
+274newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
+275newByteBuff.put(0, buf, 
buf.position(), len);
+276  }
+277  // Read out the 
BLOCK_METADATA_SPACE content 

[12/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.BalanceSwitchMode.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.BalanceSwitchMode.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.BalanceSwitchMode.html
index f2fd195..b293714 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.BalanceSwitchMode.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/master/MasterRpcServices.BalanceSwitchMode.html
@@ -619,1696 +619,1698 @@
 611try {
 612  long procId =
 613  
master.createTable(tableDescriptor, splitKeys, req.getNonceGroup(), 
req.getNonce());
-614  return 
CreateTableResponse.newBuilder().setProcId(procId).build();
-615} catch (IOException ioe) {
-616  throw new ServiceException(ioe);
-617}
-618  }
-619
-620  @Override
-621  public DeleteColumnResponse 
deleteColumn(RpcController controller,
-622  DeleteColumnRequest req) throws 
ServiceException {
-623try {
-624  long procId = 
master.deleteColumn(
-625
ProtobufUtil.toTableName(req.getTableName()),
-626
req.getColumnName().toByteArray(),
-627req.getNonceGroup(),
-628req.getNonce());
-629  if (procId == -1) {
-630// This mean operation was not 
performed in server, so do not set any procId
-631return 
DeleteColumnResponse.newBuilder().build();
-632  } else {
-633return 
DeleteColumnResponse.newBuilder().setProcId(procId).build();
-634  }
-635} catch (IOException ioe) {
-636  throw new ServiceException(ioe);
-637}
-638  }
-639
-640  @Override
-641  public DeleteNamespaceResponse 
deleteNamespace(RpcController controller,
-642  DeleteNamespaceRequest request) 
throws ServiceException {
-643try {
-644  long procId = 
master.deleteNamespace(
-645request.getNamespaceName(),
-646request.getNonceGroup(),
-647request.getNonce());
-648  return 
DeleteNamespaceResponse.newBuilder().setProcId(procId).build();
-649} catch (IOException e) {
-650  throw new ServiceException(e);
-651}
-652  }
-653
-654  /**
-655   * Execute Delete Snapshot operation.
-656   * @return DeleteSnapshotResponse (a 
protobuf wrapped void) if the snapshot existed and was
-657   *deleted properly.
-658   * @throws ServiceException wrapping 
SnapshotDoesNotExistException if specified snapshot did not
-659   *exist.
-660   */
-661  @Override
-662  public DeleteSnapshotResponse 
deleteSnapshot(RpcController controller,
-663  DeleteSnapshotRequest request) 
throws ServiceException {
-664try {
-665  master.checkInitialized();
-666  
master.snapshotManager.checkSnapshotSupport();
-667
-668  
LOG.info(master.getClientIdAuditPrefix() + " delete " + 
request.getSnapshot());
-669  
master.snapshotManager.deleteSnapshot(request.getSnapshot());
-670  return 
DeleteSnapshotResponse.newBuilder().build();
-671} catch (IOException e) {
-672  throw new ServiceException(e);
-673}
-674  }
-675
-676  @Override
-677  public DeleteTableResponse 
deleteTable(RpcController controller,
-678  DeleteTableRequest request) throws 
ServiceException {
-679try {
-680  long procId = 
master.deleteTable(ProtobufUtil.toTableName(
-681  request.getTableName()), 
request.getNonceGroup(), request.getNonce());
-682  return 
DeleteTableResponse.newBuilder().setProcId(procId).build();
-683} catch (IOException ioe) {
-684  throw new ServiceException(ioe);
-685}
-686  }
-687
-688  @Override
-689  public TruncateTableResponse 
truncateTable(RpcController controller, TruncateTableRequest request)
-690  throws ServiceException {
-691try {
-692  long procId = 
master.truncateTable(
-693
ProtobufUtil.toTableName(request.getTableName()),
-694request.getPreserveSplits(),
-695request.getNonceGroup(),
-696request.getNonce());
-697  return 
TruncateTableResponse.newBuilder().setProcId(procId).build();
-698} catch (IOException ioe) {
-699  throw new ServiceException(ioe);
-700}
-701  }
-702
-703  @Override
-704  public DisableTableResponse 
disableTable(RpcController controller,
-705  DisableTableRequest request) throws 
ServiceException {
-706try {
-707  long procId = 
master.disableTable(
-708
ProtobufUtil.toTableName(request.getTableName()),
-709request.getNonceGroup(),
-710request.getNonce());
-711  return 
DisableTableResponse.newBuilder().setProcId(procId).build();
-712} catch (IOException ioe) {
-713  throw new ServiceException(ioe);
-714}
-715  }
-716
-717  @Override
-718  public EnableCatalogJanitorResponse 
enableCatalogJanitor(RpcController c,
-719  EnableCatalogJanitorRequest req) 
throws ServiceException {
-720

[09/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.Providers.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.Providers.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.Providers.html
index d2d8da1..5bbbf0c 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.Providers.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/wal/WALFactory.Providers.html
@@ -90,391 +90,392 @@
 082  static final String 
DEFAULT_WAL_PROVIDER = Providers.defaultProvider.name();
 083
 084  public static final String 
META_WAL_PROVIDER = "hbase.wal.meta_provider";
-085  static final String 
DEFAULT_META_WAL_PROVIDER = Providers.defaultProvider.name();
-086
-087  final String factoryId;
-088  private final WALProvider provider;
-089  // The meta updates are written to a 
different wal. If this
-090  // regionserver holds meta regions, 
then this ref will be non-null.
-091  // lazily intialized; most 
RegionServers don't deal with META
-092  private final 
AtomicReference<WALProvider> metaProvider = new 
AtomicReference<>();
-093
-094  /**
-095   * Configuration-specified WAL Reader 
used when a custom reader is requested
-096   */
-097  private final Class<? extends 
AbstractFSWALProvider.Reader> logReaderClass;
-098
-099  /**
-100   * How long to attempt opening 
in-recovery wals
-101   */
-102  private final int timeoutMillis;
-103
-104  private final Configuration conf;
-105
-106  // Used for the singleton WALFactory, 
see below.
-107  private WALFactory(Configuration conf) 
{
-108// this code is duplicated here so we 
can keep our members final.
-109// until we've moved reader/writer 
construction down into providers, this initialization must
-110// happen prior to provider 
initialization, in case they need to instantiate a reader/writer.
-111timeoutMillis = 
conf.getInt("hbase.hlog.open.timeout", 30);
-112/* TODO Both of these are probably 
specific to the fs wal provider */
-113logReaderClass = 
conf.getClass("hbase.regionserver.hlog.reader.impl", ProtobufLogReader.class,
-114  
AbstractFSWALProvider.Reader.class);
-115this.conf = conf;
-116// end required early 
initialization
-117
-118// this instance can't create wals, 
just reader/writers.
-119provider = null;
-120factoryId = SINGLETON_ID;
-121  }
-122
-123  @VisibleForTesting
-124  public Class<? extends 
WALProvider> getProviderClass(String key, String defaultValue) {
-125try {
-126  Providers provider = 
Providers.valueOf(conf.get(key, defaultValue));
-127  if (provider != 
Providers.defaultProvider) {
-128// User gives a wal provider 
explicitly, just use that one
-129return provider.clazz;
-130  }
-131  // AsyncFSWAL has better 
performance in most cases, and also uses less resources, we will try
-132  // to use it if possible. But it 
deeply hacks into the internal of DFSClient so will be easily
-133  // broken when upgrading hadoop. If 
it is broken, then we fall back to use FSHLog.
-134  if (AsyncFSWALProvider.load()) {
-135return 
AsyncFSWALProvider.class;
-136  } else {
-137return FSHLogProvider.class;
-138  }
-139} catch (IllegalArgumentException 
exception) {
-140  // Fall back to them specifying a 
class name
-141  // Note that the passed default 
class shouldn't actually be used, since the above only fails
-142  // when there is a config value 
present.
-143  return conf.getClass(key, 
Providers.defaultProvider.clazz, WALProvider.class);
-144}
-145  }
-146
-147  static WALProvider 
createProvider(Class<? extends WALProvider> clazz) throws IOException {
-148LOG.info("Instantiating WALProvider 
of type {}", clazz);
-149try {
-150  return 
clazz.getDeclaredConstructor().newInstance();
-151} catch (Exception e) {
-152  LOG.error("couldn't set up 
WALProvider, the configured class is " + clazz);
-153  LOG.debug("Exception details for 
failure to load WALProvider.", e);
-154  throw new IOException("couldn't set 
up WALProvider", e);
-155}
-156  }
-157
-158  /**
-159   * @param conf must not be null, will 
keep a reference to read params in later reader/writer
-160   *  instances.
-161   * @param factoryId a unique identifier 
for this factory. used i.e. by filesystem implementations
-162   *  to make a directory
-163   */
-164  public WALFactory(Configuration conf, 
String factoryId) throws IOException {
-165// default 
enableSyncReplicationWALProvider is true, only disable 
SyncReplicationWALProvider
-166// for HMaster or HRegionServer which 
take system table only. See HBASE-1
-167this(conf, factoryId, true);
-168  }
-169
-170  /**
-171   * @param conf must not be null, will 
keep a reference to read params in later reader/writer
-172   *  

[22/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
index b7b4236..3d1edb3 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.Writer.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata!  + = See note on 
BLOCK_METADATA_SPACE above.
 252   * ++
 253   * </code>
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, 
boolean)
 255   */
-256  static final 
CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER =
-257  new 
CacheableDeserializer<Cacheable>() {
-258@Override
-259public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260throws IOException {
-261  // The buf has the file block 
followed by block metadata.
-262  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
-263  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
-264  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
-265  ByteBuff newByteBuff;
-266  if (reuse) {
-267newByteBuff = buf.slice();
-268  } else {
-269int len = buf.limit();
-270newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
-271newByteBuff.put(0, buf, 
buf.position(), len);
-272  }
-273  // Read out the 
BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274  buf.position(buf.limit());
-275  buf.limit(buf.limit() + 
HFileBlock.BLOCK_METADATA_SPACE);
-276  boolean usesChecksum = buf.get() == 
(byte) 1;
-277  long offset = buf.getLong();
-278  int nextBlockOnDiskSize = 
buf.getInt();
-279  HFileBlock hFileBlock =
-280  new HFileBlock(newByteBuff, 
usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281  return hFileBlock;
-282}
-283
-284@Override
-285public int 
getDeserialiserIdentifier() {
-286  return DESERIALIZER_IDENTIFIER;
-287}
-288
-289@Override
-290public HFileBlock 
deserialize(ByteBuff b) throws IOException {
-291  // Used only in tests
-292  return deserialize(b, false, 
MemoryType.EXCLUSIVE);
-293}
-294  };
-295
-296  private static final int 
DESERIALIZER_IDENTIFIER;
-297  static {
-298DESERIALIZER_IDENTIFIER =
-299
CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow 
copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a 
shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, 
boolean bufCopy) {
-314init(that.blockType, 
that.onDiskSizeWithoutHeader,
-315
that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316that.offset, 
that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317if (bufCopy) {
-318  this.buf = new 
SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit(;
-319} else {
-320  this.buf = that.buf.duplicate();
-321}
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block 
from the given fields. This constructor
-326   * is used only while writing blocks 
and caching,
-327   * and is sitting in a byte buffer and 
we want to stuff the block into cache.
-328   *
-329   * <p>TODO: The caller presumes 
no checksumming
-330   * required of this block instance 
since going into cache; checksum already verified on
-331   * underlying block data pulled in from 
filesystem. Is that correct? What if cache is SSD?
+256  public static final 
CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER = new 
BlockDeserializer();
+257
+258  public static final class 
BlockDeserializer implements CacheableDeserializer<Cacheable> {
+259private BlockDeserializer() {
+260}
+261
+262@Override
+263public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264throws IOException {
+265  // The buf has the file block 
followed by block metadata.
+266  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
+267  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
+268  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
+269  ByteBuff newByteBuff;
+270  if (reuse) {
+271newByteBuff = buf.slice();
+272  } else {
+273int len = buf.limit();
+274newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
+275newByteBuff.put(0, buf, 
buf.position(), len);
+276  }
+277  // Read 

[28/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockWritable.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockWritable.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockWritable.html
index b7b4236..3d1edb3 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockWritable.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.BlockWritable.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata!  + = See note on 
BLOCK_METADATA_SPACE above.
 252   * ++
 253   * </code>
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, 
boolean)
 255   */
-256  static final 
CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER =
-257  new 
CacheableDeserializer<Cacheable>() {
-258@Override
-259public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260throws IOException {
-261  // The buf has the file block 
followed by block metadata.
-262  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
-263  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
-264  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
-265  ByteBuff newByteBuff;
-266  if (reuse) {
-267newByteBuff = buf.slice();
-268  } else {
-269int len = buf.limit();
-270newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
-271newByteBuff.put(0, buf, 
buf.position(), len);
-272  }
-273  // Read out the 
BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274  buf.position(buf.limit());
-275  buf.limit(buf.limit() + 
HFileBlock.BLOCK_METADATA_SPACE);
-276  boolean usesChecksum = buf.get() == 
(byte) 1;
-277  long offset = buf.getLong();
-278  int nextBlockOnDiskSize = 
buf.getInt();
-279  HFileBlock hFileBlock =
-280  new HFileBlock(newByteBuff, 
usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281  return hFileBlock;
-282}
-283
-284@Override
-285public int 
getDeserialiserIdentifier() {
-286  return DESERIALIZER_IDENTIFIER;
-287}
-288
-289@Override
-290public HFileBlock 
deserialize(ByteBuff b) throws IOException {
-291  // Used only in tests
-292  return deserialize(b, false, 
MemoryType.EXCLUSIVE);
-293}
-294  };
-295
-296  private static final int 
DESERIALIZER_IDENTIFIER;
-297  static {
-298DESERIALIZER_IDENTIFIER =
-299
CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow 
copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a 
shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, 
boolean bufCopy) {
-314init(that.blockType, 
that.onDiskSizeWithoutHeader,
-315
that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316that.offset, 
that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317if (bufCopy) {
-318  this.buf = new 
SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit(;
-319} else {
-320  this.buf = that.buf.duplicate();
-321}
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block 
from the given fields. This constructor
-326   * is used only while writing blocks 
and caching,
-327   * and is sitting in a byte buffer and 
we want to stuff the block into cache.
-328   *
-329   * <p>TODO: The caller presumes 
no checksumming
-330   * required of this block instance 
since going into cache; checksum already verified on
-331   * underlying block data pulled in from 
filesystem. Is that correct? What if cache is SSD?
+256  public static final 
CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER = new 
BlockDeserializer();
+257
+258  public static final class 
BlockDeserializer implements CacheableDeserializer<Cacheable> {
+259private BlockDeserializer() {
+260}
+261
+262@Override
+263public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264throws IOException {
+265  // The buf has the file block 
followed by block metadata.
+266  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
+267  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
+268  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
+269  ByteBuff newByteBuff;
+270  if (reuse) {
+271newByteBuff = buf.slice();
+272  } else {
+273int len = buf.limit();
+274newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
+275newByteBuff.put(0, buf, 
buf.position(), 

[15/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.WriterThread.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.WriterThread.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.WriterThread.html
index bd3c59e..21e240a 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.WriterThread.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.WriterThread.html
@@ -33,62 +33,62 @@
 025import java.io.FileNotFoundException;
 026import java.io.FileOutputStream;
 027import java.io.IOException;
-028import java.io.ObjectInputStream;
-029import java.io.ObjectOutputStream;
-030import java.io.Serializable;
-031import java.nio.ByteBuffer;
-032import java.util.ArrayList;
-033import java.util.Comparator;
-034import java.util.HashSet;
-035import java.util.Iterator;
-036import java.util.List;
-037import java.util.Map;
-038import java.util.NavigableSet;
-039import java.util.PriorityQueue;
-040import java.util.Set;
-041import 
java.util.concurrent.ArrayBlockingQueue;
-042import 
java.util.concurrent.BlockingQueue;
-043import 
java.util.concurrent.ConcurrentHashMap;
-044import 
java.util.concurrent.ConcurrentMap;
-045import 
java.util.concurrent.ConcurrentSkipListSet;
-046import java.util.concurrent.Executors;
-047import 
java.util.concurrent.ScheduledExecutorService;
-048import java.util.concurrent.TimeUnit;
-049import 
java.util.concurrent.atomic.AtomicInteger;
-050import 
java.util.concurrent.atomic.AtomicLong;
-051import 
java.util.concurrent.atomic.LongAdder;
-052import java.util.concurrent.locks.Lock;
-053import 
java.util.concurrent.locks.ReentrantLock;
-054import 
java.util.concurrent.locks.ReentrantReadWriteLock;
-055import 
org.apache.hadoop.conf.Configuration;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.io.HeapSize;
-058import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-059import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-060import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;
-061import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;
-062import 
org.apache.hadoop.hbase.io.hfile.BlockType;
-063import 
org.apache.hadoop.hbase.io.hfile.CacheStats;
-064import 
org.apache.hadoop.hbase.io.hfile.Cacheable;
-065import 
org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
-066import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
-067import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
-068import 
org.apache.hadoop.hbase.io.hfile.CachedBlock;
-069import 
org.apache.hadoop.hbase.io.hfile.HFileBlock;
-070import 
org.apache.hadoop.hbase.nio.ByteBuff;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.HasThread;
-073import 
org.apache.hadoop.hbase.util.IdReadWriteLock;
-074import 
org.apache.hadoop.hbase.util.IdReadWriteLock.ReferenceType;
-075import 
org.apache.hadoop.hbase.util.UnsafeAvailChecker;
-076import 
org.apache.hadoop.util.StringUtils;
-077import 
org.apache.yetus.audience.InterfaceAudience;
-078import org.slf4j.Logger;
-079import org.slf4j.LoggerFactory;
-080
-081import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-082import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-083import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+028import java.io.Serializable;
+029import java.nio.ByteBuffer;
+030import java.util.ArrayList;
+031import java.util.Comparator;
+032import java.util.HashSet;
+033import java.util.Iterator;
+034import java.util.List;
+035import java.util.Map;
+036import java.util.NavigableSet;
+037import java.util.PriorityQueue;
+038import java.util.Set;
+039import 
java.util.concurrent.ArrayBlockingQueue;
+040import 
java.util.concurrent.BlockingQueue;
+041import 
java.util.concurrent.ConcurrentHashMap;
+042import 
java.util.concurrent.ConcurrentMap;
+043import 
java.util.concurrent.ConcurrentSkipListSet;
+044import java.util.concurrent.Executors;
+045import 
java.util.concurrent.ScheduledExecutorService;
+046import java.util.concurrent.TimeUnit;
+047import 
java.util.concurrent.atomic.AtomicInteger;
+048import 
java.util.concurrent.atomic.AtomicLong;
+049import 
java.util.concurrent.atomic.LongAdder;
+050import java.util.concurrent.locks.Lock;
+051import 
java.util.concurrent.locks.ReentrantLock;
+052import 
java.util.concurrent.locks.ReentrantReadWriteLock;
+053import 
org.apache.hadoop.conf.Configuration;
+054import 
org.apache.hadoop.hbase.HBaseConfiguration;
+055import 
org.apache.hadoop.hbase.io.HeapSize;
+056import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
+057import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+058import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;

[18/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
index bd3c59e..21e240a 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.RAMQueueEntry.html
@@ -33,62 +33,62 @@
 025import java.io.FileNotFoundException;
 026import java.io.FileOutputStream;
 027import java.io.IOException;
-028import java.io.ObjectInputStream;
-029import java.io.ObjectOutputStream;
-030import java.io.Serializable;
-031import java.nio.ByteBuffer;
-032import java.util.ArrayList;
-033import java.util.Comparator;
-034import java.util.HashSet;
-035import java.util.Iterator;
-036import java.util.List;
-037import java.util.Map;
-038import java.util.NavigableSet;
-039import java.util.PriorityQueue;
-040import java.util.Set;
-041import 
java.util.concurrent.ArrayBlockingQueue;
-042import 
java.util.concurrent.BlockingQueue;
-043import 
java.util.concurrent.ConcurrentHashMap;
-044import 
java.util.concurrent.ConcurrentMap;
-045import 
java.util.concurrent.ConcurrentSkipListSet;
-046import java.util.concurrent.Executors;
-047import 
java.util.concurrent.ScheduledExecutorService;
-048import java.util.concurrent.TimeUnit;
-049import 
java.util.concurrent.atomic.AtomicInteger;
-050import 
java.util.concurrent.atomic.AtomicLong;
-051import 
java.util.concurrent.atomic.LongAdder;
-052import java.util.concurrent.locks.Lock;
-053import 
java.util.concurrent.locks.ReentrantLock;
-054import 
java.util.concurrent.locks.ReentrantReadWriteLock;
-055import 
org.apache.hadoop.conf.Configuration;
-056import 
org.apache.hadoop.hbase.HBaseConfiguration;
-057import 
org.apache.hadoop.hbase.io.HeapSize;
-058import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-059import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
-060import 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil;
-061import 
org.apache.hadoop.hbase.io.hfile.BlockPriority;
-062import 
org.apache.hadoop.hbase.io.hfile.BlockType;
-063import 
org.apache.hadoop.hbase.io.hfile.CacheStats;
-064import 
org.apache.hadoop.hbase.io.hfile.Cacheable;
-065import 
org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType;
-066import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
-067import 
org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
-068import 
org.apache.hadoop.hbase.io.hfile.CachedBlock;
-069import 
org.apache.hadoop.hbase.io.hfile.HFileBlock;
-070import 
org.apache.hadoop.hbase.nio.ByteBuff;
-071import 
org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-072import 
org.apache.hadoop.hbase.util.HasThread;
-073import 
org.apache.hadoop.hbase.util.IdReadWriteLock;
-074import 
org.apache.hadoop.hbase.util.IdReadWriteLock.ReferenceType;
-075import 
org.apache.hadoop.hbase.util.UnsafeAvailChecker;
-076import 
org.apache.hadoop.util.StringUtils;
-077import 
org.apache.yetus.audience.InterfaceAudience;
-078import org.slf4j.Logger;
-079import org.slf4j.LoggerFactory;
-080
-081import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
-082import 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-083import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+028import java.io.Serializable;
+029import java.nio.ByteBuffer;
+030import java.util.ArrayList;
+031import java.util.Comparator;
+032import java.util.HashSet;
+033import java.util.Iterator;
+034import java.util.List;
+035import java.util.Map;
+036import java.util.NavigableSet;
+037import java.util.PriorityQueue;
+038import java.util.Set;
+039import 
java.util.concurrent.ArrayBlockingQueue;
+040import 
java.util.concurrent.BlockingQueue;
+041import 
java.util.concurrent.ConcurrentHashMap;
+042import 
java.util.concurrent.ConcurrentMap;
+043import 
java.util.concurrent.ConcurrentSkipListSet;
+044import java.util.concurrent.Executors;
+045import 
java.util.concurrent.ScheduledExecutorService;
+046import java.util.concurrent.TimeUnit;
+047import 
java.util.concurrent.atomic.AtomicInteger;
+048import 
java.util.concurrent.atomic.AtomicLong;
+049import 
java.util.concurrent.atomic.LongAdder;
+050import java.util.concurrent.locks.Lock;
+051import 
java.util.concurrent.locks.ReentrantLock;
+052import 
java.util.concurrent.locks.ReentrantReadWriteLock;
+053import 
org.apache.hadoop.conf.Configuration;
+054import 
org.apache.hadoop.hbase.HBaseConfiguration;
+055import 
org.apache.hadoop.hbase.io.HeapSize;
+056import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
+057import 
org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+058import 

[24/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
index b7b4236..3d1edb3 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/io/hfile/HFileBlock.PrefetchedHeader.html
@@ -259,1863 +259,1867 @@
 251   * + Metadata!  + = See note on 
BLOCK_METADATA_SPACE above.
 252   * ++
 253   * </code>
-254   * @see #serialize(ByteBuffer)
+254   * @see #serialize(ByteBuffer, 
boolean)
 255   */
-256  static final 
CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER =
-257  new 
CacheableDeserializer<Cacheable>() {
-258@Override
-259public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
-260throws IOException {
-261  // The buf has the file block 
followed by block metadata.
-262  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
-263  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
-264  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
-265  ByteBuff newByteBuff;
-266  if (reuse) {
-267newByteBuff = buf.slice();
-268  } else {
-269int len = buf.limit();
-270newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
-271newByteBuff.put(0, buf, 
buf.position(), len);
-272  }
-273  // Read out the 
BLOCK_METADATA_SPACE content and shove into our HFileBlock.
-274  buf.position(buf.limit());
-275  buf.limit(buf.limit() + 
HFileBlock.BLOCK_METADATA_SPACE);
-276  boolean usesChecksum = buf.get() == 
(byte) 1;
-277  long offset = buf.getLong();
-278  int nextBlockOnDiskSize = 
buf.getInt();
-279  HFileBlock hFileBlock =
-280  new HFileBlock(newByteBuff, 
usesChecksum, memType, offset, nextBlockOnDiskSize, null);
-281  return hFileBlock;
-282}
-283
-284@Override
-285public int 
getDeserialiserIdentifier() {
-286  return DESERIALIZER_IDENTIFIER;
-287}
-288
-289@Override
-290public HFileBlock 
deserialize(ByteBuff b) throws IOException {
-291  // Used only in tests
-292  return deserialize(b, false, 
MemoryType.EXCLUSIVE);
-293}
-294  };
-295
-296  private static final int 
DESERIALIZER_IDENTIFIER;
-297  static {
-298DESERIALIZER_IDENTIFIER =
-299
CacheableDeserializerIdManager.registerDeserializer(BLOCK_DESERIALIZER);
-300  }
-301
-302  /**
-303   * Copy constructor. Creates a shallow 
copy of {@code that}'s buffer.
-304   */
-305  private HFileBlock(HFileBlock that) {
-306this(that, false);
-307  }
-308
-309  /**
-310   * Copy constructor. Creates a 
shallow/deep copy of {@code that}'s buffer as per the boolean
-311   * param.
-312   */
-313  private HFileBlock(HFileBlock that, 
boolean bufCopy) {
-314init(that.blockType, 
that.onDiskSizeWithoutHeader,
-315
that.uncompressedSizeWithoutHeader, that.prevBlockOffset,
-316that.offset, 
that.onDiskDataSizeWithHeader, that.nextBlockOnDiskSize, that.fileContext);
-317if (bufCopy) {
-318  this.buf = new 
SingleByteBuff(ByteBuffer.wrap(that.buf.toBytes(0, that.buf.limit(;
-319} else {
-320  this.buf = that.buf.duplicate();
-321}
-322  }
-323
-324  /**
-325   * Creates a new {@link HFile} block from the given fields. This constructor
-326   * is used only while writing blocks and caching, when the block bytes are
-327   * already sitting in a byte buffer and we want to stuff the block into cache.
-328   *
-329   * <p>TODO: The caller presumes no checksumming
-330   * required of this block instance since going into cache; checksum already verified on
-331   * underlying block data pulled in from filesystem. Is that correct? What if cache is SSD?
+256  public static final CacheableDeserializer<Cacheable> BLOCK_DESERIALIZER = new BlockDeserializer();
+257
+258  public static final class BlockDeserializer implements CacheableDeserializer<Cacheable> {
+259private BlockDeserializer() {
+260}
+261
+262@Override
+263public HFileBlock 
deserialize(ByteBuff buf, boolean reuse, MemoryType memType)
+264throws IOException {
+265  // The buf has the file block 
followed by block metadata.
+266  // Set limit to just before the 
BLOCK_METADATA_SPACE then rewind.
+267  buf.limit(buf.limit() - 
BLOCK_METADATA_SPACE).rewind();
+268  // Get a new buffer to pass the 
HFileBlock for it to 'own'.
+269  ByteBuff newByteBuff;
+270  if (reuse) {
+271newByteBuff = buf.slice();
+272  } else {
+273int len = buf.limit();
+274newByteBuff = new 
SingleByteBuff(ByteBuffer.allocate(len));
+275newByteBuff.put(0, buf, 
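
The BlockDeserializer above depends on a simple cached-entry layout: the serialized block
bytes come first, followed by a small fixed metadata tail (a one-byte checksum flag, an
8-byte offset and a 4-byte nextBlockOnDiskSize, i.e. 13 bytes), and deserialize() first caps
the buffer limit to hide that tail, takes the payload the block will 'own', then widens the
limit again to read the metadata. Below is a minimal, self-contained sketch of that slicing
pattern using only java.nio.ByteBuffer; the class name and the METADATA_SPACE constant are
illustrative stand-ins, not the actual HFileBlock members.

import java.nio.ByteBuffer;

public class CachedBlockLayoutSketch {
  // Stand-in for HFileBlock.BLOCK_METADATA_SPACE: 1 (checksum flag) + 8 (offset) + 4 (next block size).
  static final int METADATA_SPACE = 1 + 8 + 4;

  public static void main(String[] args) {
    // Build a fake cached entry: a 32-byte payload followed by the trailing metadata.
    ByteBuffer cached = ByteBuffer.allocate(32 + METADATA_SPACE);
    cached.position(32);
    cached.put((byte) 1).putLong(4096L).putInt(512).flip();

    // Deserialize-side view: cap the limit before the metadata, slice the payload, then read the tail.
    ByteBuffer buf = cached.duplicate();
    buf.limit(buf.limit() - METADATA_SPACE).rewind();
    ByteBuffer payload = buf.slice();                    // the bytes the block would 'own' (reuse case)
    buf.position(buf.limit());
    buf.limit(buf.limit() + METADATA_SPACE);
    boolean usesChecksum = buf.get() == (byte) 1;
    long offset = buf.getLong();
    int nextBlockOnDiskSize = buf.getInt();
    System.out.println(payload.remaining() + " payload bytes, usesChecksum=" + usesChecksum
        + ", offset=" + offset + ", nextBlockOnDiskSize=" + nextBlockOnDiskSize);
  }
}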

[44/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
index ae13b31..fa95c11 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileBlock.FSReaderImpl.html
@@ -117,7 +117,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-static class HFileBlock.FSReaderImpl
+static class HFileBlock.FSReaderImpl
 extends https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
 implements HFileBlock.FSReader
 Reads version 2 HFile blocks from the filesystem.
@@ -376,7 +376,7 @@ implements 
 
 streamWrapper
-privateFSDataInputStreamWrapper streamWrapper
+privateFSDataInputStreamWrapper streamWrapper
 The file system stream of the underlying HFile that
  does or doesn't do checksum validations in the filesystem
 
@@ -387,7 +387,7 @@ implements 
 
 encodedBlockDecodingCtx
-privateHFileBlockDecodingContext encodedBlockDecodingCtx
+privateHFileBlockDecodingContext encodedBlockDecodingCtx
 
 
 
@@ -396,7 +396,7 @@ implements 
 
 defaultDecodingCtx
-private finalHFileBlockDefaultDecodingContext defaultDecodingCtx
+private finalHFileBlockDefaultDecodingContext defaultDecodingCtx
 Default context used when BlockType != BlockType.ENCODED_DATA.
 
 
@@ -406,7 +406,7 @@ implements 
 
 prefetchedHeader
-privatehttps://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicReference.html?is-external=true;
 title="class or interface in 
java.util.concurrent.atomic">AtomicReferenceHFileBlock.PrefetchedHeader prefetchedHeader
+privatehttps://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicReference.html?is-external=true;
 title="class or interface in 
java.util.concurrent.atomic">AtomicReferenceHFileBlock.PrefetchedHeader prefetchedHeader
Cache of the NEXT header after this. Check it is indeed the next block's header
 before using it. TODO: Review. This overread into the next block to fetch the
 next block's header seems unnecessary given we usually get the block size
@@ -419,7 +419,7 @@ implements 
 
 fileSize
-privatelong fileSize
+privatelong fileSize
 The size of the file we are reading from, or -1 if 
unknown.
 
 
@@ -429,7 +429,7 @@ implements 
 
 hdrSize
-protected finalint hdrSize
+protected finalint hdrSize
 The size of the header
 
 
@@ -439,7 +439,7 @@ implements 
 
 hfs
-privateHFileSystem hfs
+privateHFileSystem hfs
 The filesystem used to access data
 
 
@@ -449,7 +449,7 @@ implements 
 
 fileContext
-privateHFileContext fileContext
+privateHFileContext fileContext
 
 
 
@@ -458,7 +458,7 @@ implements 
 
 pathName
-privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String pathName
+privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String pathName
 
 
 
@@ -467,7 +467,7 @@ implements 
 
 streamLock
-private finalhttps://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/Lock.html?is-external=true;
 title="class or interface in java.util.concurrent.locks">Lock streamLock
+private finalhttps://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/Lock.html?is-external=true;
 title="class or interface in java.util.concurrent.locks">Lock streamLock
 
 
 
@@ -484,7 +484,7 @@ implements 
 
 FSReaderImpl
-FSReaderImpl(FSDataInputStreamWrapperstream,
+FSReaderImpl(FSDataInputStreamWrapperstream,
  longfileSize,
  HFileSystemhfs,
  org.apache.hadoop.fs.Pathpath,
@@ -502,7 +502,7 @@ implements 
 
 FSReaderImpl
-FSReaderImpl(org.apache.hadoop.fs.FSDataInputStreamistream,
+FSReaderImpl(org.apache.hadoop.fs.FSDataInputStreamistream,
  longfileSize,
  HFileContextfileContext)
   throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
@@ -528,7 +528,7 @@ implements 
 
 blockRange
-publicHFileBlock.BlockIteratorblockRange(longstartOffset,
+publicHFileBlock.BlockIteratorblockRange(longstartOffset,
longendOffset)
 Description copied from 
interface:HFileBlock.FSReader
 Creates a block iterator over the given portion of the HFile.
@@ -553,7 +553,7 @@ implements 
 
 readAtOffset
-protectedintreadAtOffset(org.apache.hadoop.fs.FSDataInputStreamistream,
+protectedintreadAtOffset(org.apache.hadoop.fs.FSDataInputStreamistream,
byte[]dest,
intdestOffset,
intsize,
@@ -587,7 +587,7 @@ implements 
 
 readBlockData
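
The prefetchedHeader field documented above is an opportunistic cache: when the reader
over-reads past the end of one block it keeps the next block's header, and the following
read first checks that the cached header's offset really matches the block it is about to
read before trusting it, falling back to a normal read otherwise. The sketch below shows
that check-then-fallback pattern in isolation; the PrefetchedHeader holder, headerFor(),
remember() and readHeaderFromStream() are illustrative names, not the actual FSReaderImpl
members.

import java.util.concurrent.atomic.AtomicReference;

public class PrefetchedHeaderSketch {
  /** Stand-in for HFileBlock.PrefetchedHeader: the block offset plus the header bytes read ahead. */
  static final class PrefetchedHeader {
    final long offset;
    final byte[] header;
    PrefetchedHeader(long offset, byte[] header) { this.offset = offset; this.header = header; }
  }

  private final AtomicReference<PrefetchedHeader> prefetchedHeader = new AtomicReference<>();

  /** Use the cached header only if it really belongs to the block starting at {@code offset}. */
  byte[] headerFor(long offset) {
    PrefetchedHeader cached = prefetchedHeader.get();
    if (cached != null && cached.offset == offset) {
      return cached.header;               // hit: the over-read done for the previous block pays off
    }
    return readHeaderFromStream(offset);  // miss: fall back to an explicit read
  }

  /** After reading a block, remember the next block's header that was over-read along with it. */
  void remember(long nextBlockOffset, byte[] nextHeader) {
    prefetchedHeader.set(new PrefetchedHeader(nextBlockOffset, nextHeader));
  }

  private byte[] readHeaderFromStream(long offset) {
    return new byte[0];                   // placeholder; a real reader would hit the FSDataInputStream here
  }
}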

[05/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testapidocs/src-html/org/apache/hadoop/hbase/MiniHBaseCluster.html
--
diff --git a/testapidocs/src-html/org/apache/hadoop/hbase/MiniHBaseCluster.html 
b/testapidocs/src-html/org/apache/hadoop/hbase/MiniHBaseCluster.html
index af2e5b1..fe2a7c8 100644
--- a/testapidocs/src-html/org/apache/hadoop/hbase/MiniHBaseCluster.html
+++ b/testapidocs/src-html/org/apache/hadoop/hbase/MiniHBaseCluster.html
@@ -32,889 +32,915 @@
 024import java.util.HashSet;
 025import java.util.List;
 026import java.util.Set;
-027import 
org.apache.hadoop.conf.Configuration;
-028import org.apache.hadoop.fs.FileSystem;
-029import 
org.apache.hadoop.hbase.master.HMaster;
-030import 
org.apache.hadoop.hbase.regionserver.HRegion;
-031import 
org.apache.hadoop.hbase.regionserver.HRegion.FlushResult;
-032import 
org.apache.hadoop.hbase.regionserver.HRegionServer;
-033import 
org.apache.hadoop.hbase.regionserver.Region;
-034import 
org.apache.hadoop.hbase.security.User;
-035import 
org.apache.hadoop.hbase.test.MetricsAssertHelper;
-036import 
org.apache.hadoop.hbase.util.JVMClusterUtil;
-037import 
org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread;
-038import 
org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;
-039import 
org.apache.hadoop.hbase.util.Threads;
-040import 
org.apache.yetus.audience.InterfaceAudience;
-041import org.slf4j.Logger;
-042import org.slf4j.LoggerFactory;
-043
-044import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-045import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService;
-046import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService;
-047import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionServerStartupResponse;
-048
-049/**
-050 * This class creates a single process HBase cluster. One thread is created for
-051 * each server.  The master uses the 'default' FileSystem.  The RegionServers,
-052 * if we are running on DistributedFilesystem, create a FileSystem instance
-053 * each and will close down their instance on the way out.
-054 */
-055@InterfaceAudience.Public
-056public class MiniHBaseCluster extends 
HBaseCluster {
-057  private static final Logger LOG = 
LoggerFactory.getLogger(MiniHBaseCluster.class.getName());
-058  public LocalHBaseCluster 
hbaseCluster;
-059  private static int index;
-060
-061  /**
-062   * Start a MiniHBaseCluster.
-063   * @param conf Configuration to be used 
for cluster
-064   * @param numRegionServers initial 
number of region servers to start.
-065   * @throws IOException
-066   */
-067  public MiniHBaseCluster(Configuration 
conf, int numRegionServers)
-068  throws IOException, 
InterruptedException {
-069this(conf, 1, numRegionServers);
-070  }
-071
-072  /**
-073   * Start a MiniHBaseCluster.
-074   * @param conf Configuration to be used 
for cluster
-075   * @param numMasters initial number of 
masters to start.
-076   * @param numRegionServers initial 
number of region servers to start.
-077   * @throws IOException
-078   */
-079  public MiniHBaseCluster(Configuration 
conf, int numMasters, int numRegionServers)
-080  throws IOException, 
InterruptedException {
-081this(conf, numMasters, 
numRegionServers, null, null);
-082  }
-083
-084  /**
-085   * Start a MiniHBaseCluster.
-086   * @param conf Configuration to be used 
for cluster
-087   * @param numMasters initial number of 
masters to start.
-088   * @param numRegionServers initial 
number of region servers to start.
-089   */
-090  public MiniHBaseCluster(Configuration 
conf, int numMasters, int numRegionServers,
-091 Class<? extends HMaster> masterClass,
-092 Class<? extends MiniHBaseCluster.MiniHBaseClusterRegionServer> regionserverClass)
-093  throws IOException, 
InterruptedException {
-094this(conf, numMasters, 
numRegionServers, null, masterClass, regionserverClass);
-095  }
-096
-097  /**
-098   * @param rsPorts Ports that 
RegionServer should use; pass ports if you want to test cluster
-099   *   restart where for sure the 
regionservers come up on same address+port (but
-100   *   just with different startcode); by 
default mini hbase clusters choose new
-101   *   arbitrary ports on each cluster 
start.
-102   * @throws IOException
-103   * @throws InterruptedException
-104   */
-105  public MiniHBaseCluster(Configuration 
conf, int numMasters, int numRegionServers,
-106 List<Integer> rsPorts,
-107 Class<? extends HMaster> masterClass,
-108 Class<? extends MiniHBaseCluster.MiniHBaseClusterRegionServer> regionserverClass)
-109  throws IOException, 
InterruptedException {
-110super(conf);
-111
-112// Hadoop 2
-113
CompatibilityFactory.getInstance(MetricsAssertHelper.class).init();
-114
-115init(numMasters, numRegionServers, 
rsPorts, masterClass, regionserverClass);
-116
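
The constructors listed above make MiniHBaseCluster straightforward to drive from a test:
pass a Configuration plus the number of masters and region servers, and every server runs
inside the current JVM. A rough usage sketch follows; in practice most tests go through
HBaseTestingUtility.startMiniCluster(), which prepares hbase.rootdir, ZooKeeper and the
mini DFS first, so this sketch assumes that environment has already been set up.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MiniHBaseCluster;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // One master plus two region servers, all started as threads in this process.
    MiniHBaseCluster cluster = new MiniHBaseCluster(conf, 1, 2);
    try {
      System.out.println("active master: " + cluster.getMaster().getServerName());
    } finally {
      cluster.shutdown();
    }
  }
}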

[07/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/plugins.html
--
diff --git a/plugins.html b/plugins.html
index 133655d..d889203 100644
--- a/plugins.html
+++ b/plugins.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Project Plugins
 
@@ -375,7 +375,7 @@
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-08-01
+  Last Published: 
2018-08-02
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/poweredbyhbase.html
--
diff --git a/poweredbyhbase.html b/poweredbyhbase.html
index a8bae52..7a002ac 100644
--- a/poweredbyhbase.html
+++ b/poweredbyhbase.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Powered By Apache HBase™
 
@@ -769,7 +769,7 @@ under the License. -->
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-08-01
+  Last Published: 
2018-08-02
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/project-info.html
--
diff --git a/project-info.html b/project-info.html
index 95c4bbe..c3c3dc8 100644
--- a/project-info.html
+++ b/project-info.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Project Information
 
@@ -335,7 +335,7 @@
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-08-01
+  Last Published: 
2018-08-02
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/project-reports.html
--
diff --git a/project-reports.html b/project-reports.html
index ed5b94c..c891241 100644
--- a/project-reports.html
+++ b/project-reports.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Generated Reports
 
@@ -305,7 +305,7 @@
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-08-01
+  Last Published: 
2018-08-02
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/project-summary.html
--
diff --git a/project-summary.html b/project-summary.html
index d2d95c0..fd407ec 100644
--- a/project-summary.html
+++ b/project-summary.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Project Summary
 
@@ -331,7 +331,7 @@
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-08-01
+  Last Published: 
2018-08-02
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/pseudo-distributed.html
--
diff --git a/pseudo-distributed.html b/pseudo-distributed.html
index 8263a90..66f093c 100644
--- a/pseudo-distributed.html
+++ b/pseudo-distributed.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase   
 Running Apache HBase (TM) in pseudo-distributed mode
@@ -308,7 +308,7 @@ under the License. -->
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-08-01
+  Last Published: 
2018-08-02
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/replication.html
--
diff --git a/replication.html b/replication.html
index cca75ea..c69b6039 100644
--- a/replication.html
+++ b/replication.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  
   Apache HBase (TM) Replication
@@ -303,7 +303,7 @@ under the License. -->
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-08-01
+  Last Published: 
2018-08-02
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/resources.html
--

[02/51] [partial] hbase-site git commit: Published site at 613d831429960348dc42c3bdb6ea5d31be15c81c.

2018-08-02 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7cf6034b/testdevapidocs/org/apache/hadoop/hbase/HBaseClusterManager.RemoteShell.html
--
diff --git 
a/testdevapidocs/org/apache/hadoop/hbase/HBaseClusterManager.RemoteShell.html 
b/testdevapidocs/org/apache/hadoop/hbase/HBaseClusterManager.RemoteShell.html
index 1e10092..415dcdd 100644
--- 
a/testdevapidocs/org/apache/hadoop/hbase/HBaseClusterManager.RemoteShell.html
+++ 
b/testdevapidocs/org/apache/hadoop/hbase/HBaseClusterManager.RemoteShell.html
@@ -127,7 +127,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-protected class HBaseClusterManager.RemoteShell
+protected class HBaseClusterManager.RemoteShell
 extends org.apache.hadoop.util.Shell.ShellCommandExecutor
 Executes commands over SSH
 
@@ -291,7 +291,7 @@ extends 
org.apache.hadoop.util.Shell.ShellCommandExecutor
 
 
 hostname
-privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String hostname
+privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String hostname
 
 
 
@@ -300,7 +300,7 @@ extends 
org.apache.hadoop.util.Shell.ShellCommandExecutor
 
 
 user
-privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String user
+privatehttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String user
 
 
 
@@ -317,7 +317,7 @@ extends 
org.apache.hadoop.util.Shell.ShellCommandExecutor
 
 
 RemoteShell
-publicRemoteShell(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
+publicRemoteShell(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String[]execString,
https://docs.oracle.com/javase/8/docs/api/java/io/File.html?is-external=true;
 title="class or interface in java.io">Filedir,
https://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">Maphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringenv,
@@ -330,7 +330,7 @@ extends 
org.apache.hadoop.util.Shell.ShellCommandExecutor
 
 
 RemoteShell
-publicRemoteShell(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
+publicRemoteShell(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String[]execString,
https://docs.oracle.com/javase/8/docs/api/java/io/File.html?is-external=true;
 title="class or interface in java.io">Filedir,
https://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">Maphttps://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringenv)
@@ -342,7 +342,7 @@ extends 
org.apache.hadoop.util.Shell.ShellCommandExecutor
 
 
 RemoteShell
-publicRemoteShell(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
+publicRemoteShell(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String[]execString,
https://docs.oracle.com/javase/8/docs/api/java/io/File.html?is-external=true;
 title="class or interface in java.io">Filedir)
 
@@ -353,7 +353,7 @@ extends 
org.apache.hadoop.util.Shell.ShellCommandExecutor
 
 
 RemoteShell
-publicRemoteShell(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
+publicRemoteShell(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringhostname,
  

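
RemoteShell above is essentially Hadoop's Shell.ShellCommandExecutor with the command
rewritten to run over ssh on the given hostname as the given user. The sketch below shows
the core idea with ShellCommandExecutor used directly; the user, host and remote command
are placeholders, and the real RemoteShell additionally honours configurable ssh options
and a command-tunnel format that the truncated listing above does not show.

import java.io.IOException;
import org.apache.hadoop.util.Shell.ShellCommandExecutor;

public class SshShellSketch {
  public static void main(String[] args) throws IOException {
    String user = System.getProperty("user.name");
    String hostname = "rs-1.example.com";                       // placeholder target host
    String remoteCommand = "jps";                               // placeholder remote command
    String[] execString = { "ssh", user + "@" + hostname, remoteCommand };

    ShellCommandExecutor shell = new ShellCommandExecutor(execString);
    shell.execute();                                            // blocks until the remote command exits
    System.out.println("exit=" + shell.getExitCode());
    System.out.println(shell.getOutput());
  }
}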
[1/3] hbase git commit: HBASE-20749 Update to checkstyle 8.11

2018-08-02 Thread mdrob
Repository: hbase
Updated Branches:
  refs/heads/branch-2 06a92a3d2 -> e3ab91a80
  refs/heads/branch-2.1 2e1c12ca1 -> dff5ba27c
  refs/heads/master b3e41c952 -> 613d83142


HBASE-20749 Update to checkstyle 8.11


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e3ab91a8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e3ab91a8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e3ab91a8

Branch: refs/heads/branch-2
Commit: e3ab91a800929de2ad2cb02d4cff06f0ad8202c1
Parents: 06a92a3
Author: Mike Drob 
Authored: Fri Jul 6 09:43:00 2018 -0500
Committer: Mike Drob 
Committed: Thu Aug 2 14:19:17 2018 -0500

--
 .../src/main/resources/hbase/checkstyle-suppressions.xml| 9 -
 pom.xml | 2 +-
 2 files changed, 5 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e3ab91a8/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
--
diff --git 
a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml 
b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 2673b8b..33b4f68 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -1,7 +1,7 @@
 
 http://www.puppycrawl.com/dtds/suppressions_1_0.dtd;>
+"-//Checkstyle//DTD SuppressionFilter Configuration 1.2//EN"
+"https://checkstyle.org/dtds/suppressions_1_2.dtd;>
 
+  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/e3ab91a8/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 10101b0..80567b2 100755
--- a/pom.xml
+++ b/pom.xml
@@ -1389,7 +1389,7 @@
 1.5.0-alpha.15
 3.0.0
 1.4
-8.2
+8.11
 1.6.0
 2.2.0
 1.3.9-1



[3/3] hbase git commit: HBASE-20749 Update to checkstyle 8.11

2018-08-02 Thread mdrob
HBASE-20749 Update to checkstyle 8.11


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/613d8314
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/613d8314
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/613d8314

Branch: refs/heads/master
Commit: 613d831429960348dc42c3bdb6ea5d31be15c81c
Parents: b3e41c9
Author: Mike Drob 
Authored: Fri Jul 6 09:43:00 2018 -0500
Committer: Mike Drob 
Committed: Thu Aug 2 14:27:07 2018 -0500

--
 .../src/main/resources/hbase/checkstyle-suppressions.xml | 8 +++-
 pom.xml  | 2 +-
 2 files changed, 4 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/613d8314/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
--
diff --git 
a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml 
b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 1679496..1274426 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -1,7 +1,7 @@
 
 http://www.puppycrawl.com/dtds/suppressions_1_0.dtd;>
+"-//Checkstyle//DTD SuppressionFilter Configuration 1.2//EN"
+"https://checkstyle.org/dtds/suppressions_1_2.dtd;>
 
+  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/613d8314/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 8bb0f86..d0320db 100755
--- a/pom.xml
+++ b/pom.xml
@@ -1513,7 +1513,7 @@
 1.5.0-alpha.15
 3.0.0
 1.4
-8.2
+8.11
 1.6.0
 2.2.0
 1.3.9-1



[2/3] hbase git commit: HBASE-20749 Update to checkstyle 8.11

2018-08-02 Thread mdrob
HBASE-20749 Update to checkstyle 8.11


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/dff5ba27
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/dff5ba27
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/dff5ba27

Branch: refs/heads/branch-2.1
Commit: dff5ba27c344fdd3e8fe9f30312ed2323162b5ea
Parents: 2e1c12c
Author: Mike Drob 
Authored: Fri Jul 6 09:43:00 2018 -0500
Committer: Mike Drob 
Committed: Thu Aug 2 14:19:30 2018 -0500

--
 .../src/main/resources/hbase/checkstyle-suppressions.xml| 9 -
 pom.xml | 2 +-
 2 files changed, 5 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/dff5ba27/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
--
diff --git 
a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml 
b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 2673b8b..33b4f68 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -1,7 +1,7 @@
 
 http://www.puppycrawl.com/dtds/suppressions_1_0.dtd;>
+"-//Checkstyle//DTD SuppressionFilter Configuration 1.2//EN"
+"https://checkstyle.org/dtds/suppressions_1_2.dtd;>
 
+  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/dff5ba27/pom.xml
--
diff --git a/pom.xml b/pom.xml
index e42f572..de977e1 100755
--- a/pom.xml
+++ b/pom.xml
@@ -1390,7 +1390,7 @@
 1.5.0-alpha.15
 3.0.0
 1.4
-8.2
+8.11
 1.6.0
 2.2.0
 1.3.9-1



hbase git commit: HBASE-19036 Add action in Chaos Monkey to restart Active Namenode

2018-08-02 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/branch-1 a15c44574 -> 0298c06b4


HBASE-19036 Add action in Chaos Monkey to restart Active Namenode

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0298c06b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0298c06b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0298c06b

Branch: refs/heads/branch-1
Commit: 0298c06b4f1a26ced4c13b431c9029e0d1945008
Parents: a15c445
Author: Monani Mihir 
Authored: Tue Jul 31 18:40:24 2018 +0530
Committer: tedyu 
Committed: Thu Aug 2 05:04:27 2018 -0700

--
 .../hadoop/hbase/DistributedHBaseCluster.java   | 31 +++
 .../hadoop/hbase/HBaseClusterManager.java   |  2 +
 .../hadoop/hbase/chaos/actions/Action.java  | 27 ++
 .../chaos/actions/RestartActionBaseAction.java  | 11 +++
 .../actions/RestartActiveNameNodeAction.java| 89 
 .../org/apache/hadoop/hbase/HBaseCluster.java   | 40 -
 .../apache/hadoop/hbase/MiniHBaseCluster.java   | 25 ++
 7 files changed, 223 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0298c06b/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
--
diff --git 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
index ce9ca70..b477f76 100644
--- 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
+++ 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
@@ -190,6 +190,37 @@ public class DistributedHBaseCluster extends HBaseCluster {
 waitForServiceToStop(ServiceType.HADOOP_DATANODE, serverName, timeout);
   }
 
+  @Override
+  public void startNameNode(ServerName serverName) throws IOException {
+LOG.info("Starting name node on: " + serverName.getServerName());
+clusterManager.start(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void killNameNode(ServerName serverName) throws IOException {
+LOG.info("Aborting name node on: " + serverName.getServerName());
+clusterManager.kill(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void stopNameNode(ServerName serverName) throws IOException {
+LOG.info("Stopping name node on: " + serverName.getServerName());
+clusterManager.stop(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void waitForNameNodeToStart(ServerName serverName, long timeout) 
throws IOException {
+waitForServiceToStart(ServiceType.HADOOP_NAMENODE, serverName, timeout);
+  }
+
+  @Override
+  public void waitForNameNodeToStop(ServerName serverName, long timeout) 
throws IOException {
+waitForServiceToStop(ServiceType.HADOOP_NAMENODE, serverName, timeout);
+  }
+
   private void waitForServiceToStop(ServiceType service, ServerName 
serverName, long timeout)
 throws IOException {
 LOG.info("Waiting for service: " + service + " to stop: " + 
serverName.getServerName());

http://git-wip-us.apache.org/repos/asf/hbase/blob/0298c06b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
--
diff --git 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
index a3cd73b..509940a 100644
--- a/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
+++ b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
@@ -101,6 +101,7 @@ public class HBaseClusterManager extends Configured 
implements ClusterManager {
 Configuration conf = getConf();
 switch (service) {
   case HADOOP_DATANODE:
+  case HADOOP_NAMENODE:
 return conf.get("hbase.it.clustermanager.hadoop.hdfs.user", "hdfs");
   case ZOOKEEPER_SERVER:
 return conf.get("hbase.it.clustermanager.zookeeper.user", "zookeeper");
@@ -282,6 +283,7 @@ public class HBaseClusterManager extends Configured 
implements ClusterManager {
   protected CommandProvider getCommandProvider(ServiceType service) throws 
IOException {
 switch (service) {
   case HADOOP_DATANODE:
+  case HADOOP_NAMENODE:
 return new HadoopShellCommandProvider(getConf());
   case ZOOKEEPER_SERVER:
 return new ZookeeperShellCommandProvider(getConf());

http://git-wip-us.apache.org/repos/asf/hbase/blob/0298c06b/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/Action.java
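
The hooks this commit adds to the cluster API (killNameNode, stopNameNode, startNameNode and
the matching waitFor* methods) are what the new chaos action is built on. The sketch below is
an illustrative restart helper using only those hooks, not the committed
RestartActiveNameNodeAction: the real action also discovers which NameNode is currently
active from the HA configuration and plugs into the chaos-monkey Action lifecycle, neither of
which is shown in the truncated diff above.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseCluster;
import org.apache.hadoop.hbase.ServerName;

public class RestartNameNodeSketch {
  private final HBaseCluster cluster;
  private final long timeoutMs;

  public RestartNameNodeSketch(HBaseCluster cluster, long timeoutMs) {
    this.cluster = cluster;
    this.timeoutMs = timeoutMs;
  }

  /** Kill the given NameNode, wait for it to go down, pause, then bring it back and wait for it. */
  public void restart(ServerName nameNode, long sleepMsBetween) throws IOException, InterruptedException {
    cluster.killNameNode(nameNode);
    cluster.waitForNameNodeToStop(nameNode, timeoutMs);
    Thread.sleep(sleepMsBetween);   // give HDFS a moment before restarting, as chaos restart actions usually do
    cluster.startNameNode(nameNode);
    cluster.waitForNameNodeToStart(nameNode, timeoutMs);
  }
}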

hbase git commit: HBASE-19036 Add action in Chaos Monkey to restart Active Namenode

2018-08-02 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/branch-2 690d29bae -> 06a92a3d2


HBASE-19036 Add action in Chaos Monkey to restart Active Namenode

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/06a92a3d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/06a92a3d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/06a92a3d

Branch: refs/heads/branch-2
Commit: 06a92a3d207b32968709d94639b649c274d5e79e
Parents: 690d29b
Author: Monani Mihir 
Authored: Tue Jul 31 18:44:45 2018 +0530
Committer: tedyu 
Committed: Thu Aug 2 05:00:16 2018 -0700

--
 .../hadoop/hbase/DistributedHBaseCluster.java   | 33 ++-
 .../hadoop/hbase/HBaseClusterManager.java   |  2 +
 .../hadoop/hbase/chaos/actions/Action.java  | 28 ++
 .../chaos/actions/RestartActionBaseAction.java  | 12 +++
 .../actions/RestartActiveNameNodeAction.java| 90 
 .../org/apache/hadoop/hbase/HBaseCluster.java   | 37 
 .../apache/hadoop/hbase/MiniHBaseCluster.java   | 26 ++
 7 files changed, 227 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/06a92a3d/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
--
diff --git 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
index 943f2a6..5ec9e25 100644
--- 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
+++ 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
@@ -25,6 +25,7 @@ import java.util.List;
 import java.util.Objects;
 import java.util.Set;
 import java.util.TreeSet;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.ClusterManager.ServiceType;
 import org.apache.hadoop.hbase.client.Admin;
@@ -35,7 +36,6 @@ import org.apache.hadoop.hbase.client.RegionLocator;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Threads;
 import org.apache.yetus.audience.InterfaceAudience;
-
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.ServerInfo;
@@ -204,6 +204,37 @@ public class DistributedHBaseCluster extends HBaseCluster {
 waitForServiceToStop(ServiceType.HADOOP_DATANODE, serverName, timeout);
   }
 
+  @Override
+  public void startNameNode(ServerName serverName) throws IOException {
+LOG.info("Starting name node on: " + serverName.getServerName());
+clusterManager.start(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void killNameNode(ServerName serverName) throws IOException {
+LOG.info("Aborting name node on: " + serverName.getServerName());
+clusterManager.kill(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void stopNameNode(ServerName serverName) throws IOException {
+LOG.info("Stopping name node on: " + serverName.getServerName());
+clusterManager.stop(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void waitForNameNodeToStart(ServerName serverName, long timeout) 
throws IOException {
+waitForServiceToStart(ServiceType.HADOOP_NAMENODE, serverName, timeout);
+  }
+
+  @Override
+  public void waitForNameNodeToStop(ServerName serverName, long timeout) 
throws IOException {
+waitForServiceToStop(ServiceType.HADOOP_NAMENODE, serverName, timeout);
+  }
+
   private void waitForServiceToStop(ServiceType service, ServerName 
serverName, long timeout)
 throws IOException {
 LOG.info("Waiting for service: " + service + " to stop: " + 
serverName.getServerName());

http://git-wip-us.apache.org/repos/asf/hbase/blob/06a92a3d/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
--
diff --git 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
index 884ddad..f7c2fc6 100644
--- a/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
+++ b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
@@ -101,6 +101,7 @@ public class HBaseClusterManager extends Configured 
implements ClusterManager {
 Configuration conf = getConf();
 switch (service) {
   case HADOOP_DATANODE:
+  case HADOOP_NAMENODE:
 return conf.get("hbase.it.clustermanager.hadoop.hdfs.user", "hdfs");

hbase git commit: HBASE-19036 Add action in Chaos Monkey to restart Active Namenode

2018-08-02 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/master 78164efcf -> b3e41c952


HBASE-19036 Add action in Chaos Monkey to restart Active Namenode

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b3e41c95
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b3e41c95
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b3e41c95

Branch: refs/heads/master
Commit: b3e41c9525f0f8537b87bb7bf923cf74c31ee585
Parents: 78164ef
Author: Monani Mihir 
Authored: Tue Jul 31 18:44:45 2018 +0530
Committer: tedyu 
Committed: Thu Aug 2 04:59:51 2018 -0700

--
 .../hadoop/hbase/DistributedHBaseCluster.java   | 33 ++-
 .../hadoop/hbase/HBaseClusterManager.java   |  2 +
 .../hadoop/hbase/chaos/actions/Action.java  | 28 ++
 .../chaos/actions/RestartActionBaseAction.java  | 12 +++
 .../actions/RestartActiveNameNodeAction.java| 90 
 .../org/apache/hadoop/hbase/HBaseCluster.java   | 37 
 .../apache/hadoop/hbase/MiniHBaseCluster.java   | 26 ++
 7 files changed, 227 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b3e41c95/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
--
diff --git 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
index 943f2a6..5ec9e25 100644
--- 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
+++ 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
@@ -25,6 +25,7 @@ import java.util.List;
 import java.util.Objects;
 import java.util.Set;
 import java.util.TreeSet;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.ClusterManager.ServiceType;
 import org.apache.hadoop.hbase.client.Admin;
@@ -35,7 +36,6 @@ import org.apache.hadoop.hbase.client.RegionLocator;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Threads;
 import org.apache.yetus.audience.InterfaceAudience;
-
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.ServerInfo;
@@ -204,6 +204,37 @@ public class DistributedHBaseCluster extends HBaseCluster {
 waitForServiceToStop(ServiceType.HADOOP_DATANODE, serverName, timeout);
   }
 
+  @Override
+  public void startNameNode(ServerName serverName) throws IOException {
+LOG.info("Starting name node on: " + serverName.getServerName());
+clusterManager.start(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void killNameNode(ServerName serverName) throws IOException {
+LOG.info("Aborting name node on: " + serverName.getServerName());
+clusterManager.kill(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void stopNameNode(ServerName serverName) throws IOException {
+LOG.info("Stopping name node on: " + serverName.getServerName());
+clusterManager.stop(ServiceType.HADOOP_NAMENODE, serverName.getHostname(),
+  serverName.getPort());
+  }
+
+  @Override
+  public void waitForNameNodeToStart(ServerName serverName, long timeout) 
throws IOException {
+waitForServiceToStart(ServiceType.HADOOP_NAMENODE, serverName, timeout);
+  }
+
+  @Override
+  public void waitForNameNodeToStop(ServerName serverName, long timeout) 
throws IOException {
+waitForServiceToStop(ServiceType.HADOOP_NAMENODE, serverName, timeout);
+  }
+
   private void waitForServiceToStop(ServiceType service, ServerName 
serverName, long timeout)
 throws IOException {
 LOG.info("Waiting for service: " + service + " to stop: " + 
serverName.getServerName());

http://git-wip-us.apache.org/repos/asf/hbase/blob/b3e41c95/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
--
diff --git 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
index 884ddad..f7c2fc6 100644
--- a/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
+++ b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
@@ -101,6 +101,7 @@ public class HBaseClusterManager extends Configured 
implements ClusterManager {
 Configuration conf = getConf();
 switch (service) {
   case HADOOP_DATANODE:
+  case HADOOP_NAMENODE:
 return conf.get("hbase.it.clustermanager.hadoop.hdfs.user", "hdfs");
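
With HADOOP_NAMENODE folded into the same case as HADOOP_DATANODE, both HDFS services now run
their start/stop commands as the user named by hbase.it.clustermanager.hadoop.hdfs.user
(default "hdfs"). The sketch below just shows that lookup in isolation; the overriding value
is an example, not a recommendation.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClusterManagerUserSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.it.clustermanager.hadoop.hdfs.user", "hadoop");   // override the default user
    String hdfsUser = conf.get("hbase.it.clustermanager.hadoop.hdfs.user", "hdfs");
    System.out.println("NameNode/DataNode commands will run as: " + hdfsUser);
  }
}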