[2/2] hbase git commit: HBASE-19009 implement modifyTable and enable/disableTableReplication for AsyncAdmin

2017-11-15 Thread zghao
HBASE-19009 implement modifyTable and enable/disableTableReplication for AsyncAdmin


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/600fdee8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/600fdee8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/600fdee8

Branch: refs/heads/master
Commit: 600fdee8449aa1de80c8a78d3bb5e8551d3a0261
Parents: d89682e
Author: Guanghao Zhang 
Authored: Sun Nov 12 20:16:20 2017 +0800
Committer: Guanghao Zhang 
Committed: Thu Nov 16 07:07:20 2017 +0800

--
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  18 +
 .../hadoop/hbase/client/AsyncHBaseAdmin.java|  17 +-
 .../hbase/client/ColumnFamilyDescriptor.java|  27 ++
 .../apache/hadoop/hbase/client/HBaseAdmin.java  | 220 ++---
 .../hadoop/hbase/client/RawAsyncHBaseAdmin.java | 313 -
 .../hadoop/hbase/client/TableDescriptor.java|  51 +-
 .../hbase/client/TableDescriptorBuilder.java|  21 +-
 .../client/replication/ReplicationAdmin.java|   8 +-
 .../replication/ReplicationPeerConfigUtil.java  | 468 +++
 .../replication/ReplicationSerDeHelper.java | 437 -
 .../replication/ReplicationPeerConfig.java  |  20 +
 .../hbase/shaded/protobuf/RequestConverter.java |   6 +-
 .../replication/ReplicationPeerZKImpl.java  |   6 +-
 .../replication/ReplicationPeersZKImpl.java |  14 +-
 .../hadoop/hbase/master/MasterRpcServices.java  |  10 +-
 .../replication/master/TableCFsUpdater.java |  14 +-
 .../client/TestAsyncReplicationAdminApi.java|   2 -
 ...estAsyncReplicationAdminApiWithClusters.java | 242 ++
 .../replication/TestReplicationAdmin.java   |  16 +-
 .../replication/TestMasterReplication.java  |   4 +-
 .../replication/TestPerTableCFReplication.java  |  62 +--
 .../replication/master/TestTableCFsUpdater.java |  27 +-
 22 files changed, 1261 insertions(+), 742 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/600fdee8/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
index f251a8f..722e8b5 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
@@ -141,6 +141,12 @@ public interface AsyncAdmin {
   */
  CompletableFuture<Void> createTable(TableDescriptor desc, byte[][] splitKeys);
 
+  /**
+   * Modify an existing table, more IRB friendly version.
+   * @param desc modified description of the table
+   */
+  CompletableFuture<Void> modifyTable(TableDescriptor desc);
+
   /**
* Deletes a table.
* @param tableName name of table to delete
@@ -553,6 +559,18 @@ public interface AsyncAdmin {
  CompletableFuture<List<TableCFs>> listReplicatedTableCFs();
 
   /**
+   * Enable a table's replication switch.
+   * @param tableName name of the table
+   */
+  CompletableFuture<Void> enableTableReplication(TableName tableName);
+
+  /**
+   * Disable a table's replication switch.
+   * @param tableName name of the table
+   */
+  CompletableFuture<Void> disableTableReplication(TableName tableName);
+
+  /**
   * Take a snapshot for the given table. If the table is enabled, a FLUSH-type snapshot will be
   * taken. If the table is disabled, an offline snapshot is taken. Snapshots are considered unique
   * based on the name of the snapshot. Attempts to take a snapshot with the same name (even

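The new AsyncAdmin methods all return `CompletableFuture`, so table modification and replication toggling can be chained without blocking a caller thread. A minimal, self-contained sketch of that calling style (the two stub methods below are stand-ins for the real AsyncAdmin calls, not HBase code):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class AsyncAdminSketch {
    // Records completed steps so the chaining order is observable.
    static final List<String> LOG = new CopyOnWriteArrayList<>();

    // Stand-in for AsyncAdmin.modifyTable(TableDescriptor): a
    // CompletableFuture<Void> that completes when the "modify" is done.
    static CompletableFuture<Void> modifyTable(String table) {
        return CompletableFuture.runAsync(() -> LOG.add("modify:" + table));
    }

    // Stand-in for AsyncAdmin.enableTableReplication(TableName).
    static CompletableFuture<Void> enableTableReplication(String table) {
        return CompletableFuture.runAsync(() -> LOG.add("replicate:" + table));
    }

    // Chain the two operations: replication is enabled only after the
    // modify future completes, with no thread blocked in between.
    static CompletableFuture<Void> modifyThenReplicate(String table) {
        return modifyTable(table).thenCompose(v -> enableTableReplication(table));
    }

    public static void main(String[] args) {
        modifyThenReplicate("t1").join(); // block only at the very end
        System.out.println(LOG);
    }
}
```

The same `thenCompose` pattern applies to any pair of AsyncAdmin futures; only the terminal `join()` (typically in a test) blocks.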
http://git-wip-us.apache.org/repos/asf/hbase/blob/600fdee8/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
index 250a38c..5a20291 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
@@ -128,6 +128,11 @@ public class AsyncHBaseAdmin implements AsyncAdmin {
   }
 
   @Override
+  public CompletableFuture<Void> modifyTable(TableDescriptor desc) {
+    return wrap(rawAdmin.modifyTable(desc));
+  }
+
+  @Override
   public CompletableFuture<Void> deleteTable(TableName tableName) {
     return wrap(rawAdmin.deleteTable(tableName));
   }
@@ -420,6 +425,16 @@ public class AsyncHBaseAdmin implements AsyncAdmin {
   }
 
   @Override
+  public CompletableFuture<Void> enableTableReplication(TableName tableName) {
+    return wrap(rawAdmin.enableTableReplication(tableName));

[1/2] hbase git commit: HBASE-19009 implement modifyTable and enable/disableTableReplication for AsyncAdmin

2017-11-15 Thread zghao
Repository: hbase
Updated Branches:
  refs/heads/master d89682ea9 -> 600fdee84


http://git-wip-us.apache.org/repos/asf/hbase/blob/600fdee8/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
--
diff --git a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
index 2de61cb..8f09479 100644
--- a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
+++ b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
@@ -30,14 +30,14 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Abortable;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
+import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
 import org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker;
 import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.KeeperException.NodeExistsException;
 
@@ -114,7 +114,7 @@ public class ReplicationPeerZKImpl extends ReplicationStateZKBase
 try {
   byte[] data = peerConfigTracker.getData(false);
   if (data != null) {
-this.peerConfig = ReplicationSerDeHelper.parsePeerFrom(data);
+this.peerConfig = ReplicationPeerConfigUtil.parsePeerFrom(data);
   }
 } catch (DeserializationException e) {
   LOG.error("", e);

http://git-wip-us.apache.org/repos/asf/hbase/blob/600fdee8/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
--
diff --git a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
index 0f39b2a..cc84c1d 100644
--- a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
+++ b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
@@ -35,8 +35,7 @@ import org.apache.hadoop.hbase.Abortable;
 import org.apache.hadoop.hbase.CompoundConfiguration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
+import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos;
 import org.apache.hadoop.hbase.replication.ReplicationPeer.PeerState;
@@ -46,6 +45,7 @@ import org.apache.hadoop.hbase.zookeeper.ZKUtil;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil.ZKUtilOp;
 import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
 import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.zookeeper.KeeperException;
 
 /**
@@ -131,7 +131,7 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
 
      List<ZKUtilOp> listOfOps = new ArrayList<>(2);
   ZKUtilOp op1 = ZKUtilOp.createAndFailSilent(getPeerNode(id),
-ReplicationSerDeHelper.toByteArray(peerConfig));
+ReplicationPeerConfigUtil.toByteArray(peerConfig));
   // b/w PeerWatcher and ReplicationZookeeper#add method to create the
   // peer-state znode. This happens while adding a peer
   // The peer state data is set as "ENABLED" by default.
@@ -206,9 +206,9 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
   }
   rpc.setTableCFsMap(tableCFs);
   ZKUtil.setData(this.zookeeper, getPeerNode(id),
-  ReplicationSerDeHelper.toByteArray(rpc));
+  ReplicationPeerConfigUtil.toByteArray(rpc));
   LOG.info("Peer tableCFs with id= " + id + " is now " +
-ReplicationSerDeHelper.convertToString(tableCFs));
+ReplicationPeerConfigUtil.convertToString(tableCFs));
 } catch (KeeperException e) {
      throw new ReplicationException("Unable to change tableCFs of the peer with id=" + id, e);
 }
@@ -303,7 +303,7 @@ public class ReplicationPeersZKImpl extends 

hbase git commit: HBASE-19215 Incorrect exception handling on the client causes incorrect call timeouts and byte buffer allocations on the server

2017-11-15 Thread apurtell
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 041fbe71b -> 0cc34b8f6


HBASE-19215 Incorrect exception handling on the client causes incorrect call timeouts and byte buffer allocations on the server

Signed-off-by: Andrew Purtell 
Amending-Author: Andrew Purtell 

Conflicts:

hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0cc34b8f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0cc34b8f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0cc34b8f

Branch: refs/heads/branch-1.2
Commit: 0cc34b8f620e91cbd3fba53a7f3186b8d830c851
Parents: 041fbe7
Author: Abhishek Singh Chouhan 
Authored: Mon Nov 13 17:16:31 2017 +0530
Committer: Andrew Purtell 
Committed: Wed Nov 15 14:43:03 2017 -0800

--
 .../src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java   | 8 
 .../main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java | 7 +--
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0cc34b8f/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java
index 67682f8..4189b85 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java
@@ -315,4 +315,12 @@ public class IPCUtil {
 Preconditions.checkArgument(totalSize < Integer.MAX_VALUE);
 return totalSize;
   }
+
+  static IOException toIOE(Throwable t) {
+    if (t instanceof IOException) {
+      return (IOException) t;
+    } else {
+      return new IOException(t);
+    }
+  }
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/0cc34b8f/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
index 0260176..647e917 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
@@ -921,11 +921,14 @@ public class RpcClientImpl extends AbstractRpcClient {
 try {
        call.callStats.setRequestSizeBytes(IPCUtil.write(this.out, header, call.param,
            cellBlock));
-      } catch (IOException e) {
+      } catch (Throwable t) {
+        if (LOG.isTraceEnabled()) {
+          LOG.trace("Error while writing call, call_id:" + call.id, t);
+        }
         // We set the value inside the synchronized block, this way the next in line
         //  won't even try to write. Otherwise we might miss a call in the calls map?
         shouldCloseConnection.set(true);
-        writeException = e;
+        writeException = IPCUtil.toIOE(t);
         interrupt();
       }
     }
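The widened `catch (Throwable t)` above only works because `IPCUtil.toIOE` normalizes whatever was thrown into a single `IOException`. A standalone copy of that helper shows the behavior:

```java
import java.io.IOException;

public class ToIOEDemo {
    // Mirrors IPCUtil.toIOE from the diff above: IOExceptions pass through
    // unchanged; anything else (RuntimeException, Error, ...) is wrapped so
    // the writer thread can store one checked exception type in writeException.
    static IOException toIOE(Throwable t) {
        if (t instanceof IOException) {
            return (IOException) t;
        } else {
            return new IOException(t);
        }
    }

    public static void main(String[] args) {
        IOException io = new IOException("disk full");
        System.out.println(toIOE(io) == io);  // pass-through: same instance
        IOException wrapped = toIOE(new OutOfMemoryError("direct buffer"));
        System.out.println(wrapped.getCause() instanceof OutOfMemoryError);
    }
}
```

This is why the fix matters: before, a non-IOException thrown during the write (e.g. a buffer-allocation error) escaped the handler, so the connection was never marked for close and callers timed out instead.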



hbase git commit: HBASE-19235 CoprocessorEnvironment should be exposed to CPs.

2017-11-15 Thread anoopsamjohn
Repository: hbase
Updated Branches:
  refs/heads/master 249bc09d8 -> 7d7048744


HBASE-19235 CoprocessorEnvironment should be exposed to CPs.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7d704874
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7d704874
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7d704874

Branch: refs/heads/master
Commit: 7d704874423fbb387ef3251db220d5d2455e343d
Parents: 249bc09
Author: anoopsamjohn 
Authored: Wed Nov 15 14:49:42 2017 +0530
Committer: anoopsamjohn 
Committed: Wed Nov 15 14:49:42 2017 +0530

--
 .../hadoop/hbase/CoprocessorEnvironment.java| 15 ++
 .../hbase/coprocessor/BaseEnvironment.java  |  2 -
 .../hbase/coprocessor/CoprocessorHost.java  |  6 ++-
 .../hbase/coprocessor/TestCoprocessorHost.java  | 51 ++--
 .../security/token/TestTokenAuthentication.java |  6 ---
 5 files changed, 10 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7d704874/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
index 4022b4b..418d624 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
@@ -19,15 +19,15 @@
 
 package org.apache.hadoop.hbase;
 
-import java.io.IOException;
-
 import org.apache.hadoop.conf.Configuration;
 import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
 
 /**
  * Coprocessor environment state.
  */
-@InterfaceAudience.Private
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
+@InterfaceStability.Evolving
 public interface CoprocessorEnvironment {
 
   /** @return the Coprocessor interface version */
@@ -52,13 +52,4 @@ public interface CoprocessorEnvironment {
* @return the classloader for the loaded coprocessor instance
*/
   ClassLoader getClassLoader();
-
-  /**
-   * After a coprocessor has been loaded in an encapsulation of an environment, CoprocessorHost
-   * calls this function to initialize the environment.
-   */
-  void startup() throws IOException;
-
-  /** Clean up the environment. Called by CoprocessorHost when it itself is shutting down. */
-  void shutdown();
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/7d704874/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
index 32cef9e..ebbca65 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
@@ -61,7 +61,6 @@ public class BaseEnvironment implements CoprocessorEnviro
   }
 
   /** Initialize the environment */
-  @Override
   public void startup() throws IOException {
 if (state == Coprocessor.State.INSTALLED ||
 state == Coprocessor.State.STOPPED) {
@@ -82,7 +81,6 @@ public class BaseEnvironment implements CoprocessorEnviro
   }
 
   /** Clean up the environment */
-  @Override
   public void shutdown() {
 if (state == Coprocessor.State.ACTIVE) {
   state = Coprocessor.State.STOPPING;

http://git-wip-us.apache.org/repos/asf/hbase/blob/7d704874/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
index 18210d6..61c71cb 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
@@ -260,7 +260,8 @@ public abstract class CoprocessorHost
+      ((BaseEnvironment<C>) env).startup();
       // HBASE-4014: maintain list of loaded coprocessors for later crash analysis
       // if server (master or regionserver) aborts.
       coprocessorNames.add(implClass.getName());
@@ -283,10 +284,11 @@ public abstract class CoprocessorHost
+    ((BaseEnvironment<C>) e).shutdown();
   }
 
   /**

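The net effect of HBASE-19235 is to narrow what coprocessors can see: `startup()` and `shutdown()` move off the now `LimitedPrivate` interface onto the host-side `BaseEnvironment`. A simplified sketch of that split (names echo the diff, bodies are illustrative only, not HBase code):

```java
// What coprocessors are handed: read-only state, no lifecycle control.
interface CoprocessorEnvironmentSketch {
    int getVersion();
}

// The host-side implementation keeps the lifecycle methods, which are
// no longer @Override since they left the interface.
class BaseEnvironmentSketch implements CoprocessorEnvironmentSketch {
    private boolean active;

    @Override
    public int getVersion() { return 1; }

    void startup()  { active = true;  }  // invoked only by the CoprocessorHost
    void shutdown() { active = false; }
    boolean isActive() { return active; }
}

public class CoprocEnvDemo {
    public static void main(String[] args) {
        BaseEnvironmentSketch env = new BaseEnvironmentSketch();
        env.startup();                          // host manages the lifecycle...
        CoprocessorEnvironmentSketch cpView = env;
        // ...but through the interface a coprocessor cannot call
        // startup()/shutdown(); it only sees the read-only view.
        System.out.println(cpView.getVersion());
        env.shutdown();
    }
}
```

This is why CoprocessorHost now casts to `BaseEnvironment` before calling `startup()`/`shutdown()`: the lifecycle methods exist only on the concrete class.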

hbase git commit: HBASE-19235 CoprocessorEnvironment should be exposed to CPs.

2017-11-15 Thread anoopsamjohn
Repository: hbase
Updated Branches:
  refs/heads/branch-2 2dc191485 -> a1d86d90b


HBASE-19235 CoprocessorEnvironment should be exposed to CPs.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a1d86d90
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a1d86d90
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a1d86d90

Branch: refs/heads/branch-2
Commit: a1d86d90ba9f57051223558cf7177076829871bc
Parents: 2dc1914
Author: anoopsamjohn 
Authored: Wed Nov 15 14:49:42 2017 +0530
Committer: anoopsamjohn 
Committed: Wed Nov 15 14:51:04 2017 +0530

--
 .../hadoop/hbase/CoprocessorEnvironment.java| 15 ++
 .../hbase/coprocessor/BaseEnvironment.java  |  2 -
 .../hbase/coprocessor/CoprocessorHost.java  |  6 ++-
 .../hbase/coprocessor/TestCoprocessorHost.java  | 51 ++--
 .../security/token/TestTokenAuthentication.java |  6 ---
 5 files changed, 10 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a1d86d90/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
index 4022b4b..418d624 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
@@ -19,15 +19,15 @@
 
 package org.apache.hadoop.hbase;
 
-import java.io.IOException;
-
 import org.apache.hadoop.conf.Configuration;
 import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
 
 /**
  * Coprocessor environment state.
  */
-@InterfaceAudience.Private
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
+@InterfaceStability.Evolving
 public interface CoprocessorEnvironment {
 
   /** @return the Coprocessor interface version */
@@ -52,13 +52,4 @@ public interface CoprocessorEnvironment {
* @return the classloader for the loaded coprocessor instance
*/
   ClassLoader getClassLoader();
-
-  /**
-   * After a coprocessor has been loaded in an encapsulation of an environment, CoprocessorHost
-   * calls this function to initialize the environment.
-   */
-  void startup() throws IOException;
-
-  /** Clean up the environment. Called by CoprocessorHost when it itself is shutting down. */
-  void shutdown();
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/a1d86d90/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
index 32cef9e..ebbca65 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseEnvironment.java
@@ -61,7 +61,6 @@ public class BaseEnvironment implements CoprocessorEnviro
   }
 
   /** Initialize the environment */
-  @Override
   public void startup() throws IOException {
 if (state == Coprocessor.State.INSTALLED ||
 state == Coprocessor.State.STOPPED) {
@@ -82,7 +81,6 @@ public class BaseEnvironment implements CoprocessorEnviro
   }
 
   /** Clean up the environment */
-  @Override
   public void shutdown() {
 if (state == Coprocessor.State.ACTIVE) {
   state = Coprocessor.State.STOPPING;

http://git-wip-us.apache.org/repos/asf/hbase/blob/a1d86d90/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
index 18210d6..61c71cb 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
@@ -260,7 +260,8 @@ public abstract class CoprocessorHost
+      ((BaseEnvironment<C>) env).startup();
       // HBASE-4014: maintain list of loaded coprocessors for later crash analysis
       // if server (master or regionserver) aborts.
       coprocessorNames.add(implClass.getName());
@@ -283,10 +284,11 @@ public abstract class CoprocessorHost
+    ((BaseEnvironment<C>) e).shutdown();
   }
 
   /**


[1/2] hbase git commit: HBASE-19009 implement modifyTable and enable/disableTableReplication for AsyncAdmin

2017-11-15 Thread zghao
Repository: hbase
Updated Branches:
  refs/heads/branch-2 fb79e9d4a -> d885e2232


http://git-wip-us.apache.org/repos/asf/hbase/blob/d885e223/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
--
diff --git a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
index 2de61cb..8f09479 100644
--- a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
+++ b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
@@ -30,14 +30,14 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Abortable;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
+import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
 import org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker;
 import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.KeeperException.NodeExistsException;
 
@@ -114,7 +114,7 @@ public class ReplicationPeerZKImpl extends ReplicationStateZKBase
 try {
   byte[] data = peerConfigTracker.getData(false);
   if (data != null) {
-this.peerConfig = ReplicationSerDeHelper.parsePeerFrom(data);
+this.peerConfig = ReplicationPeerConfigUtil.parsePeerFrom(data);
   }
 } catch (DeserializationException e) {
   LOG.error("", e);

http://git-wip-us.apache.org/repos/asf/hbase/blob/d885e223/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
--
diff --git a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
index 0f39b2a..cc84c1d 100644
--- a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
+++ b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
@@ -35,8 +35,7 @@ import org.apache.hadoop.hbase.Abortable;
 import org.apache.hadoop.hbase.CompoundConfiguration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.hadoop.hbase.client.replication.ReplicationSerDeHelper;
+import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos;
 import org.apache.hadoop.hbase.replication.ReplicationPeer.PeerState;
@@ -46,6 +45,7 @@ import org.apache.hadoop.hbase.zookeeper.ZKUtil;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil.ZKUtilOp;
 import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
 import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.zookeeper.KeeperException;
 
 /**
@@ -131,7 +131,7 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
 
      List<ZKUtilOp> listOfOps = new ArrayList<>(2);
   ZKUtilOp op1 = ZKUtilOp.createAndFailSilent(getPeerNode(id),
-ReplicationSerDeHelper.toByteArray(peerConfig));
+ReplicationPeerConfigUtil.toByteArray(peerConfig));
   // b/w PeerWatcher and ReplicationZookeeper#add method to create the
   // peer-state znode. This happens while adding a peer
   // The peer state data is set as "ENABLED" by default.
@@ -206,9 +206,9 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
   }
   rpc.setTableCFsMap(tableCFs);
   ZKUtil.setData(this.zookeeper, getPeerNode(id),
-  ReplicationSerDeHelper.toByteArray(rpc));
+  ReplicationPeerConfigUtil.toByteArray(rpc));
   LOG.info("Peer tableCFs with id= " + id + " is now " +
-ReplicationSerDeHelper.convertToString(tableCFs));
+ReplicationPeerConfigUtil.convertToString(tableCFs));
 } catch (KeeperException e) {
      throw new ReplicationException("Unable to change tableCFs of the peer with id=" + id, e);
 }
@@ -303,7 +303,7 @@ public class ReplicationPeersZKImpl extends 

hbase git commit: HBASE-19262 Revisit checkstyle rules

2017-11-15 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 0cc34b8f6 -> cb7e60071


HBASE-19262 Revisit checkstyle rules


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/cb7e6007
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/cb7e6007
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/cb7e6007

Branch: refs/heads/branch-1.2
Commit: cb7e600716bd5807c77852b1cbc8d86513b7a698
Parents: 0cc34b8
Author: zhangduo 
Authored: Wed Nov 15 15:38:02 2017 +0800
Committer: zhangduo 
Committed: Thu Nov 16 09:43:33 2017 +0800

--
 .../src/main/resources/hbase/checkstyle-suppressions.xml  | 3 ++-
 hbase-checkstyle/src/main/resources/hbase/checkstyle.xml  | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/cb7e6007/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 46009e9..1ecae86 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -33,5 +33,6 @@
 
   
   
-  
+  
+  
 
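The suppression entries in the diff above were stripped by the mail archiver (the `<suppress .../>` tags are gone), so the actual rules added by HBASE-19262 are not recoverable here. For orientation, a typical checkstyle-suppressions.xml entry has this shape (file pattern and check name below are illustrative, not the real ones from this commit):

```xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Puppy Crawl//DTD Suppressions 1.0//EN"
    "http://www.puppycrawl.com/dtds/suppressions_1_0.dtd">
<suppressions>
  <!-- Skip a check for generated sources, matched by file path regex -->
  <suppress checks="LineLength" files=".*[/\\]generated[/\\].*\.java"/>
</suppressions>
```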

http://git-wip-us.apache.org/repos/asf/hbase/blob/cb7e6007/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
index b423095..2240096 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
@@ -67,6 +67,7 @@
 http://checkstyle.sourceforge.net/config_imports.html -->
 
 
+  
   
   
   



hbase git commit: HBASE-19262 Revisit checkstyle rules

2017-11-15 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 565527c60 -> b84e26973


HBASE-19262 Revisit checkstyle rules


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b84e2697
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b84e2697
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b84e2697

Branch: refs/heads/branch-1.3
Commit: b84e26973f7a41509260e151c17e069789fd2ae0
Parents: 565527c
Author: zhangduo 
Authored: Wed Nov 15 15:38:02 2017 +0800
Committer: zhangduo 
Committed: Thu Nov 16 09:43:29 2017 +0800

--
 .../src/main/resources/hbase/checkstyle-suppressions.xml  | 3 ++-
 hbase-checkstyle/src/main/resources/hbase/checkstyle.xml  | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b84e2697/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 46009e9..1ecae86 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -33,5 +33,6 @@
 
   
   
-  
+  
+  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/b84e2697/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
index b423095..2240096 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
@@ -67,6 +67,7 @@
 http://checkstyle.sourceforge.net/config_imports.html -->
 
 
+  
   
   
   



hbase git commit: HBASE-19262 Revisit checkstyle rules

2017-11-15 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/branch-2 d885e2232 -> c5ad80175


HBASE-19262 Revisit checkstyle rules


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c5ad8017
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c5ad8017
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c5ad8017

Branch: refs/heads/branch-2
Commit: c5ad801754d877265ef184a19d7b619d637b06a3
Parents: d885e22
Author: zhangduo 
Authored: Wed Nov 15 15:38:02 2017 +0800
Committer: zhangduo 
Committed: Thu Nov 16 09:43:15 2017 +0800

--
 .../src/main/resources/hbase/checkstyle-suppressions.xml  | 3 ++-
 hbase-checkstyle/src/main/resources/hbase/checkstyle.xml  | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c5ad8017/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 46009e9..1ecae86 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -33,5 +33,6 @@
 
   
   
-  
+  
+  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/c5ad8017/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
index b423095..2240096 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
@@ -67,6 +67,7 @@
 http://checkstyle.sourceforge.net/config_imports.html -->
 
 
+  
   
   
   



hbase git commit: HBASE-19262 Revisit checkstyle rules

2017-11-15 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/master 600fdee84 -> d4babbf06


HBASE-19262 Revisit checkstyle rules


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d4babbf0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d4babbf0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d4babbf0

Branch: refs/heads/master
Commit: d4babbf060a99222c7ebe60ee1b0f4197411ea37
Parents: 600fdee
Author: zhangduo 
Authored: Wed Nov 15 15:38:02 2017 +0800
Committer: zhangduo 
Committed: Thu Nov 16 09:43:08 2017 +0800

--
 .../src/main/resources/hbase/checkstyle-suppressions.xml  | 3 ++-
 hbase-checkstyle/src/main/resources/hbase/checkstyle.xml  | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d4babbf0/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 46009e9..1ecae86 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -33,5 +33,6 @@
 
   
   
-  
+  
+  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/d4babbf0/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
index b423095..2240096 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
@@ -67,6 +67,7 @@
 http://checkstyle.sourceforge.net/config_imports.html -->
 
 
+  
   
   
   



hbase git commit: HBASE-19262 Revisit checkstyle rules

2017-11-15 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/branch-1.4 9a075fe73 -> 2200397fc


HBASE-19262 Revisit checkstyle rules


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2200397f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2200397f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2200397f

Branch: refs/heads/branch-1.4
Commit: 2200397fc3f9ff5da04c97269bec5b548d3485a6
Parents: 9a075fe
Author: zhangduo 
Authored: Wed Nov 15 15:38:02 2017 +0800
Committer: zhangduo 
Committed: Thu Nov 16 09:43:25 2017 +0800

--
 .../src/main/resources/hbase/checkstyle-suppressions.xml  | 3 ++-
 hbase-checkstyle/src/main/resources/hbase/checkstyle.xml  | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2200397f/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 46009e9..1ecae86 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -33,5 +33,6 @@
 
   
   
-  
+  
+  
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/2200397f/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
--
diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
index b423095..2240096 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml
@@ -67,6 +67,7 @@
 http://checkstyle.sourceforge.net/config_imports.html -->
 
 
+  
   
   
   



hbase git commit: HBASE-19278 Reenable cleanup in test teardown in TestAccessController3 disabled by HBASE-14614

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/branch-2 bc3542c0f -> 887489079


HBASE-19278 Reenable cleanup in test teardown in TestAccessController3 disabled 
by HBASE-14614

Remove a few unused imports.

Remove TestAsyncRegionAdminApi#testOffline, a test for a condition that
no longer exists (no offlining supported in hbase2).

M hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController3.java
 Uncomment cleanup called in test teardown.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/88748907
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/88748907
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/88748907

Branch: refs/heads/branch-2
Commit: 88748907980c7cc665c9676d0a3f6ac5ceedfeb6
Parents: bc3542c
Author: Michael Stack 
Authored: Wed Nov 15 19:03:50 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 23:27:28 2017 -0800

--
 .../hadoop/hbase/TestRegionRebalancing.java |  4 +--
 .../hbase/client/TestAsyncRegionAdminApi.java   | 28 
 .../procedure/TestServerCrashProcedure.java |  1 -
 .../security/access/TestAccessController3.java  |  6 ++---
 4 files changed, 4 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/88748907/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
index cb9f768..467aada 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
@@ -1,5 +1,4 @@
-/**
- *
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -43,7 +42,6 @@ import org.apache.hadoop.hbase.util.JVMClusterUtil;
 import org.apache.hadoop.hbase.util.Threads;
 import org.junit.After;
 import org.junit.Before;
-import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;

http://git-wip-us.apache.org/repos/asf/hbase/blob/88748907/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
index 9b552b4..1e3af40 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
@@ -138,34 +138,6 @@ public class TestAsyncRegionAdminApi extends TestAsyncAdminBase {
     }
   }
 
-  @Ignore @Test
-  // Turning off this tests in AMv2. Doesn't make sense.Offlining means something
-  // different now.
-  // You can't 'offline' a region unless you know what you are doing
-  // Will cause the Master to tell the regionserver to shut itself down because
-  // regionserver is reporting the state as OPEN.
-  public void testOfflineRegion() throws Exception {
-    RegionInfo hri = createTableAndGetOneRegion(tableName);
-
-    RegionStates regionStates =
-        TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager().getRegionStates();
-    admin.offline(hri.getRegionName()).get();
-
-    long timeoutTime = System.currentTimeMillis() + 3000;
-    while (true) {
-      if (regionStates.getRegionByStateOfTable(tableName).get(RegionState.State.OFFLINE)
-          .stream().anyMatch(r -> RegionInfo.COMPARATOR.compare(r, hri) == 0)) break;
-      long now = System.currentTimeMillis();
-      if (now > timeoutTime) {
-        fail("Failed to offline the region in time");
-        break;
-      }
-      Thread.sleep(10);
-    }
-    RegionState regionState = regionStates.getRegionState(hri);
-    assertTrue(regionState.isOffline());
-  }
-
   @Test
   public void testGetRegionByStateOfTable() throws Exception {
     RegionInfo hri = createTableAndGetOneRegion(tableName);
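The deleted `testOfflineRegion` body polled region state in a busy loop with a hard deadline. Detached from HBase, that wait-until-condition pattern can be sketched as follows; the `DeadlinePoll`/`awaitCondition` names are illustrative (not an HBase API), and the 3000 ms/10 ms values mirror the removed test:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class DeadlinePoll {
  /**
   * Polls {@code condition} every {@code intervalMs} milliseconds until it
   * holds or {@code timeoutMs} elapses. Returns true if met in time.
   */
  public static boolean awaitCondition(BooleanSupplier condition, long timeoutMs, long intervalMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      if (condition.getAsBoolean()) {
        return true;
      }
      if (System.currentTimeMillis() > deadline) {
        return false; // caller decides whether to fail the test
      }
      Thread.sleep(intervalMs);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    // A condition that becomes true on the third poll, as a stand-in for
    // "the region shows up in the OFFLINE state map".
    AtomicInteger polls = new AtomicInteger();
    boolean met = awaitCondition(() -> polls.incrementAndGet() >= 3, 3000, 10);
    System.out.println(met); // prints "true"
  }
}
```

The removed test called `fail(...)` when the deadline passed; returning a boolean instead keeps the helper reusable outside JUnit.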

http://git-wip-us.apache.org/repos/asf/hbase/blob/88748907/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestServerCrashProcedure.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestServerCrashProcedure.java
 

hbase git commit: HBASE-19278 Reenable cleanup in test teardown in TestAccessController3 disabled by HBASE-14614

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master 54827cf61 -> 92f53218e


HBASE-19278 Reenable cleanup in test teardown in TestAccessController3 disabled by HBASE-14614

Remove a few unused imports.

Remove TestAsyncRegionAdminApi#testOffline, a test for a condition that
no longer exists (no offlining supported in hbase2).

M hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController3.java
 Uncomment cleanup called in test teardown.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/92f53218
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/92f53218
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/92f53218

Branch: refs/heads/master
Commit: 92f53218e32f3aacc1f96ef8a4f2254f47c0bb42
Parents: 54827cf
Author: Michael Stack 
Authored: Wed Nov 15 19:03:50 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 23:26:58 2017 -0800

--
 .../hadoop/hbase/TestRegionRebalancing.java |  4 +--
 .../hbase/client/TestAsyncRegionAdminApi.java   | 28 
 .../procedure/TestServerCrashProcedure.java |  1 -
 .../security/access/TestAccessController3.java  |  6 ++---
 4 files changed, 4 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/92f53218/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
index cb9f768..467aada 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
@@ -1,5 +1,4 @@
-/**
- *
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -43,7 +42,6 @@ import org.apache.hadoop.hbase.util.JVMClusterUtil;
 import org.apache.hadoop.hbase.util.Threads;
 import org.junit.After;
 import org.junit.Before;
-import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;

http://git-wip-us.apache.org/repos/asf/hbase/blob/92f53218/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
index 9b552b4..1e3af40 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
@@ -138,34 +138,6 @@ public class TestAsyncRegionAdminApi extends TestAsyncAdminBase {
     }
   }
 
-  @Ignore @Test
-  // Turning off this tests in AMv2. Doesn't make sense.Offlining means something
-  // different now.
-  // You can't 'offline' a region unless you know what you are doing
-  // Will cause the Master to tell the regionserver to shut itself down because
-  // regionserver is reporting the state as OPEN.
-  public void testOfflineRegion() throws Exception {
-    RegionInfo hri = createTableAndGetOneRegion(tableName);
-
-    RegionStates regionStates =
-        TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager().getRegionStates();
-    admin.offline(hri.getRegionName()).get();
-
-    long timeoutTime = System.currentTimeMillis() + 3000;
-    while (true) {
-      if (regionStates.getRegionByStateOfTable(tableName).get(RegionState.State.OFFLINE)
-          .stream().anyMatch(r -> RegionInfo.COMPARATOR.compare(r, hri) == 0)) break;
-      long now = System.currentTimeMillis();
-      if (now > timeoutTime) {
-        fail("Failed to offline the region in time");
-        break;
-      }
-      Thread.sleep(10);
-    }
-    RegionState regionState = regionStates.getRegionState(hri);
-    assertTrue(regionState.isOffline());
-  }
-
   @Test
   public void testGetRegionByStateOfTable() throws Exception {
     RegionInfo hri = createTableAndGetOneRegion(tableName);

http://git-wip-us.apache.org/repos/asf/hbase/blob/92f53218/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestServerCrashProcedure.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestServerCrashProcedure.java
 

hbase git commit: HBASE-18356 Enable TestFavoredStochasticBalancerPickers#testPickers that was disabled by Proc-V2 AM in HBASE-14614

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master 92f53218e -> 6d39b011d


HBASE-18356 Enable TestFavoredStochasticBalancerPickers#testPickers that was disabled by Proc-V2 AM in HBASE-14614

Rebase/Fixup

Signed-off-by: Thiruvel Thirumoolan 
Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6d39b011
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6d39b011
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6d39b011

Branch: refs/heads/master
Commit: 6d39b011d413f0cee98549a293fefa187a912ad4
Parents: 92f5321
Author: Michael Stack 
Authored: Wed Nov 15 19:50:41 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 23:32:33 2017 -0800

--
 .../hbase/favored/FavoredNodesManager.java  |  18 +++
 .../TestFavoredNodeAssignmentHelper.java|   2 +-
 .../hadoop/hbase/master/TestMasterMetrics.java  |   2 -
 .../TestFavoredStochasticBalancerPickers.java   | 125 ++-
 4 files changed, 116 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6d39b011/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java
index dbba5c9..7705b3d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java
@@ -46,6 +46,7 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import org.apache.hadoop.hbase.shaded.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Maps;
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Sets;
@@ -287,6 +288,23 @@ public class FavoredNodesManager {
     }
   }
 
+  @VisibleForTesting
+  public synchronized Set<RegionInfo> getRegionsOfFavoredNode(ServerName serverName) {
+    Set<RegionInfo> regionInfos = Sets.newHashSet();
+
+    ServerName serverToUse = ServerName.valueOf(serverName.getHostAndPort(), NON_STARTCODE);
+    if (primaryRSToRegionMap.containsKey(serverToUse)) {
+      regionInfos.addAll(primaryRSToRegionMap.get(serverToUse));
+    }
+    if (secondaryRSToRegionMap.containsKey(serverToUse)) {
+      regionInfos.addAll(secondaryRSToRegionMap.get(serverToUse));
+    }
+    if (teritiaryRSToRegionMap.containsKey(serverToUse)) {
+      regionInfos.addAll(teritiaryRSToRegionMap.get(serverToUse));
+    }
+    return regionInfos;
+  }
+
   public RackManager getRackManager() {
     return rackManager;
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/6d39b011/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java
index 24bb4bd..ffb39f8 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information

http://git-wip-us.apache.org/repos/asf/hbase/blob/6d39b011/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java
index 69baa5f..b300818 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java
@@ -54,13 +54,11 @@ public class TestMasterMetrics {
     public MyMaster(Configuration conf) throws IOException, KeeperException, InterruptedException {
   super(conf);
 }
-/*
 @Override
 protected void tryRegionServerReport(
 long reportStartTime, long reportEndTime) {
   // do nothing
 }
-*/
   }
 
   @BeforeClass
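The `getRegionsOfFavoredNode` helper added earlier in this commit unions the regions for which a server is the primary, secondary, or tertiary favored node, after normalizing away the server start code. Reduced to plain collections, the lookup has this shape; the `FavoredNodeLookup` name and the `String` stand-ins for `ServerName`/`RegionInfo` are illustrative only:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FavoredNodeLookup {
  /**
   * Unions the regions mapped to {@code server} across the three
   * favored-node positions (hypothetical stand-ins for the manager's
   * primary/secondary/tertiary maps).
   */
  static Set<String> regionsOfFavoredNode(String server,
      Map<String, List<String>> primary,
      Map<String, List<String>> secondary,
      Map<String, List<String>> tertiary) {
    Set<String> regions = new HashSet<>();
    // A server may be favored in any of the three positions, so check all.
    for (Map<String, List<String>> m : List.of(primary, secondary, tertiary)) {
      regions.addAll(m.getOrDefault(server, List.of()));
    }
    return regions;
  }

  public static void main(String[] args) {
    Map<String, List<String>> primary = Map.of("rs1", List.of("r1"));
    Map<String, List<String>> secondary = Map.of("rs1", List.of("r2"));
    Map<String, List<String>> tertiary = Map.of("rs2", List.of("r3"));
    // rs1 is primary for r1 and secondary for r2, so both are returned.
    System.out.println(regionsOfFavoredNode("rs1", primary, secondary, tertiary));
  }
}
```

Using a set rather than a list mirrors the committed helper: the same region must not be counted twice even if the maps overlap.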


hbase git commit: HBASE-18356 Enable TestFavoredStochasticBalancerPickers#testPickers that was disabled by Proc-V2 AM in HBASE-14614

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/branch-2 887489079 -> 99fbe7355


HBASE-18356 Enable TestFavoredStochasticBalancerPickers#testPickers that was disabled by Proc-V2 AM in HBASE-14614

Rebase/Fixup

Signed-off-by: Thiruvel Thirumoolan 
Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/99fbe735
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/99fbe735
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/99fbe735

Branch: refs/heads/branch-2
Commit: 99fbe73552e617f8988351da3b6ef4c475a80611
Parents: 8874890
Author: Michael Stack 
Authored: Wed Nov 15 19:50:41 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 23:36:23 2017 -0800

--
 .../hbase/favored/FavoredNodesManager.java  |  18 +++
 .../TestFavoredNodeAssignmentHelper.java|   2 +-
 .../hadoop/hbase/master/TestMasterMetrics.java  |   2 -
 .../TestFavoredStochasticBalancerPickers.java   | 125 ++-
 4 files changed, 116 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/99fbe735/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java
index dbba5c9..7705b3d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodesManager.java
@@ -46,6 +46,7 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import org.apache.hadoop.hbase.shaded.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Maps;
 import org.apache.hadoop.hbase.shaded.com.google.common.collect.Sets;
@@ -287,6 +288,23 @@ public class FavoredNodesManager {
     }
   }
 
+  @VisibleForTesting
+  public synchronized Set<RegionInfo> getRegionsOfFavoredNode(ServerName serverName) {
+    Set<RegionInfo> regionInfos = Sets.newHashSet();
+
+    ServerName serverToUse = ServerName.valueOf(serverName.getHostAndPort(), NON_STARTCODE);
+    if (primaryRSToRegionMap.containsKey(serverToUse)) {
+      regionInfos.addAll(primaryRSToRegionMap.get(serverToUse));
+    }
+    if (secondaryRSToRegionMap.containsKey(serverToUse)) {
+      regionInfos.addAll(secondaryRSToRegionMap.get(serverToUse));
+    }
+    if (teritiaryRSToRegionMap.containsKey(serverToUse)) {
+      regionInfos.addAll(teritiaryRSToRegionMap.get(serverToUse));
+    }
+    return regionInfos;
+  }
+
   public RackManager getRackManager() {
     return rackManager;
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/99fbe735/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java
index 24bb4bd..ffb39f8 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/favored/TestFavoredNodeAssignmentHelper.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information

http://git-wip-us.apache.org/repos/asf/hbase/blob/99fbe735/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java
index 69baa5f..b300818 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java
@@ -54,13 +54,11 @@ public class TestMasterMetrics {
     public MyMaster(Configuration conf) throws IOException, KeeperException, InterruptedException {
   super(conf);
 }
-/*
 @Override
 protected void tryRegionServerReport(
 long reportStartTime, long reportEndTime) {
   // do nothing
 }
-*/
   }
 
   @BeforeClass


hbase git commit: HBASE-19270 Reenable TestRegionMergeTransactionOnCluster#testMergeWithReplicas disabled by HBASE-14614

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master 6d39b011d -> b35e18ccc


HBASE-19270 Reenable TestRegionMergeTransactionOnCluster#testMergeWithReplicas disabled by HBASE-14614


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b35e18cc
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b35e18cc
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b35e18cc

Branch: refs/heads/master
Commit: b35e18cccd8e990db458004abefab7fe3c8105c0
Parents: 6d39b01
Author: Michael Stack 
Authored: Wed Nov 15 14:56:17 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 23:37:48 2017 -0800

--
 .../TestRegionMergeTransactionOnCluster.java| 21 +---
 1 file changed, 9 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b35e18cc/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
index bdcc559..d046a13 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Copyright The Apache Software Foundation
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
@@ -37,7 +37,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.CategoryBasedTimeout;
-import org.apache.hadoop.hbase.CoordinatedStateManager;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.MetaTableAccessor;
@@ -62,7 +61,7 @@ import org.apache.hadoop.hbase.master.RegionState;
 import org.apache.hadoop.hbase.master.assignment.AssignmentManager;
 import org.apache.hadoop.hbase.master.assignment.RegionStates;
 import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.RegionServerTests;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.FSUtils;
@@ -73,8 +72,6 @@ import org.apache.hadoop.util.StringUtils;
 import org.apache.zookeeper.KeeperException;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
-import org.junit.ClassRule;
-import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -88,14 +85,14 @@ import org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProto
 import org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionRequest;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionResponse;
 
-@Category({RegionServerTests.class, MediumTests.class})
+@Category({RegionServerTests.class, LargeTests.class})
 public class TestRegionMergeTransactionOnCluster {
-  private static final Log LOG = LogFactory
-      .getLog(TestRegionMergeTransactionOnCluster.class);
+  private static final Log LOG = LogFactory.getLog(TestRegionMergeTransactionOnCluster.class);
   @Rule public TestName name = new TestName();
-  @ClassRule
-  public static final TestRule timeout =
-      CategoryBasedTimeout.forClass(TestRegionMergeTransactionOnCluster.class);
+  @Rule public final TestRule timeout = CategoryBasedTimeout.builder().
+      withTimeout(this.getClass()).
+      withLookingForStuckThread(true).
+      build();
 
   private static final int NB_SERVERS = 3;
 
@@ -357,7 +354,7 @@ public class TestRegionMergeTransactionOnCluster {
 }
   }
 
-  @Ignore @Test // DISABLED FOR NOW. DON'T KNOW HOW IT IS SUPPOSED TO WORK.
+  @Test
   public void testMergeWithReplicas() throws Exception {
 final TableName tableName = TableName.valueOf(name.getMethodName());
 // Create table and load data.



hbase git commit: HBASE-18964 Deprecated RowProcessor and Region#processRowsWithLocks() methods that take RowProcessor as an argument

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/branch-2 853ab2f94 -> e9612e6c8


HBASE-18964 Deprecated RowProcessor and Region#processRowsWithLocks() methods that take RowProcessor as an argument

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e9612e6c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e9612e6c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e9612e6c

Branch: refs/heads/branch-2
Commit: e9612e6c89808fd6ac09edca1630f28f2e3ab7ef
Parents: 853ab2f
Author: Umesh Agashe 
Authored: Tue Nov 14 14:22:49 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 23:40:20 2017 -0800

--
 .../java/org/apache/hadoop/hbase/regionserver/HRegion.java  | 2 +-
 .../java/org/apache/hadoop/hbase/regionserver/Region.java   | 9 +
 .../org/apache/hadoop/hbase/regionserver/RowProcessor.java  | 8 ++--
 3 files changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e9612e6c/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 3a3cb03..14d6a9d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -3853,7 +3853,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
           nonceKey.getNonceGroup(), nonceKey.getNonce(), batchOp.getOrigLogSeqNum());
 }
 
-// STEP 6. Complete mvcc for all but last writeEntry (for replay case)
+// Complete mvcc for all but last writeEntry (for replay case)
 if (it.hasNext() && writeEntry != null) {
   mvcc.complete(writeEntry);
   writeEntry = null;

http://git-wip-us.apache.org/repos/asf/hbase/blob/e9612e6c/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
index 2d66d52..75f02a3 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
@@ -409,7 +409,10 @@ public interface Region extends ConfigurationObserver {
    * Performs atomic multiple reads and writes on a given row.
    *
    * @param processor The object defines the reads and writes to a row.
+   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. For customization, use
+   * Coprocessors instead.
    */
+  @Deprecated
   void processRowsWithLocks(RowProcessor<?,?> processor) throws IOException;
 
   /**
@@ -418,9 +421,12 @@ public interface Region extends ConfigurationObserver {
    * @param processor The object defines the reads and writes to a row.
    * @param nonceGroup Optional nonce group of the operation (client Id)
    * @param nonce Optional nonce of the operation (unique random id to ensure "more idempotence")
+   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. For customization, use
+   * Coprocessors instead.
    */
   // TODO Should not be exposing with params nonceGroup, nonce. Change when doing the jira for
   // Changing processRowsWithLocks and RowProcessor
+  @Deprecated
   void processRowsWithLocks(RowProcessor<?,?> processor, long nonceGroup, long nonce)
       throws IOException;
 
@@ -432,9 +438,12 @@ public interface Region extends ConfigurationObserver {
    *    Use a negative number to switch off the time bound
    * @param nonceGroup Optional nonce group of the operation (client Id)
    * @param nonce Optional nonce of the operation (unique random id to ensure "more idempotence")
+   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. For customization, use
+   * Coprocessors instead.
    */
   // TODO Should not be exposing with params nonceGroup, nonce. Change when doing the jira for
   // Changing processRowsWithLocks and RowProcessor
+  @Deprecated
   void processRowsWithLocks(RowProcessor<?,?> processor, long timeout, long nonceGroup, long nonce)
       throws IOException;
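The `nonceGroup`/`nonce` javadoc above hints at how retried mutations are kept idempotent: the server remembers (group, nonce) pairs it has already applied and skips duplicates. A minimal, HBase-independent sketch of that idea follows; the `NonceRegistry` class is hypothetical, and the real server-side nonce handling additionally expires old nonces and tracks in-flight operations:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class NonceRegistry {
  // Remembers (nonceGroup, nonce) pairs that have already been applied.
  private final Set<String> seen = ConcurrentHashMap.newKeySet();

  /**
   * Returns true if the operation should run, false if the same
   * (nonceGroup, nonce) pair was already applied (i.e. a duplicate retry).
   */
  public boolean startOperation(long nonceGroup, long nonce) {
    // Set.add returns false when the key was already present.
    return seen.add(nonceGroup + ":" + nonce);
  }

  public static void main(String[] args) {
    NonceRegistry registry = new NonceRegistry();
    System.out.println(registry.startOperation(1L, 42L)); // first attempt: prints "true"
    System.out.println(registry.startOperation(1L, 42L)); // retried attempt: prints "false"
  }
}
```

The client picks the nonce; because the registry keys on the pair rather than the nonce alone, two different clients (nonce groups) may reuse the same random nonce without colliding.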
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/e9612e6c/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RowProcessor.java
--
diff --git 

hbase git commit: HBASE-18964 Deprecated RowProcessor and Region#processRowsWithLocks() methods that take RowProcessor as an argument

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master b35e18ccc -> 570d786ac


HBASE-18964 Deprecated RowProcessor and Region#processRowsWithLocks() methods that take RowProcessor as an argument

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/570d786a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/570d786a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/570d786a

Branch: refs/heads/master
Commit: 570d786ac44a92d5b04cdf0b5ae6a707db486b03
Parents: b35e18c
Author: Umesh Agashe 
Authored: Tue Nov 14 14:22:49 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 23:40:00 2017 -0800

--
 .../java/org/apache/hadoop/hbase/regionserver/HRegion.java  | 2 +-
 .../java/org/apache/hadoop/hbase/regionserver/Region.java   | 9 +
 .../org/apache/hadoop/hbase/regionserver/RowProcessor.java  | 8 ++--
 3 files changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/570d786a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 3a3cb03..14d6a9d 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -3853,7 +3853,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
   nonceKey.getNonceGroup(), nonceKey.getNonce(), batchOp.getOrigLogSeqNum());
 }
 
-// STEP 6. Complete mvcc for all but last writeEntry (for replay case)
+// Complete mvcc for all but last writeEntry (for replay case)
 if (it.hasNext() && writeEntry != null) {
   mvcc.complete(writeEntry);
   writeEntry = null;

http://git-wip-us.apache.org/repos/asf/hbase/blob/570d786a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
index 2d66d52..75f02a3 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java
@@ -409,7 +409,10 @@ public interface Region extends ConfigurationObserver {
* Performs atomic multiple reads and writes on a given row.
*
* @param processor The object defines the reads and writes to a row.
+   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. For customization, use
+   * Coprocessors instead.
*/
+  @Deprecated
   void processRowsWithLocks(RowProcessor processor) throws IOException;
 
   /**
@@ -418,9 +421,12 @@ public interface Region extends ConfigurationObserver {
* @param processor The object defines the reads and writes to a row.
* @param nonceGroup Optional nonce group of the operation (client Id)
* @param nonce Optional nonce of the operation (unique random id to ensure "more idempotence")
+   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. For customization, use
+   * Coprocessors instead.
*/
   // TODO Should not be exposing with params nonceGroup, nonce. Change when doing the jira for
   // Changing processRowsWithLocks and RowProcessor
+  @Deprecated
   void processRowsWithLocks(RowProcessor processor, long nonceGroup, long nonce)
   throws IOException;
 
@@ -432,9 +438,12 @@ public interface Region extends ConfigurationObserver {
*Use a negative number to switch off the time bound
* @param nonceGroup Optional nonce group of the operation (client Id)
* @param nonce Optional nonce of the operation (unique random id to ensure "more idempotence")
+   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. For customization, use
+   * Coprocessors instead.
*/
   // TODO Should not be exposing with params nonceGroup, nonce. Change when doing the jira for
   // Changing processRowsWithLocks and RowProcessor
+  @Deprecated
   void processRowsWithLocks(RowProcessor processor, long timeout, long nonceGroup, long nonce)
   throws IOException;
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/570d786a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RowProcessor.java
--
diff --git 
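The hunks above deprecate the `processRowsWithLocks` overloads that take a `(nonceGroup, nonce)` pair, the unique random id a server can use to deduplicate a retried mutation ("more idempotence"). The following is a self-contained sketch of that deduplication idea only; it is not HBase's actual nonce manager, and all names here are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of nonce-based retry deduplication (NOT HBase's
// ServerNonceManager). A retry that reuses the same (nonceGroup, nonce)
// gets the previously computed result instead of re-applying the mutation.
public class NonceDedupExample {
    // Results of operations already applied, keyed by "nonceGroup:nonce".
    private final Map<String, Long> applied = new ConcurrentHashMap<>();
    private long counter = 0;

    public synchronized long incrementOnce(long nonceGroup, long nonce, long delta) {
        String key = nonceGroup + ":" + nonce;
        // Apply the increment only if this nonce has not been seen before.
        return applied.computeIfAbsent(key, k -> counter += delta);
    }

    public static void main(String[] args) {
        NonceDedupExample server = new NonceDedupExample();
        System.out.println(server.incrementOnce(1L, 42L, 5)); // applied
        System.out.println(server.incrementOnce(1L, 42L, 5)); // retry, deduplicated
        System.out.println(server.incrementOnce(1L, 43L, 5)); // new nonce, applied
    }
}
```

Without the nonce, a client retrying after a dropped response would apply the increment twice; with it, the retry is absorbed, which is why the javadoc hedges with "more idempotence" rather than full idempotence.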

hbase git commit: HBASE-19270 Reenable TestRegionMergeTransactionOnCluster#testMergeWithReplicas disabled by HBASE-14614

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/branch-2 99fbe7355 -> 853ab2f94


HBASE-19270 Reenable TestRegionMergeTransactionOnCluster#testMergeWithReplicas disabled by HBASE-14614


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/853ab2f9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/853ab2f9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/853ab2f9

Branch: refs/heads/branch-2
Commit: 853ab2f943fe65a3c6bff85fecd62a743be464d2
Parents: 99fbe73
Author: Michael Stack 
Authored: Wed Nov 15 14:56:17 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 23:38:07 2017 -0800

--
 .../TestRegionMergeTransactionOnCluster.java| 21 +---
 1 file changed, 9 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/853ab2f9/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
index bdcc559..d046a13 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Copyright The Apache Software Foundation
  *
  * Licensed to the Apache Software Foundation (ASF) under one or more
@@ -37,7 +37,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.CategoryBasedTimeout;
-import org.apache.hadoop.hbase.CoordinatedStateManager;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.MetaTableAccessor;
@@ -62,7 +61,7 @@ import org.apache.hadoop.hbase.master.RegionState;
 import org.apache.hadoop.hbase.master.assignment.AssignmentManager;
 import org.apache.hadoop.hbase.master.assignment.RegionStates;
 import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.RegionServerTests;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.FSUtils;
@@ -73,8 +72,6 @@ import org.apache.hadoop.util.StringUtils;
 import org.apache.zookeeper.KeeperException;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
-import org.junit.ClassRule;
-import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -88,14 +85,14 @@ import org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProto
 import org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionRequest;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionResponse;
 
-@Category({RegionServerTests.class, MediumTests.class})
+@Category({RegionServerTests.class, LargeTests.class})
 public class TestRegionMergeTransactionOnCluster {
-  private static final Log LOG = LogFactory
-  .getLog(TestRegionMergeTransactionOnCluster.class);
+  private static final Log LOG = LogFactory.getLog(TestRegionMergeTransactionOnCluster.class);
   @Rule public TestName name = new TestName();
-  @ClassRule
-  public static final TestRule timeout =
-  CategoryBasedTimeout.forClass(TestRegionMergeTransactionOnCluster.class);
+  @Rule public final TestRule timeout = CategoryBasedTimeout.builder().
+  withTimeout(this.getClass()).
+  withLookingForStuckThread(true).
+  build();
 
   private static final int NB_SERVERS = 3;
 
@@ -357,7 +354,7 @@ public class TestRegionMergeTransactionOnCluster {
 }
   }
 
-  @Ignore @Test // DISABLED FOR NOW. DON'T KNOW HOW IT IS SUPPOSED TO WORK.
+  @Test
   public void testMergeWithReplicas() throws Exception {
 final TableName tableName = TableName.valueOf(name.getMethodName());
 // Create table and load data.



[1/3] hbase git commit: HBASE-19251 Merge RawAsyncTable and AsyncTable

2017-11-15 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/branch-2 e063b231d -> bc3542c0f


http://git-wip-us.apache.org/repos/asf/hbase/blob/bc3542c0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
index 2917605..e2d23e5 100644
--- 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
@@ -18,6 +18,11 @@
  */
 package org.apache.hadoop.hbase;
 
+import com.codahale.metrics.Histogram;
+import com.codahale.metrics.UniformReservoir;
+import com.fasterxml.jackson.databind.MapperFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
 import java.io.IOException;
 import java.io.PrintStream;
 import java.lang.reflect.Constructor;
@@ -61,7 +66,6 @@ import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.Increment;
 import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.RawAsyncTable;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.RowMutations;
@@ -80,8 +84,6 @@ import org.apache.hadoop.hbase.io.hfile.RandomDistribution;
 import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
 import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.hbase.regionserver.CompactingMemStore;
-import org.apache.hadoop.hbase.shaded.com.google.common.base.MoreObjects;
-import org.apache.hadoop.hbase.shaded.com.google.common.util.concurrent.ThreadFactoryBuilder;
 import org.apache.hadoop.hbase.trace.HBaseHTraceConfiguration;
 import org.apache.hadoop.hbase.trace.SpanReceiverHost;
 import org.apache.hadoop.hbase.trace.TraceUtil;
@@ -105,10 +107,8 @@ import org.apache.htrace.core.Sampler;
 import org.apache.htrace.core.TraceScope;
 import org.apache.yetus.audience.InterfaceAudience;
 
-import com.codahale.metrics.Histogram;
-import com.codahale.metrics.UniformReservoir;
-import com.fasterxml.jackson.databind.MapperFeature;
-import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.hadoop.hbase.shaded.com.google.common.base.MoreObjects;
+import org.apache.hadoop.hbase.shaded.com.google.common.util.concurrent.ThreadFactoryBuilder;
 
 /**
  * Script used evaluating HBase performance and scalability.  Runs a HBase
@@ -1302,7 +1302,7 @@ public class PerformanceEvaluation extends Configured implements Tool {
   }
 
   static abstract class AsyncTableTest extends AsyncTest {
-protected RawAsyncTable table;
+protected AsyncTable table;
 
 AsyncTableTest(AsyncConnection con, TestOptions options, Status status) {
   super(con, options, status);
@@ -1310,7 +1310,7 @@ public class PerformanceEvaluation extends Configured implements Tool {
 
 @Override
 void onStartup() throws IOException {
-  this.table = connection.getRawTable(TableName.valueOf(opts.tableName));
+  this.table = connection.getTable(TableName.valueOf(opts.tableName));
 }
 
 @Override
@@ -1435,7 +1435,7 @@ public class PerformanceEvaluation extends Configured implements Tool {
 
   static class AsyncScanTest extends AsyncTableTest {
 private ResultScanner testScanner;
-private AsyncTable asyncTable;
+private AsyncTable asyncTable;
 
 AsyncScanTest(AsyncConnection con, TestOptions options, Status status) {
   super(con, options, status);

http://git-wip-us.apache.org/repos/asf/hbase/blob/bc3542c0/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
index 13e0e7c..5831bfc 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
@@ -24,6 +24,7 @@ import java.io.IOException;
 import java.io.UncheckedIOException;
 import java.util.Arrays;
 import java.util.List;
+import java.util.concurrent.ForkJoinPool;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
@@ -63,7 +64,7 @@ public abstract class AbstractTestAsyncTableScan {
 TEST_UTIL.createTable(TABLE_NAME, FAMILY, splitKeys);
 TEST_UTIL.waitTableAvailable(TABLE_NAME);
ASYNC_CONN = ConnectionFactory.createAsyncConnection(TEST_UTIL.getConfiguration()).get();
-ASYNC_CONN.getRawTable(TABLE_NAME).putAll(IntStream.range(0, COUNT)
+
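As the PerformanceEvaluation hunks above show, HBASE-19251 collapses RawAsyncTable into a single AsyncTable whose operations return a `CompletableFuture`: the call itself does not throw, and both the result and any failure are delivered through the future. The following pure-JDK sketch illustrates that consumption pattern only; `fetch` is a hypothetical stand-in for an AsyncTable call, not the HBase API:

```java
import java.util.concurrent.CompletableFuture;

// Pure-JDK sketch of the CompletableFuture style used by the merged
// AsyncTable: success and failure both travel through the returned future.
public class AsyncFutureExample {
    // Hypothetical async lookup standing in for something like AsyncTable.get(...).
    static CompletableFuture<String> fetch(String rowKey) {
        if (rowKey.isEmpty()) {
            // Fail the future instead of throwing at the call site.
            CompletableFuture<String> failed = new CompletableFuture<>();
            failed.completeExceptionally(new IllegalArgumentException("empty row key"));
            return failed;
        }
        return CompletableFuture.supplyAsync(() -> "value-for-" + rowKey);
    }

    public static void main(String[] args) throws Exception {
        // Success path: transform the result without blocking the caller.
        System.out.println(fetch("row1").thenApply(String::toUpperCase).get());
        // Failure path: the exception surfaces only when the future is consumed.
        System.out.println(fetch("").exceptionally(t -> "fallback").get());
    }
}
```

This is why the merge could drop the separate raw/thread-pool split: a future-returning surface composes the same way whether callbacks run on the RPC thread or on a supplied executor.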

[3/3] hbase git commit: HBASE-19251 Merge RawAsyncTable and AsyncTable

2017-11-15 Thread zhangduo
HBASE-19251 Merge RawAsyncTable and AsyncTable


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/bc3542c0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/bc3542c0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/bc3542c0

Branch: refs/heads/branch-2
Commit: bc3542c0fb33dd4e4d0f279bf742d9f642f9504e
Parents: e063b23
Author: zhangduo 
Authored: Thu Nov 16 14:36:28 2017 +0800
Committer: zhangduo 
Committed: Thu Nov 16 14:37:51 2017 +0800

--
 .../hadoop/hbase/AsyncMetaTableAccessor.java|  35 +-
 .../client/AdvancedScanResultConsumer.java  | 121 
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  17 +-
 .../client/AsyncBufferedMutatorBuilderImpl.java |   4 +-
 .../hbase/client/AsyncBufferedMutatorImpl.java  |   6 +-
 .../hadoop/hbase/client/AsyncClientScanner.java |   4 +-
 .../hadoop/hbase/client/AsyncConnection.java|  29 +-
 .../hbase/client/AsyncConnectionImpl.java   |  90 +--
 .../hadoop/hbase/client/AsyncHBaseAdmin.java|   7 +-
 .../hbase/client/AsyncNonMetaRegionLocator.java |  85 ++-
 .../client/AsyncRpcRetryingCallerFactory.java   |   4 +-
 .../AsyncScanSingleRegionRpcRetryingCaller.java |  38 +-
 .../AsyncSingleRequestRpcRetryingCaller.java|   4 +-
 .../apache/hadoop/hbase/client/AsyncTable.java  | 570 ++-
 .../hadoop/hbase/client/AsyncTableBase.java | 414 --
 .../hadoop/hbase/client/AsyncTableBuilder.java  |  26 +-
 .../hbase/client/AsyncTableBuilderBase.java |  21 +-
 .../hadoop/hbase/client/AsyncTableImpl.java |  83 ++-
 .../hbase/client/AsyncTableResultScanner.java   |  11 +-
 .../hadoop/hbase/client/ConnectionUtils.java|  23 +-
 .../hadoop/hbase/client/RawAsyncHBaseAdmin.java |  26 +-
 .../hadoop/hbase/client/RawAsyncTable.java  | 263 -
 .../hadoop/hbase/client/RawAsyncTableImpl.java  |  69 ++-
 .../hbase/client/RawScanResultConsumer.java | 137 -
 .../hadoop/hbase/client/ScanResultConsumer.java |  28 +-
 .../hbase/client/ScanResultConsumerBase.java|  48 ++
 .../hadoop/hbase/client/ServiceCaller.java  |  61 ++
 .../coprocessor/AsyncAggregationClient.java |  31 +-
 .../client/TestAsyncAggregationClient.java  |   4 +-
 .../client/example/AsyncClientExample.java  |   2 +-
 .../hbase/client/example/HttpProxyExample.java  |  43 +-
 .../hadoop/hbase/PerformanceEvaluation.java |  20 +-
 .../client/AbstractTestAsyncTableScan.java  |  55 +-
 .../client/BufferingScanResultConsumer.java |  89 +++
 .../client/SimpleRawScanResultConsumer.java |  84 ---
 .../hbase/client/TestAsyncBufferMutator.java|   2 +-
 .../hbase/client/TestAsyncClusterAdminApi.java  |   4 +-
 .../hbase/client/TestAsyncRegionAdminApi.java   |  36 +-
 ...TestAsyncSingleRequestRpcRetryingCaller.java |  12 +-
 .../hadoop/hbase/client/TestAsyncTable.java |  24 +-
 .../hbase/client/TestAsyncTableAdminApi.java| 128 ++---
 .../hbase/client/TestAsyncTableBatch.java   |  68 +--
 .../client/TestAsyncTableGetMultiThreaded.java  |  18 +-
 .../hbase/client/TestAsyncTableNoncedRetry.java |   4 +-
 .../hadoop/hbase/client/TestAsyncTableScan.java |   7 +-
 .../hbase/client/TestAsyncTableScanAll.java |  20 +-
 .../hbase/client/TestAsyncTableScanMetrics.java |   4 +-
 .../client/TestAsyncTableScanRenewLease.java|   6 +-
 .../hbase/client/TestAsyncTableScanner.java |  20 +-
 ...stAsyncTableScannerCloseWhileSuspending.java |   2 +-
 .../hbase/client/TestRawAsyncScanCursor.java|   8 +-
 .../TestRawAsyncTableLimitedScanWithFilter.java |   4 +-
 .../client/TestRawAsyncTablePartialScan.java|   8 +-
 .../hbase/client/TestRawAsyncTableScan.java |   8 +-
 54 files changed, 1480 insertions(+), 1455 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/bc3542c0/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
index 6f41bd0..4c1d602 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
@@ -38,10 +38,10 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.MetaTableAccessor.CollectingVisitor;
 import org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
 import org.apache.hadoop.hbase.MetaTableAccessor.Visitor;
+import org.apache.hadoop.hbase.client.AdvancedScanResultConsumer;
+import org.apache.hadoop.hbase.client.AsyncTable;
 import org.apache.hadoop.hbase.client.Consistency;
 import 

[2/3] hbase git commit: HBASE-19251 Merge RawAsyncTable and AsyncTable

2017-11-15 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/bc3542c0/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableBase.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableBase.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableBase.java
deleted file mode 100644
index 7d24c4f..000
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableBase.java
+++ /dev/null
@@ -1,414 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.client;
-
-import static java.util.stream.Collectors.toList;
-import static org.apache.hadoop.hbase.client.ConnectionUtils.allOf;
-import static 
org.apache.hadoop.hbase.client.ConnectionUtils.toCheckExistenceOnly;
-
-import org.apache.hadoop.hbase.CompareOperator;
-import org.apache.hadoop.hbase.shaded.com.google.common.base.Preconditions;
-
-import java.util.List;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.hadoop.hbase.util.Bytes;
-
-/**
- * The base interface for asynchronous version of Table. Obtain an instance 
from a
- * {@link AsyncConnection}.
- * 
- * The implementation is required to be thread safe.
- * 
- * Usually the implementation will not throw any exception directly. You need 
to get the exception
- * from the returned {@link CompletableFuture}.
- * @since 2.0.0
- */
-@InterfaceAudience.Public
-public interface AsyncTableBase {
-
-  /**
-   * Gets the fully qualified table name instance of this table.
-   */
-  TableName getName();
-
-  /**
-   * Returns the {@link org.apache.hadoop.conf.Configuration} object used by 
this instance.
-   * 
-   * The reference returned is not a copy, so any change made to it will 
affect this instance.
-   */
-  Configuration getConfiguration();
-
-  /**
-   * Get timeout of each rpc request in this Table instance. It will be 
overridden by a more
-   * specific rpc timeout config such as readRpcTimeout or writeRpcTimeout.
-   * @see #getReadRpcTimeout(TimeUnit)
-   * @see #getWriteRpcTimeout(TimeUnit)
-   * @param unit the unit of time the timeout to be represented in
-   * @return rpc timeout in the specified time unit
-   */
-  long getRpcTimeout(TimeUnit unit);
-
-  /**
-   * Get timeout of each rpc read request in this Table instance.
-   * @param unit the unit of time the timeout to be represented in
-   * @return read rpc timeout in the specified time unit
-   */
-  long getReadRpcTimeout(TimeUnit unit);
-
-  /**
-   * Get timeout of each rpc write request in this Table instance.
-   * @param unit the unit of time the timeout to be represented in
-   * @return write rpc timeout in the specified time unit
-   */
-  long getWriteRpcTimeout(TimeUnit unit);
-
-  /**
-   * Get timeout of each operation in Table instance.
-   * @param unit the unit of time the timeout to be represented in
-   * @return operation rpc timeout in the specified time unit
-   */
-  long getOperationTimeout(TimeUnit unit);
-
-  /**
-   * Get the timeout of a single operation in a scan. It works like operation 
timeout for other
-   * operations.
-   * @param unit the unit of time the timeout to be represented in
-   * @return scan rpc timeout in the specified time unit
-   */
-  long getScanTimeout(TimeUnit unit);
-
-  /**
-   * Test for the existence of columns in the table, as specified by the Get.
-   * 
-   * This will return true if the Get matches one or more keys, false if not.
-   * 
-   * This is a server-side call so it prevents any data from being transfered 
to the client.
-   * @return true if the specified Get matches one or more keys, false if not. 
The return value will
-   * be wrapped by a {@link CompletableFuture}.
-   */
-  default CompletableFuture exists(Get get) {
-return get(toCheckExistenceOnly(get)).thenApply(r -> r.getExists());
-  }
-
-  /**
-   * Extracts certain cells from a given row.
-   * @param get The object that specifies what data to fetch and from 

[3/3] hbase git commit: HBASE-19251 Merge RawAsyncTable and AsyncTable

2017-11-15 Thread zhangduo
HBASE-19251 Merge RawAsyncTable and AsyncTable


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/54827cf6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/54827cf6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/54827cf6

Branch: refs/heads/master
Commit: 54827cf6139277c8f7c5cfd6833cd4c33a08e9b1
Parents: 3a46550
Author: zhangduo 
Authored: Thu Nov 16 14:36:28 2017 +0800
Committer: zhangduo 
Committed: Thu Nov 16 14:36:28 2017 +0800

--
 .../hadoop/hbase/AsyncMetaTableAccessor.java|  35 +-
 .../client/AdvancedScanResultConsumer.java  | 121 
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  17 +-
 .../client/AsyncBufferedMutatorBuilderImpl.java |   4 +-
 .../hbase/client/AsyncBufferedMutatorImpl.java  |   6 +-
 .../hadoop/hbase/client/AsyncClientScanner.java |   4 +-
 .../hadoop/hbase/client/AsyncConnection.java|  29 +-
 .../hbase/client/AsyncConnectionImpl.java   |  90 +--
 .../hadoop/hbase/client/AsyncHBaseAdmin.java|   7 +-
 .../hbase/client/AsyncNonMetaRegionLocator.java |  85 ++-
 .../client/AsyncRpcRetryingCallerFactory.java   |   4 +-
 .../AsyncScanSingleRegionRpcRetryingCaller.java |  38 +-
 .../AsyncSingleRequestRpcRetryingCaller.java|   4 +-
 .../apache/hadoop/hbase/client/AsyncTable.java  | 570 ++-
 .../hadoop/hbase/client/AsyncTableBase.java | 414 --
 .../hadoop/hbase/client/AsyncTableBuilder.java  |  26 +-
 .../hbase/client/AsyncTableBuilderBase.java |  21 +-
 .../hadoop/hbase/client/AsyncTableImpl.java |  83 ++-
 .../hbase/client/AsyncTableResultScanner.java   |  11 +-
 .../hadoop/hbase/client/ConnectionUtils.java|  23 +-
 .../hadoop/hbase/client/RawAsyncHBaseAdmin.java |  26 +-
 .../hadoop/hbase/client/RawAsyncTable.java  | 263 -
 .../hadoop/hbase/client/RawAsyncTableImpl.java  |  69 ++-
 .../hbase/client/RawScanResultConsumer.java | 137 -
 .../hadoop/hbase/client/ScanResultConsumer.java |  28 +-
 .../hbase/client/ScanResultConsumerBase.java|  48 ++
 .../hadoop/hbase/client/ServiceCaller.java  |  61 ++
 .../coprocessor/AsyncAggregationClient.java |  31 +-
 .../client/TestAsyncAggregationClient.java  |   4 +-
 .../client/example/AsyncClientExample.java  |   2 +-
 .../hbase/client/example/HttpProxyExample.java  |  43 +-
 .../hadoop/hbase/PerformanceEvaluation.java |  20 +-
 .../client/AbstractTestAsyncTableScan.java  |  55 +-
 .../client/BufferingScanResultConsumer.java |  89 +++
 .../client/SimpleRawScanResultConsumer.java |  84 ---
 .../hbase/client/TestAsyncBufferMutator.java|   2 +-
 .../hbase/client/TestAsyncClusterAdminApi.java  |   4 +-
 .../hbase/client/TestAsyncRegionAdminApi.java   |  36 +-
 ...TestAsyncSingleRequestRpcRetryingCaller.java |  12 +-
 .../hadoop/hbase/client/TestAsyncTable.java |  24 +-
 .../hbase/client/TestAsyncTableAdminApi.java| 128 ++---
 .../hbase/client/TestAsyncTableBatch.java   |  68 +--
 .../client/TestAsyncTableGetMultiThreaded.java  |  18 +-
 .../hbase/client/TestAsyncTableNoncedRetry.java |   4 +-
 .../hadoop/hbase/client/TestAsyncTableScan.java |   7 +-
 .../hbase/client/TestAsyncTableScanAll.java |  20 +-
 .../hbase/client/TestAsyncTableScanMetrics.java |   4 +-
 .../client/TestAsyncTableScanRenewLease.java|   6 +-
 .../hbase/client/TestAsyncTableScanner.java |  20 +-
 ...stAsyncTableScannerCloseWhileSuspending.java |   2 +-
 .../hbase/client/TestRawAsyncScanCursor.java|   8 +-
 .../TestRawAsyncTableLimitedScanWithFilter.java |   4 +-
 .../client/TestRawAsyncTablePartialScan.java|   8 +-
 .../hbase/client/TestRawAsyncTableScan.java |   8 +-
 54 files changed, 1480 insertions(+), 1455 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/54827cf6/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
index 6f41bd0..4c1d602 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java
@@ -38,10 +38,10 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.MetaTableAccessor.CollectingVisitor;
 import org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
 import org.apache.hadoop.hbase.MetaTableAccessor.Visitor;
+import org.apache.hadoop.hbase.client.AdvancedScanResultConsumer;
+import org.apache.hadoop.hbase.client.AsyncTable;
 import org.apache.hadoop.hbase.client.Consistency;
 import 

[1/3] hbase git commit: HBASE-19251 Merge RawAsyncTable and AsyncTable

2017-11-15 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/master 3a4655019 -> 54827cf61


http://git-wip-us.apache.org/repos/asf/hbase/blob/54827cf6/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
index 2917605..e2d23e5 100644
--- 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
@@ -18,6 +18,11 @@
  */
 package org.apache.hadoop.hbase;
 
+import com.codahale.metrics.Histogram;
+import com.codahale.metrics.UniformReservoir;
+import com.fasterxml.jackson.databind.MapperFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
 import java.io.IOException;
 import java.io.PrintStream;
 import java.lang.reflect.Constructor;
@@ -61,7 +66,6 @@ import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.Increment;
 import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.RawAsyncTable;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.RowMutations;
@@ -80,8 +84,6 @@ import org.apache.hadoop.hbase.io.hfile.RandomDistribution;
 import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
 import org.apache.hadoop.hbase.regionserver.BloomType;
 import org.apache.hadoop.hbase.regionserver.CompactingMemStore;
-import org.apache.hadoop.hbase.shaded.com.google.common.base.MoreObjects;
-import org.apache.hadoop.hbase.shaded.com.google.common.util.concurrent.ThreadFactoryBuilder;
 import org.apache.hadoop.hbase.trace.HBaseHTraceConfiguration;
 import org.apache.hadoop.hbase.trace.SpanReceiverHost;
 import org.apache.hadoop.hbase.trace.TraceUtil;
@@ -105,10 +107,8 @@ import org.apache.htrace.core.Sampler;
 import org.apache.htrace.core.TraceScope;
 import org.apache.yetus.audience.InterfaceAudience;
 
-import com.codahale.metrics.Histogram;
-import com.codahale.metrics.UniformReservoir;
-import com.fasterxml.jackson.databind.MapperFeature;
-import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.hadoop.hbase.shaded.com.google.common.base.MoreObjects;
+import org.apache.hadoop.hbase.shaded.com.google.common.util.concurrent.ThreadFactoryBuilder;
 
 /**
  * Script used evaluating HBase performance and scalability.  Runs a HBase
@@ -1302,7 +1302,7 @@ public class PerformanceEvaluation extends Configured implements Tool {
   }
 
   static abstract class AsyncTableTest extends AsyncTest {
-protected RawAsyncTable table;
+protected AsyncTable table;
 
 AsyncTableTest(AsyncConnection con, TestOptions options, Status status) {
   super(con, options, status);
@@ -1310,7 +1310,7 @@ public class PerformanceEvaluation extends Configured implements Tool {
 
 @Override
 void onStartup() throws IOException {
-  this.table = connection.getRawTable(TableName.valueOf(opts.tableName));
+  this.table = connection.getTable(TableName.valueOf(opts.tableName));
 }
 
 @Override
@@ -1435,7 +1435,7 @@ public class PerformanceEvaluation extends Configured implements Tool {
 
   static class AsyncScanTest extends AsyncTableTest {
 private ResultScanner testScanner;
-private AsyncTable asyncTable;
+private AsyncTable asyncTable;
 
 AsyncScanTest(AsyncConnection con, TestOptions options, Status status) {
   super(con, options, status);

http://git-wip-us.apache.org/repos/asf/hbase/blob/54827cf6/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
index 13e0e7c..5831bfc 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
@@ -24,6 +24,7 @@ import java.io.IOException;
 import java.io.UncheckedIOException;
 import java.util.Arrays;
 import java.util.List;
+import java.util.concurrent.ForkJoinPool;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
@@ -63,7 +64,7 @@ public abstract class AbstractTestAsyncTableScan {
 TEST_UTIL.createTable(TABLE_NAME, FAMILY, splitKeys);
 TEST_UTIL.waitTableAvailable(TABLE_NAME);
ASYNC_CONN = ConnectionFactory.createAsyncConnection(TEST_UTIL.getConfiguration()).get();
-ASYNC_CONN.getRawTable(TABLE_NAME).putAll(IntStream.range(0, COUNT)
+

[2/3] hbase git commit: HBASE-19251 Merge RawAsyncTable and AsyncTable

2017-11-15 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/54827cf6/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableBase.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableBase.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableBase.java
deleted file mode 100644
index 7d24c4f..000
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableBase.java
+++ /dev/null
@@ -1,414 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.client;
-
-import static java.util.stream.Collectors.toList;
-import static org.apache.hadoop.hbase.client.ConnectionUtils.allOf;
-import static org.apache.hadoop.hbase.client.ConnectionUtils.toCheckExistenceOnly;
-
-import org.apache.hadoop.hbase.CompareOperator;
-import org.apache.hadoop.hbase.shaded.com.google.common.base.Preconditions;
-
-import java.util.List;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.hadoop.hbase.util.Bytes;
-
-/**
- * The base interface for asynchronous version of Table. Obtain an instance from a
- * {@link AsyncConnection}.
- * <p>
- * The implementation is required to be thread safe.
- * <p>
- * Usually the implementation will not throw any exception directly. You need to get the exception
- * from the returned {@link CompletableFuture}.
- * @since 2.0.0
- */
-@InterfaceAudience.Public
-public interface AsyncTableBase {
-
-  /**
-   * Gets the fully qualified table name instance of this table.
-   */
-  TableName getName();
-
-  /**
-   * Returns the {@link org.apache.hadoop.conf.Configuration} object used by this instance.
-   * <p>
-   * The reference returned is not a copy, so any change made to it will affect this instance.
-   */
-  Configuration getConfiguration();
-
-  /**
-   * Get timeout of each rpc request in this Table instance. It will be overridden by a more
-   * specific rpc timeout config such as readRpcTimeout or writeRpcTimeout.
-   * @see #getReadRpcTimeout(TimeUnit)
-   * @see #getWriteRpcTimeout(TimeUnit)
-   * @param unit the unit of time the timeout to be represented in
-   * @return rpc timeout in the specified time unit
-   */
-  long getRpcTimeout(TimeUnit unit);
-
-  /**
-   * Get timeout of each rpc read request in this Table instance.
-   * @param unit the unit of time the timeout to be represented in
-   * @return read rpc timeout in the specified time unit
-   */
-  long getReadRpcTimeout(TimeUnit unit);
-
-  /**
-   * Get timeout of each rpc write request in this Table instance.
-   * @param unit the unit of time the timeout to be represented in
-   * @return write rpc timeout in the specified time unit
-   */
-  long getWriteRpcTimeout(TimeUnit unit);
-
-  /**
-   * Get timeout of each operation in Table instance.
-   * @param unit the unit of time the timeout to be represented in
-   * @return operation rpc timeout in the specified time unit
-   */
-  long getOperationTimeout(TimeUnit unit);
-
-  /**
-   * Get the timeout of a single operation in a scan. It works like operation timeout for other
-   * operations.
-   * @param unit the unit of time the timeout to be represented in
-   * @return scan rpc timeout in the specified time unit
-   */
-  long getScanTimeout(TimeUnit unit);
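The timeout getters above all share one contract: the table holds a single canonical duration and converts it to whatever `TimeUnit` the caller asks for. A minimal sketch of that contract, using `java.util.concurrent.TimeUnit` (the class and field names here are illustrative, not the HBase implementation):

```java
import java.util.concurrent.TimeUnit;

public class TimeoutSketch {
  // Store one canonical representation; convert on the way out.
  private final long rpcTimeoutNs;

  public TimeoutSketch(long timeout, TimeUnit unit) {
    this.rpcTimeoutNs = unit.toNanos(timeout);
  }

  // Mirrors the getRpcTimeout(TimeUnit) shape: the caller picks the unit,
  // TimeUnit.convert does the (truncating) arithmetic.
  public long getRpcTimeout(TimeUnit unit) {
    return unit.convert(rpcTimeoutNs, TimeUnit.NANOSECONDS);
  }

  public static void main(String[] args) {
    TimeoutSketch t = new TimeoutSketch(2, TimeUnit.SECONDS);
    System.out.println(t.getRpcTimeout(TimeUnit.MILLISECONDS)); // 2000
    System.out.println(t.getRpcTimeout(TimeUnit.SECONDS));      // 2
  }
}
```

Note that `TimeUnit.convert` truncates toward zero, so asking for a coarser unit than the stored one can lose precision; that is inherent to the getter signature, not a bug in any one implementation.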
-
-  /**
-   * Test for the existence of columns in the table, as specified by the Get.
-   * <p>
-   * This will return true if the Get matches one or more keys, false if not.
-   * <p>
-   * This is a server-side call so it prevents any data from being transferred to the client.
-   * @return true if the specified Get matches one or more keys, false if not. The return value will
-   * be wrapped by a {@link CompletableFuture}.
-   */
-  default CompletableFuture<Boolean> exists(Get get) {
-    return get(toCheckExistenceOnly(get)).thenApply(r -> r.getExists());
-  }
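The deleted `exists(Get)` above is a thin default method: it rewrites the `Get` into a check-existence-only request and maps the asynchronous `Result` to a boolean with `thenApply`. The self-contained sketch below reproduces that composition with plain `CompletableFuture`; `MiniResult` and `fetch(...)` are stand-ins, not HBase types.

```java
import java.util.concurrent.CompletableFuture;

public class ExistsSketch {

  // Stand-in for HBase's Result; only the existence bit matters here.
  static class MiniResult {
    final boolean exists;
    MiniResult(boolean exists) { this.exists = exists; }
  }

  // Stand-in for get(toCheckExistenceOnly(get)): the server would answer with
  // just an existence flag, so no cell data crosses the wire.
  static CompletableFuture<MiniResult> fetch(String row) {
    return CompletableFuture.completedFuture(new MiniResult(!row.isEmpty()));
  }

  // Same shape as the default method: derive the boolean future from the
  // result future with thenApply instead of blocking for the result.
  public static CompletableFuture<Boolean> exists(String row) {
    return fetch(row).thenApply(r -> r.exists);
  }

  public static void main(String[] args) {
    System.out.println(exists("row-1").join()); // true
    System.out.println(exists("").join());      // false
  }
}
```

The design point carried over from the interface: callers never see an exception thrown directly; failures travel inside the returned `CompletableFuture`.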
-
-  /**
-   * Extracts certain cells from a given row.
-   * @param get The object that specifies what data to fetch and from 

[2/2] hbase git commit: HBASE-19009 implement modifyTable and enable/disableTableReplication for AsyncAdmin

2017-11-15 Thread zghao
HBASE-19009 implement modifyTable and enable/disableTableReplication for 
AsyncAdmin


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d885e223
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d885e223
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d885e223

Branch: refs/heads/branch-2
Commit: d885e2232df6ac4c65b3a87eb45780b8fff60b91
Parents: fb79e9d
Author: Guanghao Zhang 
Authored: Sun Nov 12 20:16:20 2017 +0800
Committer: Guanghao Zhang 
Committed: Thu Nov 16 07:19:34 2017 +0800

--
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  18 +
 .../hadoop/hbase/client/AsyncHBaseAdmin.java|  17 +-
 .../hbase/client/ColumnFamilyDescriptor.java|  27 ++
 .../apache/hadoop/hbase/client/HBaseAdmin.java  | 220 ++---
 .../hadoop/hbase/client/RawAsyncHBaseAdmin.java | 313 -
 .../hadoop/hbase/client/TableDescriptor.java|  51 +-
 .../hbase/client/TableDescriptorBuilder.java|  21 +-
 .../client/replication/ReplicationAdmin.java|   8 +-
 .../replication/ReplicationPeerConfigUtil.java  | 468 +++
 .../replication/ReplicationSerDeHelper.java | 437 -
 .../replication/ReplicationPeerConfig.java  |  20 +
 .../hbase/shaded/protobuf/RequestConverter.java |   6 +-
 .../replication/ReplicationPeerZKImpl.java  |   6 +-
 .../replication/ReplicationPeersZKImpl.java |  14 +-
 .../hadoop/hbase/master/MasterRpcServices.java  |  10 +-
 .../replication/master/TableCFsUpdater.java |  14 +-
 .../client/TestAsyncReplicationAdminApi.java|   2 -
 ...estAsyncReplicationAdminApiWithClusters.java | 242 ++
 .../replication/TestReplicationAdmin.java   |  16 +-
 .../replication/TestMasterReplication.java  |   4 +-
 .../replication/TestPerTableCFReplication.java  |  62 +--
 .../replication/master/TestTableCFsUpdater.java |  27 +-
 22 files changed, 1261 insertions(+), 742 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d885e223/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
index f251a8f..722e8b5 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
@@ -141,6 +141,12 @@ public interface AsyncAdmin {
*/
  CompletableFuture<Void> createTable(TableDescriptor desc, byte[][] splitKeys);
 
+  /**
+   * Modify an existing table, more IRB friendly version.
+   * @param desc modified description of the table
+   */
+  CompletableFuture<Void> modifyTable(TableDescriptor desc);
+
   /**
* Deletes a table.
* @param tableName name of table to delete
@@ -553,6 +559,18 @@ public interface AsyncAdmin {
  CompletableFuture<List<TableCFs>> listReplicatedTableCFs();
 
   /**
+   * Enable a table's replication switch.
+   * @param tableName name of the table
+   */
+  CompletableFuture<Void> enableTableReplication(TableName tableName);
+
+  /**
+   * Disable a table's replication switch.
+   * @param tableName name of the table
+   */
+  CompletableFuture<Void> disableTableReplication(TableName tableName);
+
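Both new methods return a future that completes when the switch has been flipped, so callers sequence them with `thenCompose` rather than blocking between calls. The sketch below shows that hypothetical usage shape; `ToyAdmin` is a stand-in that just records calls, whereas the real implementations walk the table's column families and replication peer configuration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ReplicationToggleSketch {

  // Stand-in admin: records which replication switches were flipped, in order.
  public static class ToyAdmin {
    final List<String> log = new ArrayList<>();

    CompletableFuture<Void> enableTableReplication(String table) {
      return CompletableFuture.runAsync(() -> log.add("enable:" + table));
    }

    CompletableFuture<Void> disableTableReplication(String table) {
      return CompletableFuture.runAsync(() -> log.add("disable:" + table));
    }
  }

  // Chain the two async calls: the disable starts only after the enable's
  // future completes; join() blocks only at the edge (e.g. in a test).
  public static List<String> toggle(ToyAdmin admin, String table) {
    admin.enableTableReplication(table)
        .thenCompose(v -> admin.disableTableReplication(table))
        .join();
    return admin.log;
  }

  public static void main(String[] args) {
    System.out.println(toggle(new ToyAdmin(), "t1")); // [enable:t1, disable:t1]
  }
}
```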
+  /**
   * Take a snapshot for the given table. If the table is enabled, a FLUSH-type snapshot will be
   * taken. If the table is disabled, an offline snapshot is taken. Snapshots are considered unique
   * based on the name of the snapshot. Attempts to take a snapshot with the same name (even

http://git-wip-us.apache.org/repos/asf/hbase/blob/d885e223/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
index 250a38c..5a20291 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
@@ -128,6 +128,11 @@ public class AsyncHBaseAdmin implements AsyncAdmin {
   }
 
   @Override
+  public CompletableFuture<Void> modifyTable(TableDescriptor desc) {
+    return wrap(rawAdmin.modifyTable(desc));
+  }
+
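AsyncHBaseAdmin is a thin facade over RawAsyncHBaseAdmin: each method delegates and passes the raw future through `wrap(...)`. A plausible sketch of that pattern is re-completing the future on a caller-facing thread pool so user callbacks never run on the client's internal I/O threads; the helper below is illustrative, not the actual HBase `wrap` implementation.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WrapSketch {

  // Re-complete the raw future on the given pool: whatever thread completes
  // `raw`, downstream callbacks attached to the returned future fire from a
  // pool task instead.
  public static <T> CompletableFuture<T> wrap(CompletableFuture<T> raw, ExecutorService pool) {
    CompletableFuture<T> wrapped = new CompletableFuture<>();
    raw.whenComplete((value, error) -> pool.execute(() -> {
      if (error != null) {
        wrapped.completeExceptionally(error); // propagate failures unchanged
      } else {
        wrapped.complete(value);
      }
    }));
    return wrapped;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor(r -> new Thread(r, "admin-pool"));
    CompletableFuture<String> raw = CompletableFuture.completedFuture("ok");
    System.out.println(wrap(raw, pool).get()); // ok
    pool.shutdown();
  }
}
```

Keeping the delegation one line per method, as in the diff above, means the facade adds no logic of its own beyond the thread handoff.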
+  @Override
  public CompletableFuture<Void> deleteTable(TableName tableName) {
    return wrap(rawAdmin.deleteTable(tableName));
   }
@@ -420,6 +425,16 @@ public class AsyncHBaseAdmin implements AsyncAdmin {
   }
 
   @Override
+  public CompletableFuture<Void> enableTableReplication(TableName tableName) {
+return 

hbase git commit: HBASE-18357 Enable disabled tests in TestHCM that were disabled by Proc-V2 AM in HBASE-14614

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/branch-2 c5ad80175 -> e063b231d


HBASE-18357 Enable disabled tests in TestHCM that were disabled by Proc-V2 AM 
in HBASE-14614

Restore testRegionCaching and testMulti to working state (required
fixing move procedure and looking for a new exception).

testClusterStatus is broken because multicast is broken.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e063b231
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e063b231
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e063b231

Branch: refs/heads/branch-2
Commit: e063b231da4f714f37dc3d3dfc2e10ca7652c894
Parents: c5ad801
Author: Michael Stack 
Authored: Wed Nov 15 10:18:08 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 18:40:00 2017 -0800

--
 .../master/assignment/MoveRegionProcedure.java  |   3 +-
 .../hbase/client/TestDropTimeoutRequest.java| 133 +++
 .../org/apache/hadoop/hbase/client/TestHCM.java |  76 ---
 3 files changed, 159 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e063b231/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
index 624806a..4caed28 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
@@ -65,7 +65,8 @@ public class MoveRegionProcedure extends AbstractStateMachineRegionProcedure

http://git-wip-us.apache.org/repos/asf/hbase/blob/e063b231/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestDropTimeoutRequest.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestDropTimeoutRequest.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestDropTimeoutRequest.java
new file mode 100644
index 000..46aa72f
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestDropTimeoutRequest.java
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.CategoryBasedTimeout;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.RegionObserver;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+import org.junit.rules.TestRule;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Test a drop timeout request.
+ * This test used to be in TestHCM but it has particular requirements -- i.e. one handler only --
+ * so run it apart from the rest of TestHCM.
+ */
+@Category({MediumTests.class})
+public class TestDropTimeoutRequest {
+  @Rule
+  public final TestRule timeout = CategoryBasedTimeout.builder()
+  .withTimeout(this.getClass())
+  

hbase git commit: HBASE-18357 Enable disabled tests in TestHCM that were disabled by Proc-V2 AM in HBASE-14614

2017-11-15 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master d4babbf06 -> 3a4655019


HBASE-18357 Enable disabled tests in TestHCM that were disabled by Proc-V2 AM 
in HBASE-14614

Restore testRegionCaching and testMulti to working state (required
fixing move procedure and looking for a new exception).

testClusterStatus is broken because multicast is broken.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3a465501
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3a465501
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3a465501

Branch: refs/heads/master
Commit: 3a4655019dee68d4d0d18726f12b33fefbce078d
Parents: d4babbf
Author: Michael Stack 
Authored: Wed Nov 15 10:18:08 2017 -0800
Committer: Michael Stack 
Committed: Wed Nov 15 18:39:28 2017 -0800

--
 .../master/assignment/MoveRegionProcedure.java  |   3 +-
 .../hbase/client/TestDropTimeoutRequest.java| 133 +++
 .../org/apache/hadoop/hbase/client/TestHCM.java |  76 ---
 3 files changed, 159 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3a465501/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
index 624806a..4caed28 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
@@ -65,7 +65,8 @@ public class MoveRegionProcedure extends AbstractStateMachineRegionProcedure

http://git-wip-us.apache.org/repos/asf/hbase/blob/3a465501/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestDropTimeoutRequest.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestDropTimeoutRequest.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestDropTimeoutRequest.java
new file mode 100644
index 000..46aa72f
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestDropTimeoutRequest.java
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.CategoryBasedTimeout;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.RegionObserver;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+import org.junit.rules.TestRule;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Test a drop timeout request.
+ * This test used to be in TestHCM but it has particular requirements -- i.e. one handler only --
+ * so run it apart from the rest of TestHCM.
+ */
+@Category({MediumTests.class})
+public class TestDropTimeoutRequest {
+  @Rule
+  public final TestRule timeout = CategoryBasedTimeout.builder()
+  .withTimeout(this.getClass())
+  

[2/2] hbase git commit: HBASE-19248 Move tests that need to look at Connection internals to test of said internals.

2017-11-15 Thread busbey
HBASE-19248 Move tests that need to look at Connection internals to test of 
said internals.

Signed-off-by: zhangduo 
Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9c85d001
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9c85d001
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9c85d001

Branch: refs/heads/branch-2
Commit: 9c85d0017f1452f266253d64fde8d513eb571f75
Parents: a1d86d9
Author: Sean Busbey 
Authored: Mon Nov 13 18:52:33 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 08:40:03 2017 -0600

--
 .../hadoop/hbase/client/ZKAsyncRegistry.java|  10 +-
 .../org/apache/hadoop/hbase/TestZooKeeper.java  | 121 ---
 .../hbase/client/TestZKAsyncRegistry.java   |  22 
 3 files changed, 30 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9c85d001/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java
index fedd527..e36de01 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java
@@ -44,6 +44,9 @@ import org.apache.hadoop.hbase.RegionLocations;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.shaded.com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ZooKeeperProtos;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.Threads;
 import org.apache.hadoop.hbase.zookeeper.ZKConfig;
@@ -51,8 +54,6 @@ import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.zookeeper.data.Stat;
 
-import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.ZooKeeperProtos;
 
 /**
  * Fetch the registry data from zookeeper.
@@ -115,6 +116,11 @@ class ZKAsyncRegistry implements AsyncRegistry {
    return exec(zk.getData(), znodePaths.clusterIdZNode, ZKAsyncRegistry::getClusterId);
   }
 
+  @VisibleForTesting
+  CuratorFramework getCuratorFramework() {
+return zk;
+  }
+
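The accessor added above is package-private and marked `@VisibleForTesting`: tests in the same package can reach the internal Curator client without widening the class's public API. A minimal sketch of the pattern (the `RegistrySketch`/`Client` names are hypothetical, not HBase classes):

```java
public class RegistrySketch {

  // Internal collaborator the public API never exposes.
  static class Client {
    boolean isStarted() { return true; }
  }

  private final Client client = new Client();

  // Package-private on purpose: visible only to tests in this package.
  // In Guava-using code this would also carry @VisibleForTesting, which is
  // purely documentation for readers and static-analysis tools.
  Client getClient() {
    return client;
  }

  public static void main(String[] args) {
    System.out.println(new RegistrySketch().getClient().isStarted()); // true
  }
}
```

The trade-off: the internal type leaks into the test's compile scope, but the production API surface stays unchanged, which is exactly why the accessor here replaced the reflection-based lookups deleted from TestZooKeeper below.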
  private static ZooKeeperProtos.MetaRegionServer getMetaProto(CuratorEvent event)
   throws IOException {
 byte[] data = event.getData();

http://git-wip-us.apache.org/repos/asf/hbase/blob/9c85d001/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
index f75c7a4..d546d5d 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
@@ -26,8 +26,6 @@ import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.io.IOException;
-import java.lang.reflect.InvocationTargetException;
-import java.lang.reflect.Method;
 import java.util.List;
 import java.util.Map;
 
@@ -35,9 +33,6 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.RegionInfo;
 import org.apache.hadoop.hbase.client.Result;
@@ -61,15 +56,12 @@ import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.ZooDefs;
 import org.apache.zookeeper.ZooKeeper;
-import org.apache.zookeeper.ZooKeeper.States;
 import org.apache.zookeeper.data.ACL;
 import org.apache.zookeeper.data.Stat;
 import org.junit.After;
 import org.junit.AfterClass;
-import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
-import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -131,92 +123,6 @@ public class TestZooKeeper {
 }
   }

[1/2] hbase git commit: HBASE-19248 Move tests that need to look at Connection internals to test of said internals.

2017-11-15 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/branch-2 a1d86d90b -> 9c85d0017
  refs/heads/master 7d7048744 -> df98d6848


HBASE-19248 Move tests that need to look at Connection internals to test of 
said internals.

Signed-off-by: zhangduo 
Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/df98d684
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/df98d684
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/df98d684

Branch: refs/heads/master
Commit: df98d6848f2848579c1893a42132dee3cc5d907d
Parents: 7d70487
Author: Sean Busbey 
Authored: Mon Nov 13 18:52:33 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 08:38:12 2017 -0600

--
 .../hadoop/hbase/client/ZKAsyncRegistry.java|  10 +-
 .../org/apache/hadoop/hbase/TestZooKeeper.java  | 121 ---
 .../hbase/client/TestZKAsyncRegistry.java   |  22 
 3 files changed, 30 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/df98d684/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java
index fedd527..e36de01 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZKAsyncRegistry.java
@@ -44,6 +44,9 @@ import org.apache.hadoop.hbase.RegionLocations;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.shaded.com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ZooKeeperProtos;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.Threads;
 import org.apache.hadoop.hbase.zookeeper.ZKConfig;
@@ -51,8 +54,6 @@ import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.zookeeper.data.Stat;
 
-import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.ZooKeeperProtos;
 
 /**
  * Fetch the registry data from zookeeper.
@@ -115,6 +116,11 @@ class ZKAsyncRegistry implements AsyncRegistry {
    return exec(zk.getData(), znodePaths.clusterIdZNode, ZKAsyncRegistry::getClusterId);
   }
 
+  @VisibleForTesting
+  CuratorFramework getCuratorFramework() {
+return zk;
+  }
+
  private static ZooKeeperProtos.MetaRegionServer getMetaProto(CuratorEvent event)
   throws IOException {
 byte[] data = event.getData();

http://git-wip-us.apache.org/repos/asf/hbase/blob/df98d684/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
index f75c7a4..d546d5d 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
@@ -26,8 +26,6 @@ import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.io.IOException;
-import java.lang.reflect.InvocationTargetException;
-import java.lang.reflect.Method;
 import java.util.List;
 import java.util.Map;
 
@@ -35,9 +33,6 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.RegionInfo;
 import org.apache.hadoop.hbase.client.Result;
@@ -61,15 +56,12 @@ import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.ZooDefs;
 import org.apache.zookeeper.ZooKeeper;
-import org.apache.zookeeper.ZooKeeper.States;
 import org.apache.zookeeper.data.ACL;
 import org.apache.zookeeper.data.Stat;
 import org.junit.After;
 import org.junit.AfterClass;
-import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
-import org.junit.Ignore;
 import org.junit.Rule;
 import 

[43/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html 
b/apidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html
index 8cdf469..680dcc7 100644
--- a/apidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html
+++ b/apidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html
@@ -129,13 +129,13 @@
 
 
 Bytes
-HColumnDescriptor.getValue(Byteskey)
+HTableDescriptor.getValue(Byteskey)
 Deprecated.
 
 
 
 Bytes
-HTableDescriptor.getValue(Byteskey)
+HColumnDescriptor.getValue(Byteskey)
 Deprecated.
 
 
@@ -150,25 +150,25 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
-HColumnDescriptor.getValues()
+HTableDescriptor.getValues()
 Deprecated.
 
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
-HColumnDescriptor.getValues()
+HTableDescriptor.getValues()
 Deprecated.
 
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
-HTableDescriptor.getValues()
+HColumnDescriptor.getValues()
 Deprecated.
 
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
-HTableDescriptor.getValues()
+HColumnDescriptor.getValues()
 Deprecated.
 
 
@@ -183,13 +183,13 @@
 
 
 Bytes
-HColumnDescriptor.getValue(Byteskey)
+HTableDescriptor.getValue(Byteskey)
 Deprecated.
 
 
 
 Bytes
-HTableDescriptor.getValue(Byteskey)
+HColumnDescriptor.getValue(Byteskey)
 Deprecated.
 
 
@@ -236,13 +236,13 @@
 
 
 Bytes
-TableDescriptor.getValue(Byteskey)
-Getter for accessing the metadata associated with the 
key.
-
+ColumnFamilyDescriptor.getValue(Byteskey)
 
 
 Bytes
-ColumnFamilyDescriptor.getValue(Byteskey)
+TableDescriptor.getValue(Byteskey)
+Getter for accessing the metadata associated with the 
key.
+
 
 
 
@@ -255,14 +255,6 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
-TableDescriptor.getValues()
-
-
-http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
-TableDescriptor.getValues()
-
-
-http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
 ColumnFamilyDescriptor.getValues()
 It clone all bytes of all elements.
 
@@ -273,6 +265,14 @@
 It clone all bytes of all elements.
 
 
+
+http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
+TableDescriptor.getValues()
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapBytes,Bytes
+TableDescriptor.getValues()
+
 
 
 
@@ -284,13 +284,13 @@
 
 
 Bytes
-TableDescriptor.getValue(Byteskey)
-Getter for accessing the metadata associated with the 
key.
-
+ColumnFamilyDescriptor.getValue(Byteskey)
 
 
 Bytes
-ColumnFamilyDescriptor.getValue(Byteskey)
+TableDescriptor.getValue(Byteskey)
+Getter for accessing the metadata associated with the 
key.
+
 
 
 TableDescriptorBuilder

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html 
b/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html
index a03450e..f77ee9b 100644
--- a/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html
+++ b/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html
@@ -112,15 +112,15 @@
 
 
 protected Order
-RawBytes.order
+RawString.order
 
 
 protected Order
-OrderedBytesBase.order
+RawBytes.order
 
 
 protected Order
-RawString.order
+OrderedBytesBase.order
 
 
 
@@ -133,7 +133,7 @@
 
 
 Order
-RawBytes.getOrder()
+RawByte.getOrder()
 
 
 Order
@@ -141,66 +141,66 @@
 
 
 Order
-RawShort.getOrder()
+RawFloat.getOrder()
 
 
 Order
-TerminatedWrapper.getOrder()
+PBType.getOrder()
 
 
 Order
-OrderedBytesBase.getOrder()
+RawInteger.getOrder()
 
 
 Order
-RawFloat.getOrder()
+DataType.getOrder()
+Retrieve the sort Order imposed by this data type, 
or null when
+ natural ordering is not preserved.
+
 
 
 Order
-Union2.getOrder()
+RawLong.getOrder()
 
 
 Order
-Struct.getOrder()
+RawShort.getOrder()
 
 
 Order
-RawInteger.getOrder()
+RawString.getOrder()
 
 
 Order
-PBType.getOrder()
+RawBytes.getOrder()
 
 
 Order
-Union3.getOrder()
+Struct.getOrder()
 
 
 Order
-RawString.getOrder()
+Union3.getOrder()
 
 
 Order
-RawByte.getOrder()
+RawDouble.getOrder()
 
 
 Order

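The Order class-use listing above centers on `hbase.util.Order` and `DataType.getOrder()`, which reports the sort order a type's byte encoding preserves, or null when natural ordering is not preserved. A self-contained sketch of that contract follows — the names here are simplified stand-ins, not the real `org.apache.hadoop.hbase.types` classes (the real `RawInteger`, for instance, also flips the sign bit so negative values order correctly):

```java
public class OrderSketch {
    enum Order { ASCENDING, DESCENDING }

    interface DataType<T> {
        Order getOrder();      // null => encoding does not preserve ordering
        byte[] encode(T value);
    }

    // Big-endian fixed-width encoding preserves ascending byte order
    // for non-negative ints (simplified; no sign-bit handling here).
    static class RawNonNegativeInt implements DataType<Integer> {
        public Order getOrder() { return Order.ASCENDING; }
        public byte[] encode(Integer v) {
            return new byte[] { (byte)(v >>> 24), (byte)(v >>> 16),
                                (byte)(v >>> 8), (byte)(v & 0xFF) };
        }
    }

    // Unsigned lexicographic comparison, as an HBase byte comparator does.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        DataType<Integer> t = new RawNonNegativeInt();
        System.out.println(t.getOrder());                                   // ASCENDING
        System.out.println(compareBytes(t.encode(3), t.encode(200)) < 0);   // true
    }
}
```

The point of `getOrder()` returning null rather than throwing is that callers composing types (Struct, Union2, Union3 in the listing) can detect when ordering is not preserved.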
[36/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/index-all.html
--
diff --git a/devapidocs/index-all.html b/devapidocs/index-all.html
index 1c93364..bb59007 100644
--- a/devapidocs/index-all.html
+++ b/devapidocs/index-all.html
@@ -16710,7 +16710,7 @@
 
 context
 - Variable in class org.apache.hadoop.hbase.regionserver.HRegion.RowLockImpl
 
-Context(Configuration,
 FileSystem, String, UUID, ReplicationPeer, MetricsSource, TableDescriptors, 
Abortable) - Constructor for class 
org.apache.hadoop.hbase.replication.ReplicationEndpoint.Context
+Context(Configuration,
 Configuration, FileSystem, String, UUID, ReplicationPeer, MetricsSource, 
TableDescriptors, Abortable) - Constructor for class 
org.apache.hadoop.hbase.replication.ReplicationEndpoint.Context
 
 CONTEXT
 - Static variable in interface org.apache.hadoop.hbase.rest.MetricsRESTSource
 
@@ -25400,6 +25400,8 @@
 Drops the memstore contents after replaying a flush 
descriptor or region open event replay
  if the memstore edits have seqNums smaller than the given seq id
 
+dropOnDeletedTables
 - Variable in class org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
+
 DroppedSnapshotException - Exception in org.apache.hadoop.hbase
 
 Thrown during flush if the possibility snapshot content was 
not properly
@@ -29742,6 +29744,8 @@
 
 FilterBase()
 - Constructor for class org.apache.hadoop.hbase.filter.FilterBase
 
+filterBatches(ListListWAL.Entry,
 TableName) - Method in class 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
+
 filterBulk(ArrayListHStoreFile)
 - Method in class org.apache.hadoop.hbase.regionserver.compactions.SortedCompactionPolicy
 
 filterByPrefix(ListString,
 String...) - Static method in class 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper
@@ -39366,6 +39370,8 @@
 
 getLoadStatistics()
 - Method in class org.apache.hadoop.hbase.regionserver.HRegion
 
+getLocalConfiguration()
 - Method in class org.apache.hadoop.hbase.replication.ReplicationEndpoint.Context
+
 getLocalFs(Configuration)
 - Static method in class org.apache.hadoop.hbase.fs.HFileSystem
 
 Wrap a LocalFileSystem within a HFileSystem.
@@ -45112,8 +45118,6 @@
 This method should return any additional data that is 
needed on the
  server side to construct the ColumnInterpreter.
 
-getRequestData()
 - Method in class org.apache.hadoop.hbase.regionserver.MultiRowMutationProcessor
-
 getRequestData()
 - Method in interface org.apache.hadoop.hbase.regionserver.RowProcessor
 
 This method should return any additional data that is 
needed on the
@@ -45235,8 +45239,6 @@
 
 getResult()
 - Method in class org.apache.hadoop.hbase.regionserver.HRegion.PrepareFlushResult
 
-getResult()
 - Method in class org.apache.hadoop.hbase.regionserver.MultiRowMutationProcessor
-
 getResult()
 - Method in interface org.apache.hadoop.hbase.regionserver.RowProcessor
 
 Obtain the processing result.
@@ -45613,8 +45615,6 @@
 
 getRows(ByteBuffer,
 ListByteBuffer, MapByteBuffer, ByteBuffer) - Method 
in class org.apache.hadoop.hbase.thrift.ThriftServerRunner.HBaseHandler
 
-getRowsToLock()
 - Method in class org.apache.hadoop.hbase.regionserver.MultiRowMutationProcessor
-
 getRowsToLock()
 - Method in interface org.apache.hadoop.hbase.regionserver.RowProcessor
 
 Rows to lock while operation.
@@ -57086,8 +57086,6 @@
 
 Initialize this region.
 
-initialize(MultiRowMutationProtos.MultiRowMutationProcessorRequest)
 - Method in class org.apache.hadoop.hbase.regionserver.MultiRowMutationProcessor
-
 initialize(Server,
 FileSystem, Path, Path, WALFileLengthProvider) - Method in 
interface org.apache.hadoop.hbase.regionserver.ReplicationService
 
 Initializes the replication service object.
@@ -63886,6 +63884,10 @@
 
 localBuffer
 - Static variable in class org.apache.hadoop.hbase.client.Result
 
+localConf
 - Variable in class org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint
+
+localConf
 - Variable in class org.apache.hadoop.hbase.replication.ReplicationEndpoint.Context
+
 localDir
 - Variable in class org.apache.hadoop.hbase.util.DynamicClassLoader
 
 locale
 - Variable in class org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.SimpleReporter.Builder
@@ -70486,8 +70488,6 @@
 
 This one can be update
 
-miniBatch
 - Variable in class org.apache.hadoop.hbase.regionserver.MultiRowMutationProcessor
-
 MiniBatchOperationInProgressT - Class in org.apache.hadoop.hbase.regionserver
 
 Wraps together the mutations which are applied as a batch 
to the region and their operation
@@ -71556,17 +71556,11 @@
 MultiRowMutationEndpoint - Class in org.apache.hadoop.hbase.coprocessor
 
 This class demonstrates how to implement atomic multi row 
transactions using
- HRegion.mutateRowsWithLocks(java.util.Collection,
 java.util.Collection)
+ HRegion.mutateRowsWithLocks(Collection,
 Collection, long, long)
  and 

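The index entries above show `ReplicationEndpoint.Context` gaining a second `Configuration`: the constructor changes from `Context(Configuration, FileSystem, String, UUID, ...)` to `Context(Configuration, Configuration, FileSystem, String, UUID, ...)`, with the new `localConf` field exposed through `getLocalConfiguration()`. A minimal stand-in for that shape — plain `Map<String,String>` replaces Hadoop's `Configuration` here purely to keep the sketch self-contained:

```java
import java.util.Map;

public class ContextSketch {
    static class Context {
        private final Map<String, String> localConf; // the field added in the diff
        private final Map<String, String> conf;      // peer-cluster-facing configuration
        Context(Map<String, String> localConf, Map<String, String> conf) {
            this.localConf = localConf;
            this.conf = conf;
        }
        Map<String, String> getLocalConfiguration() { return localConf; }
        Map<String, String> getConfiguration() { return conf; }
    }

    public static void main(String[] args) {
        Context ctx = new Context(Map.of("fs", "local"), Map.of("fs", "peer"));
        System.out.println(ctx.getLocalConfiguration().get("fs")); // local
    }
}
```

Carrying both configurations lets a replication endpoint (such as `HBaseInterClusterReplicationEndpoint`, which the index shows picking up its own `localConf` field) distinguish source-cluster settings from peer-cluster settings.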
[23/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/Durability.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
index 9590a3f..26f18b4 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
@@ -215,14 +215,14 @@ service.
 
 
 Durability
-TableDescriptor.getDurability()
-Returns the durability setting for the table.
+Mutation.getDurability()
+Get the current durability
 
 
 
 Durability
-Mutation.getDurability()
-Get the current durability
+TableDescriptor.getDurability()
+Returns the durability setting for the table.
 
 
 
@@ -249,7 +249,7 @@ the order they are declared.
 
 
 long
-HTable.incrementColumnValue(byte[]row,
+Table.incrementColumnValue(byte[]row,
 byte[]family,
 byte[]qualifier,
 longamount,
@@ -259,7 +259,7 @@ the order they are declared.
 
 
 long
-Table.incrementColumnValue(byte[]row,
+HTable.incrementColumnValue(byte[]row,
 byte[]family,
 byte[]qualifier,
 longamount,
@@ -278,37 +278,37 @@ the order they are declared.
 
 
 
-Delete
-Delete.setDurability(Durabilityd)
-
-
 TableDescriptorBuilder
 TableDescriptorBuilder.setDurability(Durabilitydurability)
 
-
+
 TableDescriptorBuilder.ModifyableTableDescriptor
 TableDescriptorBuilder.ModifyableTableDescriptor.setDurability(Durabilitydurability)
 Sets the Durability 
setting for the table.
 
 
-
-Increment
-Increment.setDurability(Durabilityd)
-
 
-Put
-Put.setDurability(Durabilityd)
-
-
 Append
 Append.setDurability(Durabilityd)
 
-
+
 Mutation
 Mutation.setDurability(Durabilityd)
 Set the durability for this mutation
 
 
+
+Delete
+Delete.setDurability(Durabilityd)
+
+
+Increment
+Increment.setDurability(Durabilityd)
+
+
+Put
+Put.setDurability(Durabilityd)
+
 
 
 
@@ -444,15 +444,11 @@ the order they are declared.
 
 
 Durability
-RowProcessor.useDurability()
-
-
-Durability
 BaseRowProcessor.useDurability()
 
-
+
 Durability
-MultiRowMutationProcessor.useDurability()
+RowProcessor.useDurability()
 
 
 

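The Durability listing shows each mutation type (Append, Delete, Increment, Put) overriding `setDurability(Durability)` to return its own type, which is what keeps fluent call chains type-safe. A stripped-down sketch of that covariant-return pattern — illustrative classes, not the real `org.apache.hadoop.hbase.client` ones:

```java
public class DurabilitySketch {
    enum Durability { USE_DEFAULT, SKIP_WAL, ASYNC_WAL, SYNC_WAL, FSYNC_WAL }

    static class Mutation {
        protected Durability durability = Durability.USE_DEFAULT;
        Mutation setDurability(Durability d) { this.durability = d; return this; }
        Durability getDurability() { return durability; }
    }

    // Covariant override: returns Put, so chained Put-specific calls compile.
    static class Put extends Mutation {
        byte[] row;
        Put(byte[] row) { this.row = row; }
        @Override Put setDurability(Durability d) { super.setDurability(d); return this; }
        Put addColumn(byte[] family, byte[] qualifier, byte[] value) { return this; }
    }

    public static void main(String[] args) {
        Put p = new Put("r1".getBytes())
            .setDurability(Durability.SKIP_WAL)   // still a Put, not a Mutation
            .addColumn("f".getBytes(), "q".getBytes(), "v".getBytes());
        System.out.println(p.getDurability());    // SKIP_WAL
    }
}
```

Without the override, `setDurability` would return the base `Mutation` type and the subsequent `addColumn` call would not compile.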
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/Get.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/Get.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/Get.html
index d421f1d..9536e81 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/Get.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/Get.html
@@ -406,13 +406,13 @@ service.
 
 
 boolean
-HTable.exists(Getget)
+Table.exists(Getget)
 Test for the existence of columns in the table, as 
specified by the Get.
 
 
 
 boolean
-Table.exists(Getget)
+HTable.exists(Getget)
 Test for the existence of columns in the table, as 
specified by the Get.
 
 
@@ -423,16 +423,6 @@ service.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureResult
-RawAsyncTableImpl.get(Getget)
-
-
-Result
-HTable.get(Getget)
-Extracts certain cells from a given row.
-
-
-
 Result
 Table.get(Getget)
 Extracts certain cells from a given row.
@@ -444,6 +434,16 @@ service.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureResult
+RawAsyncTableImpl.get(Getget)
+
+
+Result
+HTable.get(Getget)
+Extracts certain cells from a given row.
+
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureResult
 AsyncTableBase.get(Getget)
 Extracts certain cells from a given row.
 
@@ -468,14 +468,14 @@ service.
 
 
 boolean[]
-HTable.exists(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListGetgets)
-
-
-boolean[]
 Table.exists(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListGetgets)
 Test for the existence of columns in the table, as 
specified by the Gets.
 
 
+
+boolean[]
+HTable.exists(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListGetgets)
+
 
 default http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">Listhttp://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or 

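The Get listing contrasts the blocking `Table.get(Get)` and `Table.exists(Get)` with `AsyncTable`/`RawAsyncTableImpl`, whose `get` returns a `CompletableFuture<Result>`. The consumption pattern can be sketched with a plain `CompletableFuture` standing in for the table — no HBase classes or cluster involved:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncGetSketch {
    // Stand-in for AsyncTable.get(Get): completes off-thread with a "row value".
    static CompletableFuture<String> get(String row) {
        return CompletableFuture.supplyAsync(() -> "value-for-" + row);
    }

    public static void main(String[] args) {
        // Non-blocking composition, as with the real async client API...
        CompletableFuture<Integer> length = get("row1").thenApply(String::length);
        // ...with a blocking join() only at the edge, mirroring Table.get.
        System.out.println(length.join()); // 14
    }
}
```

The design difference the listing reflects: `Table` blocks the calling thread per operation, while `AsyncTable` lets callers compose continuations and defer (or avoid) blocking.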
[14/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithResult.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithResult.html
 
b/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithResult.html
index 1430cbf..a2eafc9 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithResult.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithResult.html
@@ -127,7 +127,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-public abstract class CoprocessorHost.ObserverOperationWithResultO,R
+public abstract class CoprocessorHost.ObserverOperationWithResultO,R
 extends CoprocessorHost.ObserverOperationO
 
 
@@ -266,7 +266,7 @@ extends 
 
 result
-privateR result
+privateR result
 
 
 
@@ -285,7 +285,7 @@ extends 
 
 ObserverOperationWithResult
-publicObserverOperationWithResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
+publicObserverOperationWithResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
Rresult)
 
 
@@ -297,7 +297,7 @@ extends 
 
 ObserverOperationWithResult
-publicObserverOperationWithResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
+publicObserverOperationWithResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
Rresult,
booleanbypassable)
 
@@ -310,7 +310,7 @@ extends 
 
 ObserverOperationWithResult
-publicObserverOperationWithResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
+publicObserverOperationWithResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
Rresult,
Useruser)
 
@@ -323,7 +323,7 @@ extends 
 
 ObserverOperationWithResult
-privateObserverOperationWithResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
+privateObserverOperationWithResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
 Rresult,
 Useruser,
 booleanbypassable)
@@ -345,7 +345,7 @@ extends 
 
 call
-protected abstractRcall(Oobserver)
+protected abstractRcall(Oobserver)
throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 
 Throws:
@@ -359,7 +359,7 @@ extends 
 
 getResult
-protectedRgetResult()
+protectedRgetResult()
 
 
 
@@ -368,7 +368,7 @@ extends 
 
 callObserver
-voidcallObserver()
+voidcallObserver()
throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 
 Specified by:

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithoutResult.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithoutResult.html
 
b/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithoutResult.html
index cc376cb..934349c 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithoutResult.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.ObserverOperationWithoutResult.html
@@ -131,7 +131,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-public abstract class CoprocessorHost.ObserverOperationWithoutResultO
+public abstract class CoprocessorHost.ObserverOperationWithoutResultO
 extends CoprocessorHost.ObserverOperationO
 
 
@@ -246,7 +246,7 @@ extends 
 
 ObserverOperationWithoutResult
-publicObserverOperationWithoutResult(CoprocessorHost.ObserverGetterC,OobserverGetter)
+publicObserverOperationWithoutResult(CoprocessorHost.ObserverGetterC,OobserverGetter)
 
 
 
@@ -255,7 +255,7 @@ extends 
 
 ObserverOperationWithoutResult
-publicObserverOperationWithoutResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
+publicObserverOperationWithoutResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
   Useruser)
 
 
@@ -265,7 +265,7 @@ extends 
 
 ObserverOperationWithoutResult
-publicObserverOperationWithoutResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
+publicObserverOperationWithoutResult(CoprocessorHost.ObserverGetterC,OobserverGetter,
   Useruser,
   booleanbypassable)
 
@@ -286,7 +286,7 @@ extends 
 
 call
-protected abstractvoidcall(Oobserver)
+protected abstractvoidcall(Oobserver)
   throws 

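The pages above describe `CoprocessorHost.ObserverOperationWithResult<O,R>`, which carries a private `R result` and visits observers through an abstract `R call(O observer)`. A stripped-down, self-contained version of that generic shape — the real class also handles bypassable operations and user context, both omitted here:

```java
import java.util.List;

public class ObserverOpSketch {
    // Carries a result of type R while visiting each observer of type O.
    static abstract class ObserverOperationWithResult<O, R> {
        private R result;
        ObserverOperationWithResult(R initial) { this.result = initial; }
        protected abstract R call(O observer); // one observer, returns updated result
        R getResult() { return result; }
        void callObservers(List<O> observers) {
            for (O o : observers) {
                result = call(o);
            }
        }
    }

    // Concrete operation: threads a running sum through Integer "observers".
    static class SumOp extends ObserverOperationWithResult<Integer, Integer> {
        SumOp() { super(0); }
        protected Integer call(Integer observer) { return getResult() + observer; }
    }

    public static void main(String[] args) {
        SumOp sum = new SumOp();
        sum.callObservers(List.of(1, 2, 3));
        System.out.println(sum.getResult()); // 6
    }
}
```

Threading the result through successive `call` invocations is what lets each observer in a coprocessor chain see, and possibly replace, the value produced by the previous one.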
[39/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/checkstyle-aggregate.html
--
diff --git a/checkstyle-aggregate.html b/checkstyle-aggregate.html
index 9048224..b2b69bd 100644
--- a/checkstyle-aggregate.html
+++ b/checkstyle-aggregate.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Checkstyle Results
 
@@ -289,7 +289,7 @@
 3426
 0
 0
-21359
+21358
 
 Files
 
@@ -702,7 +702,7 @@
 org/apache/hadoop/hbase/PerformanceEvaluation.java
 0
 0
-32
+33
 
 org/apache/hadoop/hbase/PerformanceEvaluationCommons.java
 0
@@ -5267,7 +5267,7 @@
 org/apache/hadoop/hbase/mapred/TestIdentityTableMap.java
 0
 0
-4
+3
 
 org/apache/hadoop/hbase/mapred/TestMultiTableSnapshotInputFormat.java
 0
@@ -5447,7 +5447,7 @@
 org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
 0
 0
-5
+4
 
 org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatTestBase.java
 0
@@ -8294,1600 +8294,1600 @@
 0
 1
 
-org/apache/hadoop/hbase/regionserver/MultiRowMutationProcessor.java
-0
-0
-4
-
 org/apache/hadoop/hbase/regionserver/MultiVersionConcurrencyControl.java
 0
 0
 6
-
+
 org/apache/hadoop/hbase/regionserver/MutableOnlineRegions.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/MutableSegment.java
 0
 0
 4
-
+
 org/apache/hadoop/hbase/regionserver/NoOpHeapMemoryTuner.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/NoTagByteBufferChunkCell.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/NonLazyKeyValueScanner.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/NonReversedNonLazyKeyValueScanner.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/OOMERegionServer.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/OnlineRegions.java
 0
 0
 6
-
+
 org/apache/hadoop/hbase/regionserver/OperationStatus.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/RSDumpServlet.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/regionserver/RSRpcServices.java
 0
 0
-136
-
+132
+
 org/apache/hadoop/hbase/regionserver/RSStatusServlet.java
 0
 0
 5
-
+
 org/apache/hadoop/hbase/regionserver/Region.java
 0
 0
 31
-
+
 org/apache/hadoop/hbase/regionserver/RegionAsTable.java
 0
 0
 15
-
+
 org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
 0
 0
 80
-
+
 org/apache/hadoop/hbase/regionserver/RegionScanner.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/RegionServerAccounting.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/RegionServerCoprocessorHost.java
 0
 0
 7
-
+
 org/apache/hadoop/hbase/regionserver/RegionServerServices.java
 0
 0
 7
-
+
 org/apache/hadoop/hbase/regionserver/RegionServicesForStores.java
 0
 0
 7
-
+
 org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java
 0
 0
 7
-
+
 org/apache/hadoop/hbase/regionserver/ReplicationService.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/regionserver/ReplicationSinkService.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/regionserver/ReplicationSourceService.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/ReversedKeyValueHeap.java
 0
 0
 7
-
+
 org/apache/hadoop/hbase/regionserver/ReversedRegionScannerImpl.java
 0
 0
 6
-
+
 org/apache/hadoop/hbase/regionserver/ReversedStoreScanner.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/regionserver/RowProcessor.java
 0
 0
 6
-
+
 org/apache/hadoop/hbase/regionserver/RpcSchedulerFactory.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/regionserver/ScanInfo.java
 0
 0
 5
-
+
 org/apache/hadoop/hbase/regionserver/ScanOptions.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/regionserver/ScannerContext.java
 0
 0
 13
-
+
 org/apache/hadoop/hbase/regionserver/ScannerIdGenerator.java
 0
 0
 7
-
+
 org/apache/hadoop/hbase/regionserver/SecureBulkLoadEndpointClient.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
 0
 0
 5
-
+
 org/apache/hadoop/hbase/regionserver/Segment.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/regionserver/SegmentFactory.java
 0
 0
 12
-
+
 org/apache/hadoop/hbase/regionserver/SegmentScanner.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/regionserver/ServerNonceManager.java
 0
 0
 10
-
+
 org/apache/hadoop/hbase/regionserver/ShipperListener.java
 0
 0
 1
-
+
 org/apache/hadoop/hbase/regionserver/ShutdownHook.java
 0
 0
 7
-
+
 org/apache/hadoop/hbase/regionserver/SimpleRpcSchedulerFactory.java
 0
 0
 4
-
+
 org/apache/hadoop/hbase/regionserver/SplitLogWorker.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/regionserver/SplitRequest.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/regionserver/SteppingSplitPolicy.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/regionserver/Store.java
 0
 0
 4
-
+
 org/apache/hadoop/hbase/regionserver/StoreFileComparators.java
 0
 0
 2
-
+
 org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
 0
 0
 23
-
+
 org/apache/hadoop/hbase/regionserver/StoreFileManager.java
 0
 0
 3
-
+
 org/apache/hadoop/hbase/regionserver/StoreFileReader.java
 0
 0
 5
-
+
 org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
 0
 0
 10
-
+
 

[18/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html
index e845e0f..0038ea7 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html
@@ -214,9 +214,9 @@ service.
 
 
 
-ResultScanner
-HTable.getScanner(byte[]family)
-The underlying HTable must 
not be closed.
+default ResultScanner
+AsyncTable.getScanner(byte[]family)
+Gets a scanner on the current table for the given 
family.
 
 
 
@@ -226,16 +226,16 @@ service.
 
 
 
-default ResultScanner
-AsyncTable.getScanner(byte[]family)
-Gets a scanner on the current table for the given 
family.
+ResultScanner
+HTable.getScanner(byte[]family)
+The underlying HTable must 
not be closed.
 
 
 
-ResultScanner
-HTable.getScanner(byte[]family,
+default ResultScanner
+AsyncTable.getScanner(byte[]family,
   byte[]qualifier)
-The underlying HTable must 
not be closed.
+Gets a scanner on the current table for the given family 
and qualifier.
 
 
 
@@ -246,16 +246,16 @@ service.
 
 
 
-default ResultScanner
-AsyncTable.getScanner(byte[]family,
+ResultScanner
+HTable.getScanner(byte[]family,
   byte[]qualifier)
-Gets a scanner on the current table for the given family 
and qualifier.
+The underlying HTable must 
not be closed.
 
 
 
 ResultScanner
-HTable.getScanner(Scanscan)
-The underlying HTable must 
not be closed.
+AsyncTable.getScanner(Scanscan)
+Returns a scanner on the current table as specified by the 
Scan 
object.
 
 
 
@@ -271,8 +271,8 @@ service.
 
 
 ResultScanner
-AsyncTable.getScanner(Scanscan)
-Returns a scanner on the current table as specified by the 
Scan 
object.
+HTable.getScanner(Scanscan)
+The underlying HTable must 
not be closed.
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/RetriesExhaustedWithDetailsException.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetriesExhaustedWithDetailsException.html
 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetriesExhaustedWithDetailsException.html
index 34a7506..e52b10d 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetriesExhaustedWithDetailsException.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetriesExhaustedWithDetailsException.html
@@ -106,11 +106,11 @@
 
 
 RetriesExhaustedWithDetailsException
-AsyncRequestFutureImpl.getErrors()
+AsyncRequestFuture.getErrors()
 
 
 RetriesExhaustedWithDetailsException
-AsyncRequestFuture.getErrors()
+AsyncRequestFutureImpl.getErrors()
 
 
 (package private) RetriesExhaustedWithDetailsException

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html
index c67d003..88f3213 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html
@@ -234,36 +234,28 @@
 
 
 
-T
-RpcRetryingCallerImpl.callWithoutRetries(RetryingCallableTcallable,
-  intcallTimeout)
-
-
 T
 RpcRetryingCaller.callWithoutRetries(RetryingCallableTcallable,
   intcallTimeout)
 Call the server once only.
 
 
-
+
 T
-RpcRetryingCallerImpl.callWithRetries(RetryingCallableTcallable,
-   intcallTimeout)
+RpcRetryingCallerImpl.callWithoutRetries(RetryingCallableTcallable,
+  intcallTimeout)
 
-
+
 T
 RpcRetryingCaller.callWithRetries(RetryingCallableTcallable,
intcallTimeout)
 Retries if invocation fails.
 
 
-
-RetryingCallerInterceptorContext
-NoOpRetryingInterceptorContext.prepare(RetryingCallable?callable)
-
 
-FastFailInterceptorContext
-FastFailInterceptorContext.prepare(RetryingCallable?callable)
+T
+RpcRetryingCallerImpl.callWithRetries(RetryingCallableTcallable,
+   intcallTimeout)
 
 
 abstract RetryingCallerInterceptorContext
@@ -275,13 +267,11 @@
 
 
 RetryingCallerInterceptorContext
-NoOpRetryingInterceptorContext.prepare(RetryingCallable?callable,
-   inttries)
+NoOpRetryingInterceptorContext.prepare(RetryingCallable?callable)
 
 
 FastFailInterceptorContext
-FastFailInterceptorContext.prepare(RetryingCallable?callable,
-   inttries)
+FastFailInterceptorContext.prepare(RetryingCallable?callable)
 
 
 abstract RetryingCallerInterceptorContext
@@ -292,6 

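The RetryingCallable listing above distinguishes `RpcRetryingCaller.callWithoutRetries` ("Call the server once only.") from `callWithRetries` ("Retries if invocation fails."). A self-contained sketch of that split, with `java.util.concurrent.Callable` standing in for `RetryingCallable` and the real caller's backoff pauses and interceptors omitted for brevity:

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Call once; any failure propagates immediately.
    static <T> T callWithoutRetries(Callable<T> callable) throws Exception {
        return callable.call();
    }

    // Retry up to maxAttempts times, rethrowing the last failure.
    static <T> T callWithRetries(Callable<T> callable, int maxAttempts) throws Exception {
        Exception last = null;
        for (int tries = 0; tries < maxAttempts; tries++) {
            try {
                return callable.call();
            } catch (Exception e) {
                last = e; // a real caller would also sleep with backoff here
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // fail twice, then succeed
        Callable<String> flaky = () -> {
            if (failures[0]-- > 0) throw new IllegalStateException("transient");
            return "ok";
        };
        System.out.println(callWithoutRetries(() -> "once")); // once
        System.out.println(callWithRetries(flaky, 3));        // ok
    }
}
```

Keeping both entry points on one caller, as the listing shows, lets call sites that must not mask server state (single-shot probes) and call sites that should ride out transient failures share the same callable.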
[06/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/ipc/class-use/PriorityFunction.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/ipc/class-use/PriorityFunction.html 
b/devapidocs/org/apache/hadoop/hbase/ipc/class-use/PriorityFunction.html
index 7ba86da..eabef95 100644
--- a/devapidocs/org/apache/hadoop/hbase/ipc/class-use/PriorityFunction.html
+++ b/devapidocs/org/apache/hadoop/hbase/ipc/class-use/PriorityFunction.html
@@ -114,15 +114,15 @@
 
 
 private PriorityFunction
-RpcExecutor.priority
+SimpleRpcScheduler.priority
 
 
 private PriorityFunction
-RpcExecutor.CallPriorityComparator.priority
+RpcExecutor.priority
 
 
 private PriorityFunction
-SimpleRpcScheduler.priority
+RpcExecutor.CallPriorityComparator.priority
 
 
 
@@ -319,7 +319,7 @@
 
 
 RpcScheduler
-FifoRpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
+RpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
   PriorityFunctionpriority)
 Deprecated.
 
@@ -333,16 +333,18 @@
 
 
 RpcScheduler
-RpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
+FifoRpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
   PriorityFunctionpriority)
 Deprecated.
 
 
 
 RpcScheduler
-FifoRpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
+RpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
   PriorityFunctionpriority,
-  Abortableserver)
+  Abortableserver)
+Constructs a RpcScheduler.
+
 
 
 RpcScheduler
@@ -352,11 +354,9 @@
 
 
 RpcScheduler
-RpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
+FifoRpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
   PriorityFunctionpriority,
-  Abortableserver)
-Constructs a RpcScheduler.
-
+  Abortableserver)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcCallback.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcCallback.html 
b/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcCallback.html
index 44f28ec..6516d78 100644
--- a/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcCallback.html
+++ b/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcCallback.html
@@ -123,13 +123,13 @@
 
 
 void
-RpcCallContext.setCallBack(RpcCallbackcallback)
-Sets a callback which has to be executed at the end of this 
RPC call.
-
+ServerCall.setCallBack(RpcCallbackcallback)
 
 
 void
-ServerCall.setCallBack(RpcCallbackcallback)
+RpcCallContext.setCallBack(RpcCallbackcallback)
+Sets a callback which has to be executed at the end of this 
RPC call.
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcControllerFactory.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcControllerFactory.html 
b/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcControllerFactory.html
index 5e1a1ac..d7b0e6d 100644
--- a/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcControllerFactory.html
+++ b/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcControllerFactory.html
@@ -131,24 +131,32 @@
 
 
 
-protected RpcControllerFactory
-RegionAdminServiceCallable.rpcControllerFactory
-
-
 private RpcControllerFactory
 ConnectionImplementation.rpcControllerFactory
 
+
+protected RpcControllerFactory
+ClientScanner.rpcControllerFactory
+
 
+protected RpcControllerFactory
+RegionAdminServiceCallable.rpcControllerFactory
+
+
 (package private) RpcControllerFactory
 AsyncConnectionImpl.rpcControllerFactory
 
-
+
 private RpcControllerFactory
 HTable.rpcControllerFactory
 
+
+private RpcControllerFactory
+HBaseAdmin.rpcControllerFactory
+
 
 private RpcControllerFactory
-RpcRetryingCallerWithReadReplicas.rpcControllerFactory
+SecureBulkLoadClient.rpcControllerFactory
 
 
 protected RpcControllerFactory
@@ -156,15 +164,7 @@
 
 
 private RpcControllerFactory
-HBaseAdmin.rpcControllerFactory
-
-
-private RpcControllerFactory
-SecureBulkLoadClient.rpcControllerFactory
-
-
-protected RpcControllerFactory
-ClientScanner.rpcControllerFactory
+RpcRetryingCallerWithReadReplicas.rpcControllerFactory
 
 
 (package private) RpcControllerFactory
@@ -181,11 +181,11 @@
 
 
 RpcControllerFactory
-ClusterConnection.getRpcControllerFactory()
+ConnectionImplementation.getRpcControllerFactory()
 
 
 RpcControllerFactory
-ConnectionImplementation.getRpcControllerFactory()
+ClusterConnection.getRpcControllerFactory()
 
 
 private RpcControllerFactory

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/ipc/class-use/RpcExecutor.Handler.html
--
diff --git 

[24/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/ColumnFamilyDescriptor.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/ColumnFamilyDescriptor.html
 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/ColumnFamilyDescriptor.html
index 9c7293e..6261f0d 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/ColumnFamilyDescriptor.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/ColumnFamilyDescriptor.html
@@ -326,10 +326,8 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-AsyncAdmin.addColumnFamily(TableNametableName,
-   ColumnFamilyDescriptorcolumnFamily)
-Add a column family to an existing table.
-
+AsyncHBaseAdmin.addColumnFamily(TableNametableName,
+   ColumnFamilyDescriptorcolumnFamily)
 
 
 void
@@ -339,18 +337,20 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-RawAsyncHBaseAdmin.addColumnFamily(TableNametableName,
-   ColumnFamilyDescriptorcolumnFamily)
-
-
 void
 HBaseAdmin.addColumnFamily(TableNametableName,
ColumnFamilyDescriptorcolumnFamily)
 
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
+AsyncAdmin.addColumnFamily(TableNametableName,
+   ColumnFamilyDescriptorcolumnFamily)
+Add a column family to an existing table.
+
+
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-AsyncHBaseAdmin.addColumnFamily(TableNametableName,
+RawAsyncHBaseAdmin.addColumnFamily(TableNametableName,
ColumnFamilyDescriptorcolumnFamily)
 
 
@@ -392,10 +392,8 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-AsyncAdmin.modifyColumnFamily(TableNametableName,
-  ColumnFamilyDescriptorcolumnFamily)
-Modify an existing column family on a table.
-
+AsyncHBaseAdmin.modifyColumnFamily(TableNametableName,
+  ColumnFamilyDescriptorcolumnFamily)
 
 
 void
@@ -405,18 +403,20 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-RawAsyncHBaseAdmin.modifyColumnFamily(TableNametableName,
-  ColumnFamilyDescriptorcolumnFamily)
-
-
 void
 HBaseAdmin.modifyColumnFamily(TableNametableName,
   ColumnFamilyDescriptorcolumnFamily)
 
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
+AsyncAdmin.modifyColumnFamily(TableNametableName,
+  ColumnFamilyDescriptorcolumnFamily)
+Modify an existing column family on a table.
+
+
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-AsyncHBaseAdmin.modifyColumnFamily(TableNametableName,
+RawAsyncHBaseAdmin.modifyColumnFamily(TableNametableName,
   

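The class-use tables above show the AsyncAdmin/AsyncHBaseAdmin/RawAsyncHBaseAdmin variants of addColumnFamily and modifyColumnFamily returning CompletableFuture&lt;Void&gt; where the blocking HBaseAdmin returns void. A minimal, HBase-free sketch of consuming such an async admin API follows; AsyncAdminSketch and its modifyTable stand-in are hypothetical placeholders (the real method issues an RPC to the master), and only the CompletableFuture chaining pattern itself is illustrated.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncAdminSketch {

    // Hypothetical stand-in for an AsyncAdmin-style call: returns a future
    // that completes when the simulated admin operation has finished.
    static CompletableFuture<Void> modifyTable(String tableName) {
        return CompletableFuture.runAsync(
            () -> System.out.println("modifying " + tableName));
    }

    public static void main(String[] args) {
        // Chain follow-up work instead of blocking per call, as the
        // CompletableFuture<Void> return type in the tables above allows.
        modifyTable("test_table")
            .thenRun(() -> System.out.println("modifyTable completed"))
            .join(); // block only once, at the edge of the program
    }
}
```

The design point reflected in the generated docs: the async variants carry the completion signal in the returned future rather than blocking the caller, so errors surface via the future's exceptional completion instead of a thrown IOException.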
[05/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/master/assignment/class-use/RegionStates.RegionStateNode.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/assignment/class-use/RegionStates.RegionStateNode.html
 
b/devapidocs/org/apache/hadoop/hbase/master/assignment/class-use/RegionStates.RegionStateNode.html
index f37c444..71724c6 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/assignment/class-use/RegionStates.RegionStateNode.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/assignment/class-use/RegionStates.RegionStateNode.html
@@ -255,7 +255,7 @@
 
 
 protected void
-AssignProcedure.finishTransition(MasterProcedureEnvenv,
+UnassignProcedure.finishTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode)
 
 
@@ -265,7 +265,7 @@
 
 
 protected void
-UnassignProcedure.finishTransition(MasterProcedureEnvenv,
+AssignProcedure.finishTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode)
 
 
@@ -316,7 +316,7 @@
 
 
 protected boolean
-AssignProcedure.remoteCallFailed(MasterProcedureEnvenv,
+UnassignProcedure.remoteCallFailed(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode,
 http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in 
java.io">IOExceptionexception)
 
@@ -328,7 +328,7 @@
 
 
 protected boolean
-UnassignProcedure.remoteCallFailed(MasterProcedureEnvenv,
+AssignProcedure.remoteCallFailed(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode,
 http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in 
java.io">IOExceptionexception)
 
@@ -353,10 +353,10 @@
 
 
 protected void
-AssignProcedure.reportTransition(MasterProcedureEnvenv,
+UnassignProcedure.reportTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCodecode,
-longopenSeqNum)
+longseqId)
 
 
 protected abstract void
@@ -367,10 +367,10 @@
 
 
 protected void
-UnassignProcedure.reportTransition(MasterProcedureEnvenv,
+AssignProcedure.reportTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCodecode,
-longseqId)
+longopenSeqNum)
 
 
 private boolean
@@ -381,7 +381,7 @@
 
 
 protected boolean
-AssignProcedure.startTransition(MasterProcedureEnvenv,
+UnassignProcedure.startTransition(MasterProcedureEnvenv,
RegionStates.RegionStateNoderegionNode)
 
 
@@ -391,7 +391,7 @@
 
 
 protected boolean
-UnassignProcedure.startTransition(MasterProcedureEnvenv,
+AssignProcedure.startTransition(MasterProcedureEnvenv,
RegionStates.RegionStateNoderegionNode)
 
 
@@ -404,7 +404,7 @@
 
 
 protected boolean
-AssignProcedure.updateTransition(MasterProcedureEnvenv,
+UnassignProcedure.updateTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode)
 
 
@@ -416,7 +416,7 @@
 
 
 protected boolean
-UnassignProcedure.updateTransition(MasterProcedureEnvenv,
+AssignProcedure.updateTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode)
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/master/balancer/class-use/BaseLoadBalancer.Cluster.Action.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/balancer/class-use/BaseLoadBalancer.Cluster.Action.html
 
b/devapidocs/org/apache/hadoop/hbase/master/balancer/class-use/BaseLoadBalancer.Cluster.Action.html
index 513000d..4ddb4d7 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/balancer/class-use/BaseLoadBalancer.Cluster.Action.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/balancer/class-use/BaseLoadBalancer.Cluster.Action.html
@@ -137,14 +137,6 @@
 
 
 
-protected BaseLoadBalancer.Cluster.Action
-FavoredStochasticBalancer.FavoredNodeLocalityPicker.generate(BaseLoadBalancer.Clustercluster)
-
-
-(package private) BaseLoadBalancer.Cluster.Action
-FavoredStochasticBalancer.FavoredNodeLoadPicker.generate(BaseLoadBalancer.Clustercluster)
-
-
 (package private) abstract BaseLoadBalancer.Cluster.Action
 StochasticLoadBalancer.CandidateGenerator.generate(BaseLoadBalancer.Clustercluster)
 
@@ -170,6 +162,14 @@
 
 
 protected BaseLoadBalancer.Cluster.Action
+FavoredStochasticBalancer.FavoredNodeLocalityPicker.generate(BaseLoadBalancer.Clustercluster)
+
+
+(package private) BaseLoadBalancer.Cluster.Action

[07/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/CachedBlock.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/CachedBlock.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/CachedBlock.html
index 8108f0a..59ee24f 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/CachedBlock.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/CachedBlock.html
@@ -150,15 +150,15 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorCachedBlock
-CombinedBlockCache.iterator()
+BlockCache.iterator()
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorCachedBlock
-BlockCache.iterator()
+LruBlockCache.iterator()
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorCachedBlock
-LruBlockCache.iterator()
+CombinedBlockCache.iterator()
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorCachedBlock

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFile.Writer.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFile.Writer.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFile.Writer.html
index a8e355e..7d29f77 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFile.Writer.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFile.Writer.html
@@ -143,18 +143,18 @@
 
 
 void
-HFileDataBlockEncoderImpl.saveMetadata(HFile.Writerwriter)
-
-
-void
 NoOpDataBlockEncoder.saveMetadata(HFile.Writerwriter)
 
-
+
 void
 HFileDataBlockEncoder.saveMetadata(HFile.Writerwriter)
 Save metadata in HFile which will be written to disk
 
 
+
+void
+HFileDataBlockEncoderImpl.saveMetadata(HFile.Writerwriter)
+
 
 
 
@@ -203,18 +203,18 @@
 
 
 
-void
-RowColBloomContext.addLastBloomKey(HFile.Writerwriter)
+abstract void
+BloomContext.addLastBloomKey(HFile.Writerwriter)
+Adds the last bloom key to the HFile Writer as part of 
StorefileWriter close.
+
 
 
 void
 RowBloomContext.addLastBloomKey(HFile.Writerwriter)
 
 
-abstract void
-BloomContext.addLastBloomKey(HFile.Writerwriter)
-Adds the last bloom key to the HFile Writer as part of 
StorefileWriter close.
-
+void
+RowColBloomContext.addLastBloomKey(HFile.Writerwriter)
 
 
 static BloomFilterWriter

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.Writer.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.Writer.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.Writer.html
index 6b0eb0b..ded8848 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.Writer.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.Writer.html
@@ -106,15 +106,15 @@
 
 
 
+private HFileBlock.Writer
+HFileBlockIndex.BlockIndexWriter.blockWriter
+
+
 protected HFileBlock.Writer
 HFileWriterImpl.blockWriter
 block writer
 
 
-
-private HFileBlock.Writer
-HFileBlockIndex.BlockIndexWriter.blockWriter
-
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html
index 2af7d60..eff6d3c 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html
@@ -136,15 +136,15 @@
 
 
 HFileContext
-HFileBlockDecodingContext.getHFileContext()
+HFileBlockEncodingContext.getHFileContext()
 
 
 HFileContext
-HFileBlockDefaultDecodingContext.getHFileContext()
+HFileBlockDecodingContext.getHFileContext()
 
 
 HFileContext
-HFileBlockEncodingContext.getHFileContext()
+HFileBlockDefaultDecodingContext.getHFileContext()
 
 
 HFileContext
@@ -224,23 +224,23 @@
 
 
 private HFileContext
+HFile.WriterFactory.fileContext
+
+
+private HFileContext
 HFileBlock.fileContext
 Meta data that holds meta information on the 
hfileblock.
 
 
-
+
 private HFileContext
 HFileBlock.Writer.fileContext
 Meta data that holds information about the hfileblock
 
 
-
-private HFileContext
-HFileBlock.FSReaderImpl.fileContext
-
 
 private HFileContext
-HFile.WriterFactory.fileContext

hbase-site git commit: INFRA-10751 Empty commit

2017-11-15 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site cba900e48 -> 303878753


INFRA-10751 Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/30387875
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/30387875
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/30387875

Branch: refs/heads/asf-site
Commit: 303878753b04d808aa9e18fb1b7794a1ff132ce6
Parents: cba900e
Author: jenkins 
Authored: Wed Nov 15 15:30:49 2017 +
Committer: jenkins 
Committed: Wed Nov 15 15:30:49 2017 +

--

--




[37/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/constant-values.html
--
diff --git a/devapidocs/constant-values.html b/devapidocs/constant-values.html
index 499e3b4..66c04fe 100644
--- a/devapidocs/constant-values.html
+++ b/devapidocs/constant-values.html
@@ -2305,398 +2305,405 @@
 "hbase.replication.conf.dir"
 
 
+
+
+publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+REPLICATION_DROP_ON_DELETED_TABLE_KEY
+"hbase.replication.drop.on.deleted.table"
+
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_META_FAMILY_STR
 "rep_meta"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_POSITION_FAMILY_STR
 "rep_position"
 
-
+
 
 
 publicstaticfinalint
 REPLICATION_QOS
 5
 
-
+
 
 
 publicstaticfinalint
 REPLICATION_SCOPE_GLOBAL
 1
 
-
+
 
 
 publicstaticfinalint
 REPLICATION_SCOPE_LOCAL
 0
 
-
+
 
 
 publicstaticfinalint
 REPLICATION_SCOPE_SERIAL
 2
 
-
+
 
 
 publicstaticfinallong
 REPLICATION_SERIALLY_WAITING_DEFAULT
 1L
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SERIALLY_WAITING_KEY
 "hbase.serial.replication.waitingMs"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SERVICE_CLASSNAME_DEFAULT
 "org.apache.hadoop.hbase.replication.regionserver.Replication"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SINK_SERVICE_CLASSNAME
 "hbase.replication.sink.service"
 
-
+
 
 
 publicstaticfinalint
 REPLICATION_SOURCE_MAXTHREADS_DEFAULT
 10
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_MAXTHREADS_KEY
 "hbase.replication.source.maxthreads"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_SERVICE_CLASSNAME
 "hbase.replication.source.service"
 
-
+
 
 
 publicstaticfinalint
 REPLICATION_SOURCE_TOTAL_BUFFER_DFAULT
 268435456
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_TOTAL_BUFFER_KEY
 "replication.total.buffer.quota"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 RPC_CODEC_CONF_KEY
 "hbase.client.rpc.codec"
 
-
+
 
 
 publicstaticfinalbyte
 RPC_CURRENT_VERSION
 0
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SEQNUM_QUALIFIER_STR
 "seqnumDuringOpen"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SERVER_QUALIFIER_STR
 "server"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SERVERNAME_QUALIFIER_STR
 "sn"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SNAPSHOT_DIR_NAME
 ".hbase-snapshot"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SNAPSHOT_RESTORE_FAILSAFE_NAME
 "hbase.snapshot.restore.failsafe.name"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SNAPSHOT_RESTORE_TAKE_FAILSAFE_SNAPSHOT
 "hbase.snapshot.restore.take.failsafe.snapshot"
 
-
+
 
 
 publicstaticfinalint
 SOCKET_RETRY_WAIT_MS
 200
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SPLIT_LOGDIR_NAME
 "splitWAL"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 STARTCODE_QUALIFIER_STR
 "serverstartcode"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 

[02/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/TableProcedureInterface.TableOperationType.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/TableProcedureInterface.TableOperationType.html
 
b/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/TableProcedureInterface.TableOperationType.html
index c015b8e..1114374 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/TableProcedureInterface.TableOperationType.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/TableProcedureInterface.TableOperationType.html
@@ -112,19 +112,19 @@
 
 
 TableProcedureInterface.TableOperationType
-MoveRegionProcedure.getTableOperationType()
+UnassignProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-GCMergedRegionsProcedure.getTableOperationType()
+MoveRegionProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-AssignProcedure.getTableOperationType()
+GCRegionProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-GCRegionProcedure.getTableOperationType()
+GCMergedRegionsProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
@@ -136,7 +136,7 @@
 
 
 TableProcedureInterface.TableOperationType
-UnassignProcedure.getTableOperationType()
+AssignProcedure.getTableOperationType()
 
 
 
@@ -185,31 +185,33 @@
 
 
 TableProcedureInterface.TableOperationType
-CloneSnapshotProcedure.getTableOperationType()
+DeleteTableProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-DeleteTableProcedure.getTableOperationType()
+DisableTableProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-TruncateTableProcedure.getTableOperationType()
+DeleteNamespaceProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-DeleteNamespaceProcedure.getTableOperationType()
+CreateNamespaceProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-DisableTableProcedure.getTableOperationType()
+TableProcedureInterface.getTableOperationType()
+Given an operation type we can take decisions about what to 
do with pending operations.
+
 
 
 TableProcedureInterface.TableOperationType
-RecoverMetaProcedure.getTableOperationType()
+EnableTableProcedure.getTableOperationType()
 
 
-abstract TableProcedureInterface.TableOperationType
-AbstractStateMachineTableProcedure.getTableOperationType()
+TableProcedureInterface.TableOperationType
+CreateTableProcedure.getTableOperationType()
 
 
 abstract TableProcedureInterface.TableOperationType
@@ -217,37 +219,35 @@
 
 
 TableProcedureInterface.TableOperationType
-CreateNamespaceProcedure.getTableOperationType()
+ModifyNamespaceProcedure.getTableOperationType()
 
 
 abstract TableProcedureInterface.TableOperationType
 AbstractStateMachineRegionProcedure.getTableOperationType()
 
 
-TableProcedureInterface.TableOperationType
-EnableTableProcedure.getTableOperationType()
+abstract TableProcedureInterface.TableOperationType
+AbstractStateMachineTableProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-TableProcedureInterface.getTableOperationType()
-Given an operation type we can take decisions about what to 
do with pending operations.
-
+CloneSnapshotProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-CreateTableProcedure.getTableOperationType()
+ModifyTableProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-ModifyNamespaceProcedure.getTableOperationType()
+RecoverMetaProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-ModifyTableProcedure.getTableOperationType()
+RestoreSnapshotProcedure.getTableOperationType()
 
 
 TableProcedureInterface.TableOperationType
-RestoreSnapshotProcedure.getTableOperationType()
+TruncateTableProcedure.getTableOperationType()
 
 
 static TableProcedureInterface.TableOperationType

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/master/snapshot/class-use/SnapshotManager.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/snapshot/class-use/SnapshotManager.html
 
b/devapidocs/org/apache/hadoop/hbase/master/snapshot/class-use/SnapshotManager.html
index cc89917..0bb2dc3 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/snapshot/class-use/SnapshotManager.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/snapshot/class-use/SnapshotManager.html
@@ -121,11 +121,11 @@
 
 
 SnapshotManager
-MasterServices.getSnapshotManager()
+HMaster.getSnapshotManager()
 
 
 SnapshotManager
-HMaster.getSnapshotManager()
+MasterServices.getSnapshotManager()
 
 
 


[29/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html
index f2615f0..33c2a40 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html
@@ -723,31 +723,31 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 private ServerName
-AsyncRequestFutureImpl.SingleServerRequestRunnable.server
+FastFailInterceptorContext.server
 
 
 private ServerName
-FastFailInterceptorContext.server
+AsyncRequestFutureImpl.SingleServerRequestRunnable.server
 
 
 private ServerName
-AsyncAdminRequestRetryingCaller.serverName
+AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder.serverName
 
 
 private ServerName
-ConnectionUtils.ShortCircuitingClusterConnection.serverName
+AsyncRpcRetryingCallerFactory.ServerRequestCallerBuilder.serverName
 
 
 private ServerName
-AsyncServerRequestRpcRetryingCaller.serverName
+AsyncAdminRequestRetryingCaller.serverName
 
 
 private ServerName
-AsyncRpcRetryingCallerFactory.AdminRequestCallerBuilder.serverName
+AsyncServerRequestRpcRetryingCaller.serverName
 
 
 private ServerName
-AsyncRpcRetryingCallerFactory.ServerRequestCallerBuilder.serverName
+ConnectionUtils.ShortCircuitingClusterConnection.serverName
 
 
 
@@ -830,9 +830,7 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerName
-AsyncAdmin.clearDeadServers(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerNameservers)
-Clear dead region servers from master.
-
+AsyncHBaseAdmin.clearDeadServers(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerNameservers)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerName
@@ -841,16 +839,18 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerName
-RawAsyncHBaseAdmin.clearDeadServers(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerNameservers)
-
-
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerName
 HBaseAdmin.clearDeadServers(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerNameservers)
 
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerName
+AsyncAdmin.clearDeadServers(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerNameservers)
+Clear dead region servers from master.
+
+
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerName
-AsyncHBaseAdmin.clearDeadServers(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerNameservers)
+RawAsyncHBaseAdmin.clearDeadServers(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerNameservers)
 
 
 default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/Collection.html?is-external=true;
 title="class or interface in java.util">CollectionServerName
@@ -883,10 +883,8 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility 

[19/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
index ec0de14..e75cd67 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
@@ -298,7 +298,7 @@ service.
 
 
 private static HRegionLocation
-MetaTableAccessor.getRegionLocation(Resultr,
+AsyncMetaTableAccessor.getRegionLocation(Resultr,
  RegionInforegionInfo,
  intreplicaId)
 Returns the HRegionLocation parsed from the given meta row 
Result
@@ -307,7 +307,7 @@ service.
 
 
 private static HRegionLocation
-AsyncMetaTableAccessor.getRegionLocation(Resultr,
+MetaTableAccessor.getRegionLocation(Resultr,
  RegionInforegionInfo,
  intreplicaId)
 Returns the HRegionLocation parsed from the given meta row 
Result
@@ -315,55 +315,55 @@ service.
 
 
 
-static RegionLocations
-MetaTableAccessor.getRegionLocations(Resultr)
+private static http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true;
 title="class or interface in java.util">OptionalRegionLocations
+AsyncMetaTableAccessor.getRegionLocations(Resultr)
 Returns an HRegionLocationList extracted from the 
result.
 
 
 
-private static http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true;
 title="class or interface in java.util">OptionalRegionLocations
-AsyncMetaTableAccessor.getRegionLocations(Resultr)
+static RegionLocations
+MetaTableAccessor.getRegionLocations(Resultr)
 Returns an HRegionLocationList extracted from the 
result.
 
 
 
 private static long
-MetaTableAccessor.getSeqNumDuringOpen(Resultr,
+AsyncMetaTableAccessor.getSeqNumDuringOpen(Resultr,
intreplicaId)
 The latest seqnum that the server writing to meta observed 
when opening the region.
 
 
 
 private static long
-AsyncMetaTableAccessor.getSeqNumDuringOpen(Resultr,
+MetaTableAccessor.getSeqNumDuringOpen(Resultr,
intreplicaId)
 The latest seqnum that the server writing to meta observed 
when opening the region.
 
 
 
-static ServerName
-MetaTableAccessor.getServerName(Resultr,
+private static http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true;
 title="class or interface in java.util">OptionalServerName
+AsyncMetaTableAccessor.getServerName(Resultr,
  intreplicaId)
 Returns a ServerName from catalog table Result.
 
 
 
-private static http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true;
 title="class or interface in java.util">OptionalServerName
-AsyncMetaTableAccessor.getServerName(Resultr,
+static ServerName
+MetaTableAccessor.getServerName(Resultr,
  intreplicaId)
 Returns a ServerName from catalog table Result.
 
 
 
+private static http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true;
 title="class or interface in java.util">OptionalTableState
+AsyncMetaTableAccessor.getTableState(Resultr)
+
+
 static TableState
 MetaTableAccessor.getTableState(Resultr)
 Decode table state from META Result.
 
 
-
-private static http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true;
 title="class or interface in java.util">OptionalTableState
-AsyncMetaTableAccessor.getTableState(Resultr)
-
 
 void
 AsyncMetaTableAccessor.MetaTableRawScanResultConsumer.onNext(Result[]results,
@@ -459,13 +459,13 @@ service.
 ClientScanner.cache
 
 
-private http://docs.oracle.com/javase/8/docs/api/java/util/Deque.html?is-external=true;
 title="class or interface in java.util">DequeResult
-BatchScanResultCache.partialResults
-
-
 private http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListResult
 CompleteScanResultCache.partialResults
 
+
+private http://docs.oracle.com/javase/8/docs/api/java/util/Deque.html?is-external=true;
 title="class or interface in java.util">DequeResult
+BatchScanResultCache.partialResults
+
 
 private http://docs.oracle.com/javase/8/docs/api/java/util/Queue.html?is-external=true;
 title="class or interface in java.util">QueueResult
 AsyncTableResultScanner.queue
@@ -488,7 +488,7 @@ service.
 
 
 Result[]
-BatchScanResultCache.addAndGet(Result[]results,
+AllowPartialScanResultCache.addAndGet(Result[]results,
  booleanisHeartbeatMessage)
 
 
@@ -498,22 +498,26 @@ service.
 
 
 Result[]
-AllowPartialScanResultCache.addAndGet(Result[]results,
+BatchScanResultCache.addAndGet(Result[]results,
  booleanisHeartbeatMessage)
 
 
 Result
-HTable.append(Appendappend)
+Table.append(Appendappend)
 Appends values to one or more columns within a single 
row.
 
 
 
 Result
-Table.append(Appendappend)

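The class-use entries above list `Table.append(Append)`, which "appends values to one or more columns within a single row." A minimal sketch of that call is below; it assumes an hbase-client 2.0 dependency, a reachable cluster, and an existing table `t1` with column family `cf` (none of which are part of this commit), so treat it as an illustration rather than code from the patch.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AppendExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("t1"))) {
      // Append atomically concatenates bytes onto the current cell value.
      Append append = new Append(Bytes.toBytes("row1"));
      append.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("-suffix"));
      // The returned Result holds the post-append cell values.
      Result result = table.append(append);
      System.out.println(result);
    }
  }
}
```

The same operation is exposed asynchronously via `AsyncTable`, which is why both `HTable.append` and `RawAsyncTableImpl` appear in the table above.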
[27/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html
index 96bbdee..794e6f4 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/TableName.html
@@ -2026,119 +2026,119 @@ service.
 
 
 private TableName
-SnapshotDescription.table
+RegionCoprocessorRpcChannel.table
 
 
 private TableName
-RegionCoprocessorRpcChannel.table
+SnapshotDescription.table
 
 
 private TableName
-RawAsyncTableImpl.tableName
+HRegionLocator.tableName
 
 
 private TableName
-RegionServerCallable.tableName
+ScannerCallableWithReplicas.tableName
 
 
 protected TableName
-RegionAdminServiceCallable.tableName
+ClientScanner.tableName
 
 
 private TableName
-BufferedMutatorImpl.tableName
+AsyncClientScanner.tableName
 
 
 private TableName
-AsyncProcessTask.tableName
+AsyncRpcRetryingCallerFactory.SingleRequestCallerBuilder.tableName
 
 
 private TableName
-AsyncProcessTask.Builder.tableName
+AsyncRpcRetryingCallerFactory.BatchCallerBuilder.tableName
 
 
 private TableName
-AsyncRequestFutureImpl.tableName
+RegionInfoBuilder.tableName
 
 
-protected TableName
-TableBuilderBase.tableName
+private TableName
+RegionInfoBuilder.MutableRegionInfo.tableName
 
 
 private TableName
-AsyncBatchRpcRetryingCaller.tableName
+RawAsyncTableImpl.tableName
 
 
 private TableName
-RegionInfoBuilder.tableName
+RegionCoprocessorRpcChannelImpl.tableName
 
 
 private TableName
-RegionInfoBuilder.MutableRegionInfo.tableName
+AsyncTableRegionLocatorImpl.tableName
 
 
-private TableName
-HTable.tableName
+protected TableName
+RegionAdminServiceCallable.tableName
 
 
 private TableName
-TableState.tableName
+HTable.tableName
 
 
-protected TableName
-RpcRetryingCallerWithReadReplicas.tableName
+private TableName
+BufferedMutatorImpl.tableName
 
 
-protected TableName
-AsyncTableBuilderBase.tableName
+private TableName
+AsyncBatchRpcRetryingCaller.tableName
 
 
 private TableName
-AsyncSingleRequestRpcRetryingCaller.tableName
+BufferedMutatorParams.tableName
 
 
 private TableName
-ScannerCallableWithReplicas.tableName
+HBaseAdmin.TableFuture.tableName
 
 
-protected TableName
-RawAsyncHBaseAdmin.TableProcedureBiConsumer.tableName
+private TableName
+AsyncRequestFutureImpl.tableName
 
 
 private TableName
-AsyncTableRegionLocatorImpl.tableName
+AsyncProcessTask.tableName
 
 
 private TableName
-HBaseAdmin.TableFuture.tableName
+AsyncProcessTask.Builder.tableName
 
 
-private TableName
-RegionCoprocessorRpcChannelImpl.tableName
+protected TableName
+RawAsyncHBaseAdmin.TableProcedureBiConsumer.tableName
 
 
-protected TableName
-ClientScanner.tableName
+private TableName
+RegionServerCallable.tableName
 
 
 private TableName
-BufferedMutatorParams.tableName
+AsyncSingleRequestRpcRetryingCaller.tableName
 
 
-private TableName
-AsyncClientScanner.tableName
+protected TableName
+TableBuilderBase.tableName
 
 
-private TableName
-AsyncRpcRetryingCallerFactory.SingleRequestCallerBuilder.tableName
+protected TableName
+RpcRetryingCallerWithReadReplicas.tableName
 
 
-private TableName
-AsyncRpcRetryingCallerFactory.BatchCallerBuilder.tableName
+protected TableName
+AsyncTableBuilderBase.tableName
 
 
 private TableName
-HRegionLocator.tableName
+TableState.tableName
 
 
 
@@ -2180,83 +2180,83 @@ service.
 
 
 TableName
-RawAsyncTableImpl.getName()
+Table.getName()
+Gets the fully qualified table name instance of this 
table.
+
 
 
 TableName
-RegionLocator.getName()
-Gets the fully qualified table name instance of this 
table.
-
+HRegionLocator.getName()
 
 
 TableName
-BufferedMutatorImpl.getName()
+AsyncTableRegionLocator.getName()
+Gets the fully qualified table name instance of the table 
whose region we want to locate.
+
 
 
 TableName
-BufferedMutator.getName()
-Gets the fully qualified table name instance of the table 
that this BufferedMutator writes to.
-
+AsyncTableImpl.getName()
 
 
 TableName
-HTable.getName()
+RawAsyncTableImpl.getName()
 
 
 TableName
-AsyncBufferedMutator.getName()
-Gets the fully qualified table name instance of the table 
that this
- AsyncBufferedMutator writes to.
-
+AsyncTableRegionLocatorImpl.getName()
 
 
 TableName
-Table.getName()
-Gets the fully qualified table name instance of this 
table.
+BufferedMutator.getName()
+Gets the fully qualified table name instance of the table 
that this BufferedMutator writes to.
 
 
 
 TableName
-AsyncTableImpl.getName()
+RegionLocator.getName()
+Gets the fully qualified table name instance of this 
table.
+
 
 
 TableName
-AsyncTableRegionLocatorImpl.getName()
+AsyncBufferedMutatorImpl.getName()
 
 
 TableName
-AsyncTableRegionLocator.getName()
-Gets the fully qualified table name instance of the table 
whose region we want to locate.
-
+HTable.getName()

[38/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/checkstyle.rss
--
diff --git a/checkstyle.rss b/checkstyle.rss
index fdfe139..ae24f24 100644
--- a/checkstyle.rss
+++ b/checkstyle.rss
@@ -26,7 +26,7 @@ under the License.
 2007 - 2017 The Apache Software Foundation
 
   File: 3426,
- Errors: 21359,
+ Errors: 21358,
  Warnings: 0,
  Infos: 0
   
@@ -1852,7 +1852,7 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine.java;>org/apache/hadoop/hbase/regionserver/DateTieredStoreEngine.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.mob.MobUtils.java;>org/apache/hadoop/hbase/mob/MobUtils.java
 
 
   0
@@ -1861,12 +1861,12 @@ under the License.
   0
 
 
-  1
+  15
 
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.mob.MobUtils.java;>org/apache/hadoop/hbase/mob/MobUtils.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine.java;>org/apache/hadoop/hbase/regionserver/DateTieredStoreEngine.java
 
 
   0
@@ -1875,7 +1875,7 @@ under the License.
   0
 
 
-  15
+  1
 
   
   
@@ -3700,7 +3700,7 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.util.HFileArchiveTestingUtil.java;>org/apache/hadoop/hbase/util/HFileArchiveTestingUtil.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.quotas.TestQuotaObserverChore.java;>org/apache/hadoop/hbase/quotas/TestQuotaObserverChore.java
 
 
   0
@@ -3709,12 +3709,12 @@ under the License.
   0
 
 
-  9
+  1
 
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.quotas.TestQuotaObserverChore.java;>org/apache/hadoop/hbase/quotas/TestQuotaObserverChore.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.util.HFileArchiveTestingUtil.java;>org/apache/hadoop/hbase/util/HFileArchiveTestingUtil.java
 
 
   0
@@ -3723,7 +3723,7 @@ under the License.
   0
 
 
-  1
+  9
 
   
   
@@ -4479,7 +4479,7 @@ under the License.
   0
 
 
-  32
+  33
 
   
   
@@ -4708,7 +4708,7 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.zookeeper.PendingWatcher.java;>org/apache/hadoop/hbase/zookeeper/PendingWatcher.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.client.RegionOfflineException.java;>org/apache/hadoop/hbase/client/RegionOfflineException.java
 
 
   0
@@ -4722,7 +4722,7 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.client.RegionOfflineException.java;>org/apache/hadoop/hbase/client/RegionOfflineException.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.zookeeper.PendingWatcher.java;>org/apache/hadoop/hbase/zookeeper/PendingWatcher.java
 
 
   0
@@ -5254,7 +5254,7 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.regionserver.TestSwitchToStreamRead.java;>org/apache/hadoop/hbase/regionserver/TestSwitchToStreamRead.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.snapshot.TestRegionSnapshotTask.java;>org/apache/hadoop/hbase/snapshot/TestRegionSnapshotTask.java
 
 
   0
@@ -5263,12 +5263,12 @@ under the License.
   0
 
 
- 

[31/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html
index 46e033c..8521855 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html
@@ -162,7 +162,7 @@ service.
 
 
 private static HRegionLocation
-MetaTableAccessor.getRegionLocation(Resultr,
+AsyncMetaTableAccessor.getRegionLocation(Resultr,
  RegionInforegionInfo,
  intreplicaId)
 Returns the HRegionLocation parsed from the given meta row 
Result
@@ -171,7 +171,7 @@ service.
 
 
 private static HRegionLocation
-AsyncMetaTableAccessor.getRegionLocation(Resultr,
+MetaTableAccessor.getRegionLocation(Resultr,
  RegionInforegionInfo,
  intreplicaId)
 Returns the HRegionLocation parsed from the given meta row 
Result
@@ -304,6 +304,14 @@ service.
 HTableMultiplexer.FlushWorker.addr
 
 
+HRegionLocation
+AsyncClientScanner.OpenScannerResponse.loc
+
+
+private HRegionLocation
+AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.loc
+
+
 private HRegionLocation
 AsyncScanSingleRegionRpcRetryingCaller.loc
 
@@ -312,23 +320,15 @@ service.
 AsyncBatchRpcRetryingCaller.RegionRequest.loc
 
 
-HRegionLocation
-AsyncClientScanner.OpenScannerResponse.loc
+protected HRegionLocation
+RegionAdminServiceCallable.location
 
 
-private HRegionLocation
-AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.loc
-
-
 protected HRegionLocation
 RegionServerCallable.location
 Some subclasses want to set their own location.
 
 
-
-protected HRegionLocation
-RegionAdminServiceCallable.location
-
 
 
 
@@ -371,11 +371,11 @@ service.
 
 
 protected HRegionLocation
-RegionServerCallable.getLocation()
+MultiServerCallable.getLocation()
 
 
 protected HRegionLocation
-MultiServerCallable.getLocation()
+RegionServerCallable.getLocation()
 
 
 HRegionLocation
@@ -383,43 +383,43 @@ service.
 
 
 HRegionLocation
-RegionLocator.getRegionLocation(byte[]row)
+HRegionLocator.getRegionLocation(byte[]row)
 Finds the region on which the given row is being 
served.
 
 
 
 HRegionLocation
-HRegionLocator.getRegionLocation(byte[]row)
+RegionLocator.getRegionLocation(byte[]row)
 Finds the region on which the given row is being 
served.
 
 
 
 HRegionLocation
-RegionLocator.getRegionLocation(byte[]row,
+HRegionLocator.getRegionLocation(byte[]row,
  booleanreload)
 Finds the region on which the given row is being 
served.
 
 
 
 HRegionLocation
-HRegionLocator.getRegionLocation(byte[]row,
+RegionLocator.getRegionLocation(byte[]row,
  booleanreload)
 Finds the region on which the given row is being 
served.
 
 
 
 HRegionLocation
-ClusterConnection.getRegionLocation(TableNametableName,
+ConnectionImplementation.getRegionLocation(TableNametableName,
  byte[]row,
- booleanreload)
-Find region location hosting passed row
-
+ booleanreload)
 
 
 HRegionLocation
-ConnectionImplementation.getRegionLocation(TableNametableName,
+ClusterConnection.getRegionLocation(TableNametableName,
  byte[]row,
- booleanreload)
+ booleanreload)
+Find region location hosting passed row
+
 
 
 private HRegionLocation
@@ -434,15 +434,20 @@ service.
 
 
 HRegionLocation
+ConnectionImplementation.locateRegion(byte[]regionName)
+
+
+HRegionLocation
 ClusterConnection.locateRegion(byte[]regionName)
 Gets the location of the region of regionName.
 
 
-
+
 HRegionLocation
-ConnectionImplementation.locateRegion(byte[]regionName)
+ConnectionImplementation.locateRegion(TableNametableName,
+byte[]row)
 
-
+
 HRegionLocation
 ClusterConnection.locateRegion(TableNametableName,
 byte[]row)
@@ -450,11 +455,6 @@ service.
  lives in.
 
 
-
-HRegionLocation
-ConnectionImplementation.locateRegion(TableNametableName,
-byte[]row)
-
 
 private HRegionLocation
 AsyncNonMetaRegionLocator.locateRowBeforeInCache(AsyncNonMetaRegionLocator.TableCachetableCache,
@@ -469,17 +469,17 @@ service.
 
 
 HRegionLocation
+ConnectionImplementation.relocateRegion(TableNametableName,
+  byte[]row)
+
+
+HRegionLocation
 ClusterConnection.relocateRegion(TableNametableName,
   byte[]row)
 Find the location of the region of tableName that 
row
  lives in, ignoring any value that might be in the cache.
 
 
-
-HRegionLocation
-ConnectionImplementation.relocateRegion(TableNametableName,
-  byte[]row)
-
 
 
 
@@ -491,13 +491,13 @@ service.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListHRegionLocation
-RegionLocator.getAllRegionLocations()

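Several entries above concern `RegionLocator.getRegionLocation(byte[] row, boolean reload)`, which "finds the region on which the given row is being served." A minimal sketch of its use follows; it assumes hbase-client 2.0, a running cluster, and an existing table `t1` (assumptions, not facts from this commit).

```java
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class LocatorExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("t1"))) {
      // reload=true bypasses the client-side meta cache and re-reads hbase:meta,
      // matching the "ignoring any value that might be in the cache" variant above.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes("row1"), true);
      System.out.println(loc);
    }
  }
}
```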
[01/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site 07c67a9cf -> cba900e48


http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/nio/class-use/ByteBuff.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/nio/class-use/ByteBuff.html 
b/devapidocs/org/apache/hadoop/hbase/nio/class-use/ByteBuff.html
index b95b1c2..a6c4306 100644
--- a/devapidocs/org/apache/hadoop/hbase/nio/class-use/ByteBuff.html
+++ b/devapidocs/org/apache/hadoop/hbase/nio/class-use/ByteBuff.html
@@ -161,23 +161,23 @@
 
 
 Codec.Decoder
-CellCodec.getDecoder(ByteBuffbuf)
+KeyValueCodec.getDecoder(ByteBuffbuf)
 
 
 Codec.Decoder
-Codec.getDecoder(ByteBuffbuf)
+CellCodecWithTags.getDecoder(ByteBuffbuf)
 
 
 Codec.Decoder
-KeyValueCodec.getDecoder(ByteBuffbuf)
+Codec.getDecoder(ByteBuffbuf)
 
 
 Codec.Decoder
-KeyValueCodecWithTags.getDecoder(ByteBuffbuf)
+CellCodec.getDecoder(ByteBuffbuf)
 
 
 Codec.Decoder
-CellCodecWithTags.getDecoder(ByteBuffbuf)
+KeyValueCodecWithTags.getDecoder(ByteBuffbuf)
 
 
 Codec.Decoder
@@ -259,20 +259,20 @@
 
 
 
-private ByteBuff
-RowIndexSeekerV1.currentBuffer
+protected ByteBuff
+BufferedDataBlockEncoder.SeekerState.currentBuffer
 
 
 protected ByteBuff
-RowIndexSeekerV1.SeekerState.currentBuffer
+BufferedDataBlockEncoder.BufferedEncodedSeeker.currentBuffer
 
 
-protected ByteBuff
-BufferedDataBlockEncoder.SeekerState.currentBuffer
+private ByteBuff
+RowIndexSeekerV1.currentBuffer
 
 
 protected ByteBuff
-BufferedDataBlockEncoder.BufferedEncodedSeeker.currentBuffer
+RowIndexSeekerV1.SeekerState.currentBuffer
 
 
 private ByteBuff
@@ -295,23 +295,23 @@
 
 
 Cell
-RowIndexCodecV1.getFirstKeyCellInBlock(ByteBuffblock)
+CopyKeyDataBlockEncoder.getFirstKeyCellInBlock(ByteBuffblock)
 
 
 Cell
-CopyKeyDataBlockEncoder.getFirstKeyCellInBlock(ByteBuffblock)
+PrefixKeyDeltaEncoder.getFirstKeyCellInBlock(ByteBuffblock)
 
 
 Cell
-DiffKeyDeltaEncoder.getFirstKeyCellInBlock(ByteBuffblock)
+FastDiffDeltaEncoder.getFirstKeyCellInBlock(ByteBuffblock)
 
 
 Cell
-FastDiffDeltaEncoder.getFirstKeyCellInBlock(ByteBuffblock)
+DiffKeyDeltaEncoder.getFirstKeyCellInBlock(ByteBuffblock)
 
 
 Cell
-PrefixKeyDeltaEncoder.getFirstKeyCellInBlock(ByteBuffblock)
+RowIndexCodecV1.getFirstKeyCellInBlock(ByteBuffblock)
 
 
 void
@@ -338,11 +338,11 @@
 
 
 void
-RowIndexSeekerV1.setCurrentBuffer(ByteBuffbuffer)
+BufferedDataBlockEncoder.BufferedEncodedSeeker.setCurrentBuffer(ByteBuffbuffer)
 
 
 void
-BufferedDataBlockEncoder.BufferedEncodedSeeker.setCurrentBuffer(ByteBuffbuffer)
+RowIndexSeekerV1.setCurrentBuffer(ByteBuffbuffer)
 
 
 
@@ -498,21 +498,21 @@
 
 
 void
-ByteBufferIOEngine.write(ByteBuffsrcBuffer,
- longoffset)
-
-
-void
 FileIOEngine.write(ByteBuffsrcBuffer,
  longoffset)
 
-
+
 void
 IOEngine.write(ByteBuffsrcBuffer,
  longoffset)
 Transfers the data from the given MultiByteBuffer to 
IOEngine
 
 
+
+void
+ByteBufferIOEngine.write(ByteBuffsrcBuffer,
+ longoffset)
+
 
 void
 FileMmapEngine.write(ByteBuffsrcBuffer,
@@ -812,6 +812,15 @@
  intindex)
 
 
+MultiByteBuff
+MultiByteBuff.put(intoffset,
+   ByteBuffsrc,
+   intsrcOffset,
+   intlength)
+Copies from a src MBB to this MBB.
+
+
+
 abstract ByteBuff
 ByteBuff.put(intoffset,
ByteBuffsrc,
@@ -820,22 +829,13 @@
 Copies the contents from the src ByteBuff to this 
ByteBuff.
 
 
-
+
 SingleByteBuff
 SingleByteBuff.put(intoffset,
ByteBuffsrc,
intsrcOffset,
intlength)
 
-
-MultiByteBuff
-MultiByteBuff.put(intoffset,
-   ByteBuffsrc,
-   intsrcOffset,
-   intlength)
-Copies from a src MBB to this MBB.
-
-
 
 static int
 ByteBuff.readCompressedInt(ByteBuffbuf)



[30/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/RegionLoad.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/RegionLoad.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/RegionLoad.html
index bf942d8..ac36b77 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/RegionLoad.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/RegionLoad.html
@@ -159,33 +159,33 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionLoad
+AsyncHBaseAdmin.getRegionLoads(ServerNameserverName)
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionLoad
 AsyncAdmin.getRegionLoads(ServerNameserverName)
 Get a list of RegionLoad of all regions hosted on a 
region server.
 
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionLoad
 RawAsyncHBaseAdmin.getRegionLoads(ServerNameserverName)
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionLoad
-AsyncHBaseAdmin.getRegionLoads(ServerNameserverName)
+AsyncHBaseAdmin.getRegionLoads(ServerNameserverName,
+  TableNametableName)
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionLoad
 AsyncAdmin.getRegionLoads(ServerNameserverName,
   TableNametableName)
 Get a list of RegionLoad of all regions hosted on a 
region server for a table.
 
 
-
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionLoad
-RawAsyncHBaseAdmin.getRegionLoads(ServerNameserverName,
-  TableNametableName)
-
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionLoad
-AsyncHBaseAdmin.getRegionLoads(ServerNameserverName,
+RawAsyncHBaseAdmin.getRegionLoads(ServerNameserverName,
   TableNametableName)
 
 

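The table above shows the `AsyncAdmin.getRegionLoads(ServerName)` overloads, which "get a list of RegionLoad of all regions hosted on a region server." A hedged sketch of driving them through the async connection is below; it assumes hbase-client 2.0 and a reachable cluster, and the server enumeration via `getClusterStatus()` is my illustration, not something this commit adds.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.RegionLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RegionLoadExample {
  public static void main(String[] args) throws Exception {
    CompletableFuture<AsyncConnection> connFuture =
        ConnectionFactory.createAsyncConnection();
    try (AsyncConnection conn = connFuture.get()) {
      AsyncAdmin admin = conn.getAdmin();
      // Enumerate live region servers, then fetch the per-region load of each.
      for (ServerName sn : admin.getClusterStatus().get().getServers()) {
        List<RegionLoad> loads = admin.getRegionLoads(sn).get();
        System.out.println(sn + " hosts " + loads.size() + " regions");
      }
    }
  }
}
```

The second overload, `getRegionLoads(serverName, tableName)`, restricts the result to regions of one table.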
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/RegionLocations.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/RegionLocations.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/RegionLocations.html
index dac3d58..0274f16 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/RegionLocations.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/RegionLocations.html
@@ -217,15 +217,15 @@
   booleanuseCache)
 
 
-(package private) RegionLocations
-ConnectionImplementation.getCachedLocation(TableNametableName,
+RegionLocations
+MetaCache.getCachedLocation(TableNametableName,
  byte[]row)
 Search the cache for a location that fits our table and row 
key.
 
 
 
-RegionLocations
-MetaCache.getCachedLocation(TableNametableName,
+(package private) RegionLocations
+ConnectionImplementation.getCachedLocation(TableNametableName,
  byte[]row)
 Search the cache for a location that fits our table and row 
key.
 
@@ -254,21 +254,21 @@
 
 
 RegionLocations
-ClusterConnection.locateRegion(TableNametableName,
+ConnectionImplementation.locateRegion(TableNametableName,
 byte[]row,
 booleanuseCache,
 booleanretry)
 
 
 RegionLocations
-ConnectionImplementation.locateRegion(TableNametableName,
+ClusterConnection.locateRegion(TableNametableName,
 

[12/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html 
b/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
index 51f1a63..2cfe6f1 100644
--- a/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
+++ b/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
@@ -151,111 +151,111 @@
 
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterCell(Cellcell)
+ValueFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-ColumnCountGetFilter.filterCell(Cellc)
+SkipFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-RowFilter.filterCell(Cellv)
+FilterListBase.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FuzzyRowFilter.filterCell(Cellc)
+FamilyFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-Filter.filterCell(Cellc)
-A way to filter based on the column family, column 
qualifier and/or the column value.
-
+ColumnPrefixFilter.filterCell(Cellcell)
 
 
 Filter.ReturnCode
-RandomRowFilter.filterCell(Cellc)
+PageFilter.filterCell(Cellignored)
 
 
 Filter.ReturnCode
-FirstKeyOnlyFilter.filterCell(Cellc)
+RowFilter.filterCell(Cellv)
 
 
 Filter.ReturnCode
-SkipFilter.filterCell(Cellc)
+ColumnRangeFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-TimestampsFilter.filterCell(Cellc)
+ColumnCountGetFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-ValueFilter.filterCell(Cellc)
+MultipleColumnPrefixFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-KeyOnlyFilter.filterCell(Cellignored)
+ColumnPaginationFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FamilyFilter.filterCell(Cellc)
+DependentColumnFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-QualifierFilter.filterCell(Cellc)
+InclusiveStopFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FilterList.filterCell(Cellc)
+KeyOnlyFilter.filterCell(Cellignored)
 
 
 Filter.ReturnCode
-ColumnRangeFilter.filterCell(Cellc)
+MultiRowRangeFilter.filterCell(Cellignored)
 
 
 Filter.ReturnCode
-ColumnPaginationFilter.filterCell(Cellc)
+Filter.filterCell(Cellc)
+A way to filter based on the column family, column 
qualifier and/or the column value.
+
 
 
 Filter.ReturnCode
-WhileMatchFilter.filterCell(Cellc)
+FirstKeyOnlyFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-MultiRowRangeFilter.filterCell(Cellignored)
+WhileMatchFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-PrefixFilter.filterCell(Cellc)
+FirstKeyValueMatchingQualifiersFilter.filterCell(Cellc)
+Deprecated.
+
 
 
 Filter.ReturnCode
-DependentColumnFilter.filterCell(Cellc)
+TimestampsFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FirstKeyValueMatchingQualifiersFilter.filterCell(Cellc)
-Deprecated.
-
+FuzzyRowFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-PageFilter.filterCell(Cellignored)
+FilterList.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FilterListBase.filterCell(Cellc)
+RandomRowFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-InclusiveStopFilter.filterCell(Cellc)
+PrefixFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-MultipleColumnPrefixFilter.filterCell(Cellc)
+SingleColumnValueFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-SingleColumnValueFilter.filterCell(Cellc)
+QualifierFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
@@ -271,158 +271,158 @@
 
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterKeyValue(Cellc)
+ValueFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-ColumnCountGetFilter.filterKeyValue(Cellc)
+SkipFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-RowFilter.filterKeyValue(Cellc)
-Deprecated.
-
+FilterListBase.filterKeyValue(Cellc)
 
 
 Filter.ReturnCode
-FuzzyRowFilter.filterKeyValue(Cellc)
+FamilyFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-Filter.filterKeyValue(Cellc)
-Deprecated.
-As of release 2.0.0, this 
will be removed in HBase 3.0.0.
- Instead use filterCell(Cell)
-
+ColumnPrefixFilter.filterKeyValue(Cellc)
+Deprecated.
 
 
 
 Filter.ReturnCode
-RandomRowFilter.filterKeyValue(Cellc)
+PageFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-FirstKeyOnlyFilter.filterKeyValue(Cellc)
+RowFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-SkipFilter.filterKeyValue(Cellc)
+ColumnRangeFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-TimestampsFilter.filterKeyValue(Cellc)
+ColumnCountGetFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-ValueFilter.filterKeyValue(Cellc)
+MultipleColumnPrefixFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-KeyOnlyFilter.filterKeyValue(Cellignored)
+ColumnPaginationFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-FamilyFilter.filterKeyValue(Cellc)
+DependentColumnFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-QualifierFilter.filterKeyValue(Cellc)
+InclusiveStopFilter.filterKeyValue(Cellc)
 

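The deprecation note above says `filterKeyValue(Cell)` is deprecated as of 2.0.0, to be removed in HBase 3.0.0 in favor of `filterCell(Cell)`. A minimal sketch of a custom filter written against the new method is below; the class name and the even-length predicate are invented for illustration and are not part of this commit.

```java
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.FilterBase;

// Hypothetical filter: keeps only cells whose value length is even.
// On HBase 2.x, new filters override filterCell rather than filterKeyValue.
public class EvenLengthValueFilter extends FilterBase {
  @Override
  public ReturnCode filterCell(Cell c) {
    return c.getValueLength() % 2 == 0 ? ReturnCode.INCLUDE : ReturnCode.SKIP;
  }
}
```

`FilterBase` forwards the deprecated `filterKeyValue` to `filterCell`, which is why both columns appear for every built-in filter in the class-use table above.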
[03/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/MasterProcedureEnv.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/MasterProcedureEnv.html
 
b/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/MasterProcedureEnv.html
index 932633e..37e7f28 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/MasterProcedureEnv.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/MasterProcedureEnv.html
@@ -129,11 +129,11 @@
 
 
 ProcedureExecutorMasterProcedureEnv
-MasterServices.getMasterProcedureExecutor()
+HMaster.getMasterProcedureExecutor()
 
 
 ProcedureExecutorMasterProcedureEnv
-HMaster.getMasterProcedureExecutor()
+MasterServices.getMasterProcedureExecutor()
 
 
 
@@ -186,15 +186,15 @@
 
 
 protected Procedure.LockState
-GCRegionProcedure.acquireLock(MasterProcedureEnvenv)
+RegionTransitionProcedure.acquireLock(MasterProcedureEnvenv)
 
 
 protected Procedure.LockState
-MergeTableRegionsProcedure.acquireLock(MasterProcedureEnvenv)
+GCRegionProcedure.acquireLock(MasterProcedureEnvenv)
 
 
 protected Procedure.LockState
-RegionTransitionProcedure.acquireLock(MasterProcedureEnvenv)
+MergeTableRegionsProcedure.acquireLock(MasterProcedureEnvenv)
 
 
 protected boolean
@@ -287,7 +287,7 @@
 
 
 protected void
-AssignProcedure.finishTransition(MasterProcedureEnvenv,
+UnassignProcedure.finishTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode)
 
 
@@ -297,7 +297,7 @@
 
 
 protected void
-UnassignProcedure.finishTransition(MasterProcedureEnvenv,
+AssignProcedure.finishTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode)
 
 
@@ -306,7 +306,7 @@
 
 
 protected ProcedureMetrics
-AssignProcedure.getProcedureMetrics(MasterProcedureEnvenv)
+UnassignProcedure.getProcedureMetrics(MasterProcedureEnvenv)
 
 
 protected ProcedureMetrics
@@ -318,7 +318,7 @@
 
 
 protected ProcedureMetrics
-UnassignProcedure.getProcedureMetrics(MasterProcedureEnvenv)
+AssignProcedure.getProcedureMetrics(MasterProcedureEnvenv)
 
 
 (package private) static 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetRegionInfoResponse
@@ -349,7 +349,7 @@
 
 
 ServerName
-AssignProcedure.getServer(MasterProcedureEnvenv)
+UnassignProcedure.getServer(MasterProcedureEnvenv)
 
 
 abstract ServerName
@@ -359,7 +359,7 @@
 
 
 ServerName
-UnassignProcedure.getServer(MasterProcedureEnvenv)
+AssignProcedure.getServer(MasterProcedureEnvenv)
 
 
 private ServerName
@@ -376,19 +376,19 @@
 
 
 protected boolean
-MergeTableRegionsProcedure.hasLock(MasterProcedureEnvenv)
+RegionTransitionProcedure.hasLock(MasterProcedureEnvenv)
 
 
 protected boolean
-RegionTransitionProcedure.hasLock(MasterProcedureEnvenv)
+MergeTableRegionsProcedure.hasLock(MasterProcedureEnvenv)
 
 
 protected boolean
-MergeTableRegionsProcedure.holdLock(MasterProcedureEnvenv)
+RegionTransitionProcedure.holdLock(MasterProcedureEnvenv)
 
 
 protected boolean
-RegionTransitionProcedure.holdLock(MasterProcedureEnvenv)
+MergeTableRegionsProcedure.holdLock(MasterProcedureEnvenv)
 
 
 private boolean
@@ -502,15 +502,15 @@
 
 
 protected void
-MergeTableRegionsProcedure.releaseLock(MasterProcedureEnvenv)
+RegionTransitionProcedure.releaseLock(MasterProcedureEnvenv)
 
 
 protected void
-RegionTransitionProcedure.releaseLock(MasterProcedureEnvenv)
+MergeTableRegionsProcedure.releaseLock(MasterProcedureEnvenv)
 
 
 RemoteProcedureDispatcher.RemoteOperation
-AssignProcedure.remoteCallBuild(MasterProcedureEnvenv,
+UnassignProcedure.remoteCallBuild(MasterProcedureEnvenv,
ServerNameserverName)
 
 
@@ -520,7 +520,7 @@
 
 
 RemoteProcedureDispatcher.RemoteOperation
-UnassignProcedure.remoteCallBuild(MasterProcedureEnvenv,
+AssignProcedure.remoteCallBuild(MasterProcedureEnvenv,
ServerNameserverName)
 
 
@@ -531,7 +531,7 @@
 
 
 protected boolean
-AssignProcedure.remoteCallFailed(MasterProcedureEnvenv,
+UnassignProcedure.remoteCallFailed(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode,
 http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in 
java.io">IOExceptionexception)
 
@@ -543,7 +543,7 @@
 
 
 protected boolean
-UnassignProcedure.remoteCallFailed(MasterProcedureEnvenv,
+AssignProcedure.remoteCallFailed(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode,
 http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in 
java.io">IOExceptionexception)
 
@@ -555,10 +555,10 @@
 
 
 protected void
-AssignProcedure.reportTransition(MasterProcedureEnvenv,
+UnassignProcedure.reportTransition(MasterProcedureEnvenv,
 RegionStates.RegionStateNoderegionNode,

[08/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html
index 40cc4d2..b357af9 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html
@@ -168,23 +168,23 @@
 
 
 void
-CombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
-  Cacheablebuf)
-
-
-void
 BlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf)
 Add block to cache (defaults to not in-memory).
 
 
-
+
 void
 LruBlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf)
 Cache the block with the specified name and buffer.
 
 
+
+void
+CombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
+  Cacheablebuf)
+
 
 void
 MemcachedBlockCache.cacheBlock(BlockCacheKeycacheKey,
@@ -192,35 +192,35 @@
 
 
 void
-CombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
+BlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf,
   booleaninMemory,
-  booleancacheDataInL1)
+  booleancacheDataInL1)
+Add block to cache.
+
 
 
 void
-InclusiveCombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
+LruBlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf,
   booleaninMemory,
-  booleancacheDataInL1)
+  booleancacheDataInL1)
+Cache the block with the specified name and buffer.
+
 
 
 void
-BlockCache.cacheBlock(BlockCacheKeycacheKey,
+CombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf,
   booleaninMemory,
-  booleancacheDataInL1)
-Add block to cache.
-
+  booleancacheDataInL1)
 
 
 void
-LruBlockCache.cacheBlock(BlockCacheKeycacheKey,
+InclusiveCombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf,
   booleaninMemory,
-  booleancacheDataInL1)
-Cache the block with the specified name and buffer.
-
+  booleancacheDataInL1)
 
 
 void
@@ -237,53 +237,53 @@
 
 
 boolean
-CombinedBlockCache.evictBlock(BlockCacheKeycacheKey)
-
-
-boolean
 BlockCache.evictBlock(BlockCacheKeycacheKey)
 Evict block from cache.
 
 
-
+
 boolean
 LruBlockCache.evictBlock(BlockCacheKeycacheKey)
 
+
+boolean
+CombinedBlockCache.evictBlock(BlockCacheKeycacheKey)
+
 
 boolean
 MemcachedBlockCache.evictBlock(BlockCacheKeycacheKey)
 
 
 Cacheable
-CombinedBlockCache.getBlock(BlockCacheKeycacheKey,
+BlockCache.getBlock(BlockCacheKeycacheKey,
 booleancaching,
 booleanrepeat,
-booleanupdateCacheMetrics)
+booleanupdateCacheMetrics)
+Fetch block from cache.
+
 
 
 Cacheable
-InclusiveCombinedBlockCache.getBlock(BlockCacheKeycacheKey,
+LruBlockCache.getBlock(BlockCacheKeycacheKey,
 booleancaching,
 booleanrepeat,
-booleanupdateCacheMetrics)
+booleanupdateCacheMetrics)
+Get the buffer of the block with the specified name.
+
 
 
 Cacheable
-BlockCache.getBlock(BlockCacheKeycacheKey,
+CombinedBlockCache.getBlock(BlockCacheKeycacheKey,
 booleancaching,
 booleanrepeat,
-booleanupdateCacheMetrics)
-Fetch block from cache.
-
+booleanupdateCacheMetrics)
 
 
 Cacheable
-LruBlockCache.getBlock(BlockCacheKeycacheKey,
+InclusiveCombinedBlockCache.getBlock(BlockCacheKeycacheKey,
 booleancaching,
 booleanrepeat,
-booleanupdateCacheMetrics)
-Get the buffer of the block with the specified name.
-
+booleanupdateCacheMetrics)
 
 
 Cacheable
@@ -310,22 +310,22 @@
 
 
 void
-CombinedBlockCache.returnBlock(BlockCacheKeycacheKey,
-   Cacheableblock)
-
-
-void
 BlockCache.returnBlock(BlockCacheKeycacheKey,
Cacheableblock)
 Called when the scanner using the block decides to return 
the block once its usage
  is over.
 
 
-
+
 void
 LruBlockCache.returnBlock(BlockCacheKeycacheKey,
Cacheableblock)
 
+
+void
+CombinedBlockCache.returnBlock(BlockCacheKeycacheKey,
+   Cacheableblock)
+
 
 void
 MemcachedBlockCache.returnBlock(BlockCacheKeycacheKey,
@@ -510,14 +510,14 @@
 
 
 void
-BucketCache.BucketEntryGroup.add(http://docs.oracle.com/javase/8/docs/api/java/util/Map.Entry.html?is-external=true;
 title="class or interface in java.util">Map.EntryBlockCacheKey,BucketCache.BucketEntryblock)
-
-
-void
 CachedEntryQueue.add(http://docs.oracle.com/javase/8/docs/api/java/util/Map.Entry.html?is-external=true;
 title="class or interface in java.util">Map.EntryBlockCacheKey,BucketCache.BucketEntryentry)
 Attempt to add the specified entry to this queue.
 
 
+
+void
+BucketCache.BucketEntryGroup.add(http://docs.oracle.com/javase/8/docs/api/java/util/Map.Entry.html?is-external=true;
 title="class or interface in 

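The class-use listing above shows the `BlockCache` contract shared by `LruBlockCache`, `CombinedBlockCache`, `InclusiveCombinedBlockCache`, and `MemcachedBlockCache`: `cacheBlock`, `getBlock`, `evictBlock`, and `returnBlock`, all keyed by `BlockCacheKey`. A minimal sketch of a caller exercising that contract (non-runnable without hbase-server on the classpath; `cache`, `key`, and `block` are hypothetical placeholders, not from the source):

```java
// Sketch under assumptions: hbase-server classpath; "block" is a Cacheable
// supplied by the caller. Method names match the class-use listing above.
static Cacheable exerciseCache(BlockCache cache, BlockCacheKey key, Cacheable block) {
  cache.cacheBlock(key, block);                           // add (defaults to not in-memory)
  Cacheable hit = cache.getBlock(key, true, false, true); // caching=true, repeat=false, update metrics
  if (hit != null) {
    cache.returnBlock(key, hit);  // release once the scanner is done with the block
  }
  cache.evictBlock(key);          // explicit eviction
  return hit;
}
```

The `returnBlock` call reflects the contract quoted in the listing: it is invoked "when the scanner using the block decides to return the block once its usage is over."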
[25/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/AsyncAdminBuilder.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/AsyncAdminBuilder.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/AsyncAdminBuilder.html
index a59507e..4a8daf0 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/AsyncAdminBuilder.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/AsyncAdminBuilder.html
@@ -121,34 +121,34 @@
 
 
 AsyncAdminBuilder
-AsyncConnectionImpl.getAdminBuilder()
-
-
-AsyncAdminBuilder
 AsyncConnection.getAdminBuilder()
 Returns an AsyncAdminBuilder for creating 
AsyncAdmin.
 
 
-
+
 AsyncAdminBuilder
-AsyncConnectionImpl.getAdminBuilder(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html?is-external=true;
 title="class or interface in 
java.util.concurrent">ExecutorServicepool)
+AsyncConnectionImpl.getAdminBuilder()
 
-
+
 AsyncAdminBuilder
 AsyncConnection.getAdminBuilder(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html?is-external=true;
 title="class or interface in 
java.util.concurrent">ExecutorServicepool)
 Returns an AsyncAdminBuilder for creating 
AsyncAdmin.
 
 
-
+
 AsyncAdminBuilder
-AsyncAdminBuilderBase.setMaxAttempts(intmaxAttempts)
+AsyncConnectionImpl.getAdminBuilder(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html?is-external=true;
 title="class or interface in 
java.util.concurrent">ExecutorServicepool)
 
-
+
 AsyncAdminBuilder
 AsyncAdminBuilder.setMaxAttempts(intmaxAttempts)
 Set the max attempt times for an admin operation.
 
 
+
+AsyncAdminBuilder
+AsyncAdminBuilderBase.setMaxAttempts(intmaxAttempts)
+
 
 default AsyncAdminBuilder
 AsyncAdminBuilder.setMaxRetries(intmaxRetries)
@@ -157,50 +157,50 @@
 
 
 AsyncAdminBuilder
-AsyncAdminBuilderBase.setOperationTimeout(longtimeout,
-   http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true;
 title="class or interface in 
java.util.concurrent">TimeUnitunit)
-
-
-AsyncAdminBuilder
 AsyncAdminBuilder.setOperationTimeout(longtimeout,
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true;
 title="class or interface in 
java.util.concurrent">TimeUnitunit)
 Set timeout for a whole admin operation.
 
 
-
+
 AsyncAdminBuilder
-AsyncAdminBuilderBase.setRetryPause(longtimeout,
- http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true;
 title="class or interface in 
java.util.concurrent">TimeUnitunit)
+AsyncAdminBuilderBase.setOperationTimeout(longtimeout,
+   http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true;
 title="class or interface in 
java.util.concurrent">TimeUnitunit)
 
-
+
 AsyncAdminBuilder
 AsyncAdminBuilder.setRetryPause(longtimeout,
  http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true;
 title="class or interface in 
java.util.concurrent">TimeUnitunit)
 Set the base pause time for retrying.
 
 
-
+
 AsyncAdminBuilder
-AsyncAdminBuilderBase.setRpcTimeout(longtimeout,
+AsyncAdminBuilderBase.setRetryPause(longtimeout,
  http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true;
 title="class or interface in 
java.util.concurrent">TimeUnitunit)
 
-
+
 AsyncAdminBuilder
 AsyncAdminBuilder.setRpcTimeout(longtimeout,
  http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true;
 title="class or interface in 
java.util.concurrent">TimeUnitunit)
 Set timeout for each rpc request.
 
 
-
+
 AsyncAdminBuilder
-AsyncAdminBuilderBase.setStartLogErrorsCnt(intstartLogErrorsCnt)
+AsyncAdminBuilderBase.setRpcTimeout(longtimeout,
+ http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true;
 title="class or interface in 
java.util.concurrent">TimeUnitunit)
 
-
+
 AsyncAdminBuilder
 AsyncAdminBuilder.setStartLogErrorsCnt(intstartLogErrorsCnt)
 Set the number of retries that are allowed before we start 
to log.
 
 
+
+AsyncAdminBuilder
+AsyncAdminBuilderBase.setStartLogErrorsCnt(intstartLogErrorsCnt)
+
 
 
 
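The listing above pairs each `AsyncAdminBuilder` tuning method on the interface with its `AsyncAdminBuilderBase` counterpart. A hedged sketch of how those knobs compose when obtaining an `AsyncAdmin` (non-runnable without hbase-client 2.x and a reachable cluster; all timeout values are illustrative, not from the source):

```java
// Sketch only: assumes hbase-client 2.x on the classpath and a reachable
// cluster. Every numeric value below is an illustrative choice.
try (AsyncConnection conn =
         ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
  AsyncAdmin admin = conn.getAdminBuilder()
      .setOperationTimeout(30, TimeUnit.SECONDS) // deadline for the whole admin operation
      .setRpcTimeout(5, TimeUnit.SECONDS)        // deadline per rpc request
      .setRetryPause(100, TimeUnit.MILLISECONDS) // base pause between retries
      .setMaxAttempts(5)                         // max attempt times for an operation
      .setStartLogErrorsCnt(2)                   // retries allowed before we start to log
      .build();
  admin.listTableNames().get();
}
```

`setMaxRetries` (also in the listing) is a default method expressed in terms of `setMaxAttempts`, so setting one implies the other.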

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/AsyncAdminRequestRetryingCaller.Callable.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/AsyncAdminRequestRetryingCaller.Callable.html
 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/AsyncAdminRequestRetryingCaller.Callable.html
index 3931eda..90ee3b5 100644
--- 

[41/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/overview-tree.html
--
diff --git a/apidocs/overview-tree.html b/apidocs/overview-tree.html
index e8708c2..3a6add8 100644
--- a/apidocs/overview-tree.html
+++ b/apidocs/overview-tree.html
@@ -880,33 +880,33 @@
 java.lang.http://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true;
 title="class or interface in java.lang">EnumE (implements java.lang.http://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true;
 title="class or interface in java.lang">ComparableT, java.io.http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true;
 title="class or interface in java.io">Serializable)
 
 org.apache.hadoop.hbase.util.Order
-org.apache.hadoop.hbase.KeepDeletedCells
 org.apache.hadoop.hbase.MemoryCompactionPolicy
+org.apache.hadoop.hbase.KeepDeletedCells
 org.apache.hadoop.hbase.CompareOperator
 org.apache.hadoop.hbase.ProcedureState
 org.apache.hadoop.hbase.CellBuilderType
-org.apache.hadoop.hbase.filter.BitComparator.BitwiseOp
 org.apache.hadoop.hbase.filter.FilterList.Operator
 org.apache.hadoop.hbase.filter.CompareFilter.CompareOp
-org.apache.hadoop.hbase.filter.Filter.ReturnCode
+org.apache.hadoop.hbase.filter.BitComparator.BitwiseOp
 org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType
+org.apache.hadoop.hbase.filter.Filter.ReturnCode
 org.apache.hadoop.hbase.io.encoding.DataBlockEncoding
 org.apache.hadoop.hbase.regionserver.BloomType
+org.apache.hadoop.hbase.quotas.SpaceViolationPolicy
 org.apache.hadoop.hbase.quotas.ThrottlingException.Type
 org.apache.hadoop.hbase.quotas.QuotaScope
-org.apache.hadoop.hbase.quotas.ThrottleType
 org.apache.hadoop.hbase.quotas.QuotaType
-org.apache.hadoop.hbase.quotas.SpaceViolationPolicy
-org.apache.hadoop.hbase.client.Durability
+org.apache.hadoop.hbase.quotas.ThrottleType
 org.apache.hadoop.hbase.client.SnapshotType
-org.apache.hadoop.hbase.client.MasterSwitchType
-org.apache.hadoop.hbase.client.CompactType
+org.apache.hadoop.hbase.client.Durability
 org.apache.hadoop.hbase.client.MobCompactPartitionPolicy
-org.apache.hadoop.hbase.client.CompactionState
-org.apache.hadoop.hbase.client.Scan.ReadType
-org.apache.hadoop.hbase.client.RequestController.ReturnCode
 org.apache.hadoop.hbase.client.IsolationLevel
+org.apache.hadoop.hbase.client.RequestController.ReturnCode
+org.apache.hadoop.hbase.client.Scan.ReadType
+org.apache.hadoop.hbase.client.CompactionState
+org.apache.hadoop.hbase.client.MasterSwitchType
 org.apache.hadoop.hbase.client.Consistency
+org.apache.hadoop.hbase.client.CompactType
 org.apache.hadoop.hbase.client.security.SecurityCapability
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html 
b/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html
index 7e14b42..11bfb15 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/HConstants.html
@@ -1208,126 +1208,131 @@
 1200  public static final String 
REPLICATION_SOURCE_MAXTHREADS_KEY =
 1201  
"hbase.replication.source.maxthreads";
 1202
-1203  public static final int 
REPLICATION_SOURCE_MAXTHREADS_DEFAULT = 10;
-1204
-1205  /** Configuration key for SplitLog 
manager timeout */
-1206  public static final String 
HBASE_SPLITLOG_MANAGER_TIMEOUT = "hbase.splitlog.manager.timeout";
-1207
-1208  /**
-1209   * Configuration keys for Bucket 
cache
-1210   */
-1211  // TODO moving these bucket cache 
implementation specific configs to this level is a violation of
-1212  // encapsulation. But as these have to 
be referred to from hbase-common and bucket cache
-1213  // sits in hbase-server, there was no 
other way! Can we move the cache implementation to
-1214  // hbase-common?
-1215
-1216  /**
-1217   * Currently, ioengine options 
include: heap, offheap and file:PATH (where PATH is the path
-1218   * to the file that will host the 
file-based cache.  See BucketCache#getIOEngineFromName() for
-1219   * list of supported ioengine 
options.
-1220   * Set this option and a 
non-zero {@link #BUCKET_CACHE_SIZE_KEY} to enable bucket cache.
-1221   */
-1222  public static final String 
BUCKET_CACHE_IOENGINE_KEY = "hbase.bucketcache.ioengine";
-1223
-1224  /**
-1225   * When using bucket cache, this is a 
float that EITHER represents a percentage of total heap
-1226   * memory size to give to the cache 
(if < 1.0) OR, it is the capacity in
-1227   * megabytes of the cache.
-1228   */
-1229  public static final String 
BUCKET_CACHE_SIZE_KEY = "hbase.bucketcache.size";
-1230
-1231  /**
-1232   * HConstants for fast fail on the 
client side follow
+1203  /** Drop edits for tables that been 
deleted from the replication source and target */

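The removed HConstants block above documents the two bucket-cache keys, `hbase.bucketcache.ioengine` and `hbase.bucketcache.size`, and notes that both must be set to enable the cache. A minimal hbase-site.xml sketch might look like this (the `offheap` engine choice and the 8192 MB size are illustrative values, not from the source):

```xml
<!-- Enable an off-heap bucket cache. Per the HConstants javadoc quoted
     above: a size < 1.0 is a fraction of heap, otherwise it is megabytes.
     Both values here are illustrative assumptions. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>8192</value>
</property>
```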
[13/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/exceptions/class-use/DeserializationException.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/exceptions/class-use/DeserializationException.html
 
b/devapidocs/org/apache/hadoop/hbase/exceptions/class-use/DeserializationException.html
index 5544c71..82c9df9 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/exceptions/class-use/DeserializationException.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/exceptions/class-use/DeserializationException.html
@@ -146,17 +146,15 @@
 
 
 
-static HColumnDescriptor
-HColumnDescriptor.parseFrom(byte[]bytes)
-Deprecated.
-
-
-
 static HTableDescriptor
 HTableDescriptor.parseFrom(byte[]bytes)
 Deprecated.
 
 
+
+static ClusterId
+ClusterId.parseFrom(byte[]bytes)
+
 
 static HRegionInfo
 HRegionInfo.parseFrom(byte[]bytes)
@@ -167,8 +165,10 @@
 
 
 
-static ClusterId
-ClusterId.parseFrom(byte[]bytes)
+static HColumnDescriptor
+HColumnDescriptor.parseFrom(byte[]bytes)
+Deprecated.
+
 
 
 static SplitLogTask
@@ -222,17 +222,17 @@
 TableDescriptorBuilder.ModifyableTableDescriptor.parseFrom(byte[]bytes)
 
 
+static RegionInfo
+RegionInfo.parseFrom(byte[]bytes)
+
+
 static ColumnFamilyDescriptor
 ColumnFamilyDescriptorBuilder.parseFrom(byte[]pbBytes)
 
-
+
 private static ColumnFamilyDescriptor
 ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.parseFrom(byte[]bytes)
 
-
-static RegionInfo
-RegionInfo.parseFrom(byte[]bytes)
-
 
 static RegionInfo
 RegionInfo.parseFrom(byte[]bytes,
@@ -307,111 +307,111 @@
 ByteArrayComparable.parseFrom(byte[]pbBytes)
 
 
-static ColumnPrefixFilter
-ColumnPrefixFilter.parseFrom(byte[]pbBytes)
+static SingleColumnValueExcludeFilter
+SingleColumnValueExcludeFilter.parseFrom(byte[]pbBytes)
 
 
-static ColumnCountGetFilter
-ColumnCountGetFilter.parseFrom(byte[]pbBytes)
+static ValueFilter
+ValueFilter.parseFrom(byte[]pbBytes)
 
 
-static RowFilter
-RowFilter.parseFrom(byte[]pbBytes)
+static SkipFilter
+SkipFilter.parseFrom(byte[]pbBytes)
 
 
-static FuzzyRowFilter
-FuzzyRowFilter.parseFrom(byte[]pbBytes)
+static FamilyFilter
+FamilyFilter.parseFrom(byte[]pbBytes)
 
 
-static BinaryComparator
-BinaryComparator.parseFrom(byte[]pbBytes)
+static BinaryPrefixComparator
+BinaryPrefixComparator.parseFrom(byte[]pbBytes)
 
 
-static RegexStringComparator
-RegexStringComparator.parseFrom(byte[]pbBytes)
+static NullComparator
+NullComparator.parseFrom(byte[]pbBytes)
 
 
-static Filter
-Filter.parseFrom(byte[]pbBytes)
-Concrete implementers can signal a failure condition in 
their code by throwing an
- http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException.
-
+static BigDecimalComparator
+BigDecimalComparator.parseFrom(byte[]pbBytes)
 
 
-static RandomRowFilter
-RandomRowFilter.parseFrom(byte[]pbBytes)
+static ColumnPrefixFilter
+ColumnPrefixFilter.parseFrom(byte[]pbBytes)
 
 
-static FirstKeyOnlyFilter
-FirstKeyOnlyFilter.parseFrom(byte[]pbBytes)
+static PageFilter
+PageFilter.parseFrom(byte[]pbBytes)
 
 
-static SkipFilter
-SkipFilter.parseFrom(byte[]pbBytes)
+static BitComparator
+BitComparator.parseFrom(byte[]pbBytes)
 
 
-static BinaryPrefixComparator
-BinaryPrefixComparator.parseFrom(byte[]pbBytes)
+static RowFilter
+RowFilter.parseFrom(byte[]pbBytes)
 
 
-static TimestampsFilter
-TimestampsFilter.parseFrom(byte[]pbBytes)
+static ColumnRangeFilter
+ColumnRangeFilter.parseFrom(byte[]pbBytes)
 
 
-static ValueFilter
-ValueFilter.parseFrom(byte[]pbBytes)
+static ColumnCountGetFilter
+ColumnCountGetFilter.parseFrom(byte[]pbBytes)
 
 
-static KeyOnlyFilter
-KeyOnlyFilter.parseFrom(byte[]pbBytes)
+static SubstringComparator
+SubstringComparator.parseFrom(byte[]pbBytes)
 
 
-static FamilyFilter
-FamilyFilter.parseFrom(byte[]pbBytes)
+static MultipleColumnPrefixFilter
+MultipleColumnPrefixFilter.parseFrom(byte[]pbBytes)
 
 
-static QualifierFilter
-QualifierFilter.parseFrom(byte[]pbBytes)
+static ColumnPaginationFilter
+ColumnPaginationFilter.parseFrom(byte[]pbBytes)
 
 
-static FilterList
-FilterList.parseFrom(byte[]pbBytes)
+static DependentColumnFilter
+DependentColumnFilter.parseFrom(byte[]pbBytes)
 
 
-static BigDecimalComparator
-BigDecimalComparator.parseFrom(byte[]pbBytes)
+static BinaryComparator
+BinaryComparator.parseFrom(byte[]pbBytes)
 
 
-static ColumnRangeFilter
-ColumnRangeFilter.parseFrom(byte[]pbBytes)
+static InclusiveStopFilter
+InclusiveStopFilter.parseFrom(byte[]pbBytes)
 
 
-static ColumnPaginationFilter
-ColumnPaginationFilter.parseFrom(byte[]pbBytes)
+static KeyOnlyFilter
+KeyOnlyFilter.parseFrom(byte[]pbBytes)
 
 
-static SubstringComparator
-SubstringComparator.parseFrom(byte[]pbBytes)
+static MultiRowRangeFilter
+MultiRowRangeFilter.parseFrom(byte[]pbBytes)
 
 
-static WhileMatchFilter
-WhileMatchFilter.parseFrom(byte[]pbBytes)
+static Filter
+Filter.parseFrom(byte[]pbBytes)

[42/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html 
b/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
index bfce4fe..06e9c8f 100644
--- a/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
+++ b/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
@@ -125,104 +125,104 @@
 
 
 byte[]
-RawBytes.decode(PositionedByteRangesrc)
+OrderedBlobVar.decode(PositionedByteRangesrc)
 
 
-T
-FixedLengthWrapper.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Number.html?is-external=true;
 title="class or interface in java.lang">Number
+OrderedNumeric.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true;
 title="class or interface in java.lang">Short
-RawShort.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true;
 title="class or interface in java.lang">Byte
+RawByte.decode(PositionedByteRangesrc)
 
 
-T
-TerminatedWrapper.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
+OrderedInt32.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true;
 title="class or interface in java.lang">Float
-OrderedFloat32.decode(PositionedByteRangesrc)
+T
+FixedLengthWrapper.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true;
 title="class or interface in java.lang">Double
-OrderedFloat64.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+OrderedString.decode(PositionedByteRangesrc)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true;
 title="class or interface in java.lang">Float
 RawFloat.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true;
 title="class or interface in java.lang">Byte
-OrderedInt8.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
+RawInteger.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object[]
-Struct.decode(PositionedByteRangesrc)
+T
+DataType.decode(PositionedByteRangesrc)
+Read an instance of T from the buffer 
src.
+
 
 
-byte[]
-OrderedBlob.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
+RawLong.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
-RawInteger.decode(PositionedByteRangesrc)
-
-
 http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true;
 title="class or interface in java.lang">Short
-OrderedInt16.decode(PositionedByteRangesrc)
+RawShort.decode(PositionedByteRangesrc)
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 RawString.decode(PositionedByteRangesrc)
 
+
+byte[]
+RawBytes.decode(PositionedByteRangesrc)
+
 
 byte[]
-OrderedBlobVar.decode(PositionedByteRangesrc)
+OrderedBlob.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true;
 title="class or interface in java.lang">Byte
-RawByte.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object[]
+Struct.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
-OrderedString.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true;
 title="class or interface in java.lang">Double
+RawDouble.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
-OrderedInt32.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true;
 title="class or interface in java.lang">Short
+OrderedInt16.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long

[44/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html 
b/apidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html
index ec301de..841abc0 100644
--- a/apidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html
+++ b/apidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html
@@ -175,23 +175,23 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 org.apache.hadoop.mapred.RecordReaderImmutableBytesWritable,Result
-TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
+TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
org.apache.hadoop.mapred.JobConfjob,
-   org.apache.hadoop.mapred.Reporterreporter)
-Builds a TableRecordReader.
-
+   
org.apache.hadoop.mapred.Reporterreporter)
 
 
 org.apache.hadoop.mapred.RecordReaderImmutableBytesWritable,Result
-TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
+MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
org.apache.hadoop.mapred.JobConfjob,

org.apache.hadoop.mapred.Reporterreporter)
 
 
 org.apache.hadoop.mapred.RecordReaderImmutableBytesWritable,Result
-MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
+TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
org.apache.hadoop.mapred.JobConfjob,
-   
org.apache.hadoop.mapred.Reporterreporter)
+   org.apache.hadoop.mapred.Reporterreporter)
+Builds a TableRecordReader.
+
 
 
 
@@ -324,9 +324,9 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 org.apache.hadoop.mapreduce.RecordReaderImmutableBytesWritable,Result
-TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplitsplit,
+MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplitsplit,
   
org.apache.hadoop.mapreduce.TaskAttemptContextcontext)
-Builds a TableRecordReader.
+Builds a TableRecordReader.
 
 
 
@@ -336,19 +336,19 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 org.apache.hadoop.mapreduce.RecordReaderImmutableBytesWritable,Result
-MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplitsplit,
+TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplitsplit,
   
org.apache.hadoop.mapreduce.TaskAttemptContextcontext)
-Builds a TableRecordReader.
+Builds a TableRecordReader.
 
 
 
-org.apache.hadoop.mapreduce.RecordWriterImmutableBytesWritable,Cell
-HFileOutputFormat2.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContextcontext)
-
-
 org.apache.hadoop.mapreduce.RecordWriterImmutableBytesWritable,Mutation
 MultiTableOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContextcontext)
 
+
+org.apache.hadoop.mapreduce.RecordWriterImmutableBytesWritable,Cell
+HFileOutputFormat2.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContextcontext)
+
 
 
 
@@ -375,6 +375,12 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 int
+SimpleTotalOrderPartitioner.getPartition(ImmutableBytesWritablekey,
+VALUEvalue,
+intreduces)
+
+
+int
 HRegionPartitioner.getPartition(ImmutableBytesWritablekey,
 VALUEvalue,
 intnumPartitions)
@@ -382,12 +388,6 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
  number of partitions i.e.
 
 
-
-int
-SimpleTotalOrderPartitioner.getPartition(ImmutableBytesWritablekey,
-VALUEvalue,
-intreduces)
-
 
 void
 IdentityTableMapper.map(ImmutableBytesWritablekey,

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html 
b/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html
index dca98a0..4cd3d3b 100644
--- a/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html
+++ b/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html
@@ -123,19 +123,19 @@
 
 
 TimeRange
-Increment.getTimeRange()
-Gets the TimeRange used for this increment.
+Get.getTimeRange()
+Method for retrieving the get's TimeRange
 
 
 
 TimeRange
-Scan.getTimeRange()
+Increment.getTimeRange()
+Gets the TimeRange used for this increment.
+
 
 
 TimeRange
-Get.getTimeRange()
-Method for retrieving the get's TimeRange
-
+Scan.getTimeRange()
 
 
 

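The rows above relocate the `getTimeRange()` accessors on `Get`, `Increment`, and `Scan` within the generated page. A hedged sketch of the round trip between a `Scan` and a `Get` (non-runnable without hbase-client on the classpath; the timestamp bounds and row key are illustrative):

```java
// Sketch only: assumes hbase-client on the classpath; no cluster is needed
// just to build these objects. Bounds and row key are illustrative.
static TimeRange copyRangeFromScanToGet() throws IOException {
  Scan scan = new Scan().setTimeRange(0L, 1_500_000_000_000L); // [min, max)
  TimeRange tr = scan.getTimeRange();
  Get get = new Get(Bytes.toBytes("row-1"))
      .setTimeRange(tr.getMin(), tr.getMax()); // reuse the scan's range
  return get.getTimeRange();                   // same bounds as the scan's
}
```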

[49/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/HConstants.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/HConstants.html 
b/apidocs/org/apache/hadoop/hbase/HConstants.html
index 2cdf92a..d511d60 100644
--- a/apidocs/org/apache/hadoop/hbase/HConstants.html
+++ b/apidocs/org/apache/hadoop/hbase/HConstants.html
@@ -1478,318 +1478,326 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 
+static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+REPLICATION_DROP_ON_DELETED_TABLE_KEY
+Drop edits for tables that have been deleted from the 
replication source and target
+
+
+
 static byte[]
 REPLICATION_META_FAMILY
 The replication meta family
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_META_FAMILY_STR
 The replication meta family as a string
 
 
-
+
 static byte[]
 REPLICATION_POSITION_FAMILY
 The replication position family
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_POSITION_FAMILY_STR
 The replication position family as a string
 
 
-
+
 static int
 REPLICATION_QOS
 
-
+
 static int
 REPLICATION_SCOPE_GLOBAL
 Scope tag for globally scoped data.
 
 
-
+
 static int
 REPLICATION_SCOPE_LOCAL
 Scope tag for locally scoped data.
 
 
-
+
 static int
 REPLICATION_SCOPE_SERIAL
 Scope tag for serially scoped data
  This data will be replicated to all peers by the order of sequence id.
 
 
-
+
 static long
 REPLICATION_SERIALLY_WAITING_DEFAULT
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SERIALLY_WAITING_KEY
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SERVICE_CLASSNAME_DEFAULT
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SINK_SERVICE_CLASSNAME
 
-
+
 static int
-REPLICATION_SOURCE_MAXTHREADS_DEFAULT
+REPLICATION_SOURCE_MAXTHREADS_DEFAULT
+Maximum number of threads used by the replication source 
for shipping edits to the sinks
+
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_MAXTHREADS_KEY
 Maximum number of threads used by the replication source 
for shipping edits to the sinks
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_SERVICE_CLASSNAME
 
-
+
 static int
 REPLICATION_SOURCE_TOTAL_BUFFER_DFAULT
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_TOTAL_BUFFER_KEY
 Max total size of buffered entries in all replication 
peers.
 
 
-
+
 static int[]
 RETRY_BACKOFF
 When retrying, we multiply the hbase.client.pause setting by the values in 
 this array until we run out of array items.
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 RPC_CODEC_CONF_KEY
 Configuration key for setting RPC codec class name
 
 
-
+
 static byte
 RPC_CURRENT_VERSION
 
-
+
 static byte[]
 RPC_HEADER
 The first four bytes of Hadoop RPC connections
 
 
-
+
 static byte[]
 SEQNUM_QUALIFIER
 The open seqnum column qualifier
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SEQNUM_QUALIFIER_STR
 The open seqnum column qualifier
 
 
-
+
 static byte[]
 SERVER_QUALIFIER
 The server column qualifier
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SERVER_QUALIFIER_STR
 The server column qualifier
 
 
-
+
 static byte[]
 SERVERNAME_QUALIFIER
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SERVERNAME_QUALIFIER_STR
 The serverName column qualifier.
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SNAPSHOT_DIR_NAME
 Name of the directory to store all snapshots.
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SNAPSHOT_RESTORE_FAILSAFE_NAME
 
-
+
 static 
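The REPLICATION_SCOPE_* constants listed above control per-column-family replication. A minimal, hedged sketch of their use when creating a table (the table name `t1` and family `cf` are invented for illustration; assumes an hbase-client 2.x dependency and a reachable cluster, so this is a shape sketch rather than a definitive recipe):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class ReplicatedTableSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
          .addColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
              // Mark this family for cross-cluster replication; the default
              // is REPLICATION_SCOPE_LOCAL (no replication).
              .setScope(HConstants.REPLICATION_SCOPE_GLOBAL)
              .build())
          .build());
    }
  }
}
```

Only families with a global scope are shipped by the replication source; serially scoped families additionally preserve sequence-id ordering across peers.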

[46/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
index 95305fb..afdf17a 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
@@ -197,8 +197,8 @@ the order they are declared.
 
 
 
-long
-Table.incrementColumnValue(byte[]row,
+default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
+AsyncTableBase.incrementColumnValue(byte[]row,
 byte[]family,
 byte[]qualifier,
 longamount,
@@ -207,8 +207,8 @@ the order they are declared.
 
 
 
-default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
-AsyncTableBase.incrementColumnValue(byte[]row,
+long
+Table.incrementColumnValue(byte[]row,
 byte[]family,
 byte[]qualifier,
 longamount,
@@ -217,12 +217,12 @@ the order they are declared.
 
 
 
-Increment
-Increment.setDurability(Durabilityd)
+Append
+Append.setDurability(Durabilityd)
 
 
-Delete
-Delete.setDurability(Durabilityd)
+Increment
+Increment.setDurability(Durabilityd)
 
 
 Mutation
@@ -235,8 +235,8 @@ the order they are declared.
 Put.setDurability(Durabilityd)
 
 
-Append
-Append.setDurability(Durabilityd)
+Delete
+Delete.setDurability(Durabilityd)
 
 
 TableDescriptorBuilder
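Each mutation type in the table above carries its own setDurability override. A hedged sketch of a per-mutation override (the row, family, and qualifier names are invented; requires an hbase-client dependency and a live Table):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DurabilitySketch {
  // Trade durability for throughput on this one mutation only;
  // the table-level durability default is left untouched.
  static Put lowDurabilityPut() {
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    put.setDurability(Durability.SKIP_WAL); // skip the write-ahead log for this Put
    return put;
  }

  static void apply(Table table) throws IOException {
    table.put(lowDurabilityPut());
  }
}
```

The same setter appears on Append, Delete, Increment, and Mutation, so the pattern applies uniformly across mutation types.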

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html
index f51c043..935dcd6 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html
@@ -257,26 +257,26 @@
 
 
 
-boolean
-Table.exists(Getget)
+default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
+AsyncTableBase.exists(Getget)
 Test for the existence of columns in the table, as 
specified by the Get.
 
 
 
-default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
-AsyncTableBase.exists(Getget)
+boolean
+Table.exists(Getget)
 Test for the existence of columns in the table, as 
specified by the Get.
 
 
 
-Result
-Table.get(Getget)
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureResult
+AsyncTableBase.get(Getget)
 Extracts certain cells from a given row.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureResult
-AsyncTableBase.get(Getget)
+Result
+Table.get(Getget)
 Extracts certain cells from a given row.
 
 
@@ -290,18 +290,24 @@
 
 
 
-boolean[]
-Table.exists(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListGetgets)
+default http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">Listhttp://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
+AsyncTableBase.exists(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListGetgets)
 Test for the existence of columns in the table, as 
specified by the Gets.
 
 
 
-default 
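The table above pairs each blocking Table call with a CompletableFuture-returning AsyncTableBase counterpart. A hedged sketch of the async side (the row, family, and qualifier names are invented; assumes a live async client):

```java
import java.util.concurrent.CompletableFuture;

import org.apache.hadoop.hbase.client.AsyncTableBase;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class AsyncGetSketch {
  static void fetch(AsyncTableBase table) {
    Get get = new Get(Bytes.toBytes("row1"));
    // Non-blocking: the returned future completes when the RPC does.
    CompletableFuture<Result> future = table.get(get);
    future.thenAccept(result ->
        System.out.println(Bytes.toStringBinary(
            result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q")))));
  }
}
```

exists(Get) follows the same shape, completing a CompletableFuture of Boolean instead of blocking for a boolean.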

[17/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/Scan.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/Scan.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/Scan.html
index 4e9ad44..d9fb34d 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/Scan.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/Scan.html
@@ -283,14 +283,6 @@ service.
 
 
 private Scan
-AsyncScanSingleRegionRpcRetryingCaller.scan
-
-
-protected Scan
-ScannerCallable.scan
-
-
-private Scan
 ScannerCallableWithReplicas.scan
 
 
@@ -307,6 +299,14 @@ service.
 
 
 private Scan
+AsyncScanSingleRegionRpcRetryingCaller.scan
+
+
+protected Scan
+ScannerCallable.scan
+
+
+private Scan
 TableSnapshotScanner.scan
 
 
@@ -339,11 +339,11 @@ service.
 
 
 protected Scan
-ScannerCallable.getScan()
+ClientScanner.getScan()
 
 
 protected Scan
-ClientScanner.getScan()
+ScannerCallable.getScan()
 
 
 Scan
@@ -638,8 +638,8 @@ service.
 
 
 ResultScanner
-HTable.getScanner(Scanscan)
-The underlying HTable must 
not be closed.
+AsyncTable.getScanner(Scanscan)
+Returns a scanner on the current table as specified by the 
Scan 
object.
 
 
 
@@ -655,8 +655,8 @@ service.
 
 
 ResultScanner
-AsyncTable.getScanner(Scanscan)
-Returns a scanner on the current table as specified by the 
Scan 
object.
+HTable.getScanner(Scanscan)
+The underlying HTable must 
not be closed.
 
 
 
@@ -689,16 +689,16 @@ service.
 
 
 void
-AsyncTableImpl.scan(Scanscan,
-ScanResultConsumerconsumer)
-
-
-void
 AsyncTable.scan(Scanscan,
 ScanResultConsumerconsumer)
 The scan API uses the observer pattern.
 
 
+
+void
+AsyncTableImpl.scan(Scanscan,
+ScanResultConsumerconsumer)
+
 
 private void
 AsyncTableImpl.scan0(Scanscan,
@@ -706,11 +706,11 @@ service.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListResult
-RawAsyncTableImpl.scanAll(Scanscan)
+AsyncTableImpl.scanAll(Scanscan)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListResult
-AsyncTableImpl.scanAll(Scanscan)
+RawAsyncTableImpl.scanAll(Scanscan)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListResult
@@ -1311,17 +1311,17 @@ service.
 
 
 private Scan
-TableSnapshotInputFormatImpl.RecordReader.scan
+TableInputFormatBase.scan
+Holds the details for the internal scanner.
+
 
 
 private Scan
-TableRecordReaderImpl.scan
+TableSnapshotInputFormatImpl.RecordReader.scan
 
 
 private Scan
-TableInputFormatBase.scan
-Holds the details for the internal scanner.
-
+TableRecordReaderImpl.scan
 
 
 
@@ -1371,14 +1371,14 @@ service.
 
 
 Scan
-TableSplit.getScan()
-Returns a Scan object from the stored string 
representation.
+TableInputFormatBase.getScan()
+Gets the scan defining the actual details like columns 
etc.
 
 
 
 Scan
-TableInputFormatBase.getScan()
-Gets the scan defining the actual details like columns 
etc.
+TableSplit.getScan()
+Returns a Scan object from the stored string 
representation.
 
 
 
@@ -1624,13 +1624,13 @@ service.
 
 
 void
-TableRecordReaderImpl.setScan(Scanscan)
+TableInputFormatBase.setScan(Scanscan)
 Sets the scan defining the actual details like columns 
etc.
 
 
 
 void
-TableInputFormatBase.setScan(Scanscan)
+TableRecordReaderImpl.setScan(Scanscan)
 Sets the scan defining the actual details like columns 
etc.
 
 
@@ -1697,6 +1697,12 @@ service.
 
 
 
+static void
+MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configurationconfiguration,
+http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">Maphttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String,http://docs.oracle.com/javase/8/docs/api/java/util/Collection.html?is-external=true;
 title="class or interface in java.util">CollectionScansnapshotScans,
+org.apache.hadoop.fs.PathtmpRestoreDir)
+
+
 void
 MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configurationconf,
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in 

[10/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html 
b/devapidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html
index 53b63c6..67eb2fa 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/io/class-use/ImmutableBytesWritable.html
@@ -162,11 +162,11 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 ImmutableBytesWritable
-TableSnapshotInputFormat.TableSnapshotRecordReader.createKey()
+TableRecordReader.createKey()
 
 
 ImmutableBytesWritable
-TableRecordReader.createKey()
+TableSnapshotInputFormat.TableSnapshotRecordReader.createKey()
 
 
 ImmutableBytesWritable
@@ -183,9 +183,11 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 org.apache.hadoop.mapred.RecordReaderImmutableBytesWritable,Result
-TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
+TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
org.apache.hadoop.mapred.JobConfjob,
-   
org.apache.hadoop.mapred.Reporterreporter)
+   org.apache.hadoop.mapred.Reporterreporter)
+Builds a TableRecordReader.
+
 
 
 org.apache.hadoop.mapred.RecordReaderImmutableBytesWritable,Result
@@ -195,11 +197,9 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 org.apache.hadoop.mapred.RecordReaderImmutableBytesWritable,Result
-TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
+TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
org.apache.hadoop.mapred.JobConfjob,
-   org.apache.hadoop.mapred.Reporterreporter)
-Builds a TableRecordReader.
-
+   
org.apache.hadoop.mapred.Reporterreporter)
 
 
 
@@ -218,10 +218,12 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 void
-RowCounter.RowCounterMapper.map(ImmutableBytesWritablerow,
-   Resultvalues,
+IdentityTableMap.map(ImmutableBytesWritablekey,
+   Resultvalue,
org.apache.hadoop.mapred.OutputCollectorImmutableBytesWritable,Resultoutput,
-   org.apache.hadoop.mapred.Reporterreporter)
+   org.apache.hadoop.mapred.Reporterreporter)
+Pass the key, value to reduce
+
 
 
 void
@@ -234,21 +236,19 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 void
-IdentityTableMap.map(ImmutableBytesWritablekey,
-   Resultvalue,
+RowCounter.RowCounterMapper.map(ImmutableBytesWritablerow,
+   Resultvalues,
org.apache.hadoop.mapred.OutputCollectorImmutableBytesWritable,Resultoutput,
-   org.apache.hadoop.mapred.Reporterreporter)
-Pass the key, value to reduce
-
+   org.apache.hadoop.mapred.Reporterreporter)
 
 
 boolean
-TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritablekey,
+TableRecordReader.next(ImmutableBytesWritablekey,
 Resultvalue)
 
 
 boolean
-TableRecordReader.next(ImmutableBytesWritablekey,
+TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritablekey,
 Resultvalue)
 
 
@@ -281,10 +281,12 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 void
-RowCounter.RowCounterMapper.map(ImmutableBytesWritablerow,
-   Resultvalues,
+IdentityTableMap.map(ImmutableBytesWritablekey,
+   Resultvalue,
org.apache.hadoop.mapred.OutputCollectorImmutableBytesWritable,Resultoutput,
-   org.apache.hadoop.mapred.Reporterreporter)
+   org.apache.hadoop.mapred.Reporterreporter)
+Pass the key, value to reduce
+
 
 
 void
@@ -297,12 +299,10 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 void
-IdentityTableMap.map(ImmutableBytesWritablekey,
-   Resultvalue,
+RowCounter.RowCounterMapper.map(ImmutableBytesWritablerow,
+   Resultvalues,
org.apache.hadoop.mapred.OutputCollectorImmutableBytesWritable,Resultoutput,
-   org.apache.hadoop.mapred.Reporterreporter)
-Pass the key, value to reduce
-
+   org.apache.hadoop.mapred.Reporterreporter)
 
 
 void
@@ -349,7 +349,7 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 private ImmutableBytesWritable
-TableRecordReaderImpl.key
+MultithreadedTableMapper.SubMapRecordReader.key
 
 
 private ImmutableBytesWritable
@@ -357,7 +357,7 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 private ImmutableBytesWritable
-MultithreadedTableMapper.SubMapRecordReader.key
+TableRecordReaderImpl.key
 
 
 (package private) ImmutableBytesWritable
@@ -427,33 +427,33 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 ImmutableBytesWritable
-TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentKey()

[40/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
index 60da4ee..cdc47c2 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
@@ -186,122 +186,124 @@
 178
 179ListInputSplit splits = new 
ArrayList();
 180Iterator iter = 
tableMaps.entrySet().iterator();
-181while (iter.hasNext()) {
-182  Map.EntryTableName, 
ListScan entry = (Map.EntryTableName, ListScan) 
iter.next();
-183  TableName tableName = 
entry.getKey();
-184  ListScan scanList = 
entry.getValue();
-185
-186  try (Connection conn = 
ConnectionFactory.createConnection(context.getConfiguration());
-187Table table = 
conn.getTable(tableName);
-188RegionLocator regionLocator = 
conn.getRegionLocator(tableName)) {
-189RegionSizeCalculator 
sizeCalculator = new RegionSizeCalculator(
-190regionLocator, 
conn.getAdmin());
-191Pairbyte[][], byte[][] 
keys = regionLocator.getStartEndKeys();
-192for (Scan scan : scanList) {
-193  if (keys == null || 
keys.getFirst() == null || keys.getFirst().length == 0) {
-194throw new 
IOException("Expecting at least one region for table : "
-195+ 
tableName.getNameAsString());
-196  }
-197  int count = 0;
+181// Make a single Connection to the 
Cluster and use it across all tables.
+182try (Connection conn = 
ConnectionFactory.createConnection(context.getConfiguration())) {
+183  while (iter.hasNext()) {
+184Map.EntryTableName, 
ListScan entry = (Map.EntryTableName, ListScan) 
iter.next();
+185TableName tableName = 
entry.getKey();
+186ListScan scanList = 
entry.getValue();
+187try (Table table = 
conn.getTable(tableName);
+188 RegionLocator regionLocator 
= conn.getRegionLocator(tableName)) {
+189  RegionSizeCalculator 
sizeCalculator = new RegionSizeCalculator(
+190  regionLocator, 
conn.getAdmin());
+191  Pairbyte[][], byte[][] 
keys = regionLocator.getStartEndKeys();
+192  for (Scan scan : scanList) {
+193if (keys == null || 
keys.getFirst() == null || keys.getFirst().length == 0) {
+194  throw new 
IOException("Expecting at least one region for table : "
+195  + 
tableName.getNameAsString());
+196}
+197int count = 0;
 198
-199  byte[] startRow = 
scan.getStartRow();
-200  byte[] stopRow = 
scan.getStopRow();
+199byte[] startRow = 
scan.getStartRow();
+200byte[] stopRow = 
scan.getStopRow();
 201
-202  for (int i = 0; i  
keys.getFirst().length; i++) {
-203if 
(!includeRegionInSplit(keys.getFirst()[i], keys.getSecond()[i])) {
-204  continue;
-205}
+202for (int i = 0; i  
keys.getFirst().length; i++) {
+203  if 
(!includeRegionInSplit(keys.getFirst()[i], keys.getSecond()[i])) {
+204continue;
+205  }
 206
-207if ((startRow.length == 0 || 
keys.getSecond()[i].length == 0 ||
-208
Bytes.compareTo(startRow, keys.getSecond()[i])  0) 
-209(stopRow.length == 0 
|| Bytes.compareTo(stopRow,
-210
keys.getFirst()[i])  0)) {
-211  byte[] splitStart = 
startRow.length == 0 ||
-212  
Bytes.compareTo(keys.getFirst()[i], startRow) = 0 ?
-213  keys.getFirst()[i] 
: startRow;
-214  byte[] splitStop = 
(stopRow.length == 0 ||
-215  
Bytes.compareTo(keys.getSecond()[i], stopRow) = 0) 
-216  
keys.getSecond()[i].length  0 ?
-217  keys.getSecond()[i] 
: stopRow;
+207  if ((startRow.length == 0 
|| keys.getSecond()[i].length == 0 ||
+208  
Bytes.compareTo(startRow, keys.getSecond()[i])  0) 
+209  (stopRow.length == 0 || 
Bytes.compareTo(stopRow,
+210  keys.getFirst()[i]) 
 0)) {
+211byte[] splitStart = 
startRow.length == 0 ||
+212
Bytes.compareTo(keys.getFirst()[i], startRow) = 0 ?
+213keys.getFirst()[i] : 
startRow;
+214byte[] splitStop = 
(stopRow.length == 0 ||
+215
Bytes.compareTo(keys.getSecond()[i], stopRow) = 0) 
+216
keys.getSecond()[i].length  0 ?
+217keys.getSecond()[i] : 
stopRow;
 218
-219  HRegionLocation 
hregionLocation = 
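The hunk above moves Connection creation out of the per-table loop so that a single Connection is shared across all tables when computing splits. A hedged sketch of that shape (names follow the hunk; the actual split computation is elided):

```java
import java.io.IOException;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class SharedConnectionSplits {
  static void computeSplits(Configuration conf,
      Map<TableName, List<Scan>> tableMaps) throws IOException {
    // One Connection for the whole job, reused by every table below;
    // only the Table and RegionLocator are opened per entry.
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      for (Map.Entry<TableName, List<Scan>> entry : tableMaps.entrySet()) {
        try (Table table = conn.getTable(entry.getKey());
             RegionLocator locator = conn.getRegionLocator(entry.getKey())) {
          // per-table split computation over locator.getStartEndKeys() goes here
        }
      }
    }
  }
}
```

Since a Connection is heavyweight (ZooKeeper session, RPC machinery), hoisting it above the loop avoids paying that setup cost once per table.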

[22/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/Put.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/Put.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/Put.html
index d7a15f2..d15747c 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/Put.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/Put.html
@@ -629,7 +629,7 @@ service.
 
 
 boolean
-HTable.checkAndPut(byte[]row,
+Table.checkAndPut(byte[]row,
byte[]family,
byte[]qualifier,
byte[]value,
@@ -640,7 +640,7 @@ service.
 
 
 boolean
-Table.checkAndPut(byte[]row,
+HTable.checkAndPut(byte[]row,
byte[]family,
byte[]qualifier,
byte[]value,
@@ -651,33 +651,33 @@ service.
 
 
 boolean
-HTable.checkAndPut(byte[]row,
+Table.checkAndPut(byte[]row,
byte[]family,
byte[]qualifier,
CompareFilter.CompareOpcompareOp,
byte[]value,
Putput)
-Atomically checks if a row/family/qualifier value matches 
the expected
- value.
+Deprecated.
+Since 2.0.0. Will be 
removed in 3.0.0. Use
+  Table.checkAndPut(byte[],
 byte[], byte[], CompareOperator, byte[], Put)}
+
 
 
 
 boolean
-Table.checkAndPut(byte[]row,
+HTable.checkAndPut(byte[]row,
byte[]family,
byte[]qualifier,
CompareFilter.CompareOpcompareOp,
byte[]value,
Putput)
-Deprecated.
-Since 2.0.0. Will be 
removed in 3.0.0. Use
-  Table.checkAndPut(byte[],
 byte[], byte[], CompareOperator, byte[], Put)}
-
+Atomically checks if a row/family/qualifier value matches 
the expected
+ value.
 
 
 
 boolean
-HTable.checkAndPut(byte[]row,
+Table.checkAndPut(byte[]row,
byte[]family,
byte[]qualifier,
CompareOperatorop,
@@ -689,7 +689,7 @@ service.
 
 
 boolean
-Table.checkAndPut(byte[]row,
+HTable.checkAndPut(byte[]row,
byte[]family,
byte[]qualifier,
CompareOperatorop,
@@ -728,16 +728,6 @@ service.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-RawAsyncTableImpl.put(Putput)
-
-
-void
-HTable.put(Putput)
-Puts some data in the table.
-
-
-
 void
 Table.put(Putput)
 Puts some data in the table.
@@ -749,6 +739,16 @@ service.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
+RawAsyncTableImpl.put(Putput)
+
+
+void
+HTable.put(Putput)
+Puts some data in the table.
+
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
 AsyncTableBase.put(Putput)
 Puts some data to the table.
 
@@ -778,11 +778,11 @@ service.
 
 
 void
-BufferedMutatorImpl.validatePut(Putput)
+HTable.validatePut(Putput)
 
 
 void
-HTable.validatePut(Putput)
+BufferedMutatorImpl.validatePut(Putput)
 
 
 static void
@@ -808,16 +808,6 @@ service.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">Listhttp://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-RawAsyncTableImpl.put(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListPutputs)
-
-
-void
-HTable.put(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListPutputs)
-Batch puts the specified data into the table.
-
-
-
 void
 Table.put(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListPutputs)
 Batch puts the specified data into the table.
@@ -829,6 +819,16 @@ service.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">Listhttp://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in 
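Per the deprecation note in the table above, the CompareFilter.CompareOp overload of checkAndPut gives way to the CompareOperator one. A hedged sketch of the replacement call (the row, family, and values are invented; requires an hbase-client dependency and a live Table):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndPutSketch {
  static boolean putIfMatches(Table table) throws IOException {
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("new"));
    // Atomically: apply the Put only if cf:q currently equals "old".
    return table.checkAndPut(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), CompareOperator.EQUAL, Bytes.toBytes("old"), put);
  }
}
```

The check and the mutation execute as one atomic server-side operation, so no client-side locking is needed.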

[21/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionInfo.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionInfo.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionInfo.html
index d436b28..78b3155 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionInfo.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionInfo.html
@@ -516,7 +516,7 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 private static HRegionLocation
-MetaTableAccessor.getRegionLocation(Resultr,
+AsyncMetaTableAccessor.getRegionLocation(Resultr,
  RegionInforegionInfo,
  intreplicaId)
 Returns the HRegionLocation parsed from the given meta row 
Result
@@ -525,7 +525,7 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 private static HRegionLocation
-AsyncMetaTableAccessor.getRegionLocation(Resultr,
+MetaTableAccessor.getRegionLocation(Resultr,
  RegionInforegionInfo,
  intreplicaId)
 Returns the HRegionLocation parsed from the given meta row 
Result
@@ -969,17 +969,17 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfo
-AsyncAdmin.getOnlineRegions(ServerNameserverName)
-Get all the online regions on a region server.
-
+AsyncHBaseAdmin.getOnlineRegions(ServerNameserverName)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfo
-RawAsyncHBaseAdmin.getOnlineRegions(ServerNameserverName)
+AsyncAdmin.getOnlineRegions(ServerNameserverName)
+Get all the online regions on a region server.
+
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfo
-AsyncHBaseAdmin.getOnlineRegions(ServerNameserverName)
+RawAsyncHBaseAdmin.getOnlineRegions(ServerNameserverName)
 
 
 (package private) PairRegionInfo,ServerName
@@ -1013,17 +1013,17 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfo
-AsyncAdmin.getTableRegions(TableNametableName)
-Get the regions of a given table.
-
+AsyncHBaseAdmin.getTableRegions(TableNametableName)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfo
-RawAsyncHBaseAdmin.getTableRegions(TableNametableName)
+AsyncAdmin.getTableRegions(TableNametableName)
+Get the regions of a given table.
+
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfo
-AsyncHBaseAdmin.getTableRegions(TableNametableName)
+RawAsyncHBaseAdmin.getTableRegions(TableNametableName)
 
 
 static http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfo
@@ -1800,15 +1800,15 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerName
-FavoredNodesPlan.getFavoredNodes(RegionInforegion)
+FavoredNodeLoadBalancer.getFavoredNodes(RegionInforegionInfo)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListServerName
-FavoredNodesManager.getFavoredNodes(RegionInforegionInfo)
+FavoredNodesPlan.getFavoredNodes(RegionInforegion)
 
 
 

[26/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html
index 4378d2e..7131751 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html
@@ -104,13 +104,13 @@
 
 
 void
-MasterServices.checkTableModifiable(TableNametableName)
-Check table is modifiable; i.e.
-
+HMaster.checkTableModifiable(TableNametableName)
 
 
 void
-HMaster.checkTableModifiable(TableNametableName)
+MasterServices.checkTableModifiable(TableNametableName)
+Check table is modifiable; i.e.
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
index 71c9d33..1e9a0a9 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
@@ -157,13 +157,13 @@
 
 
 void
-MasterServices.checkTableModifiable(TableNametableName)
-Check table is modifiable; i.e.
-
+HMaster.checkTableModifiable(TableNametableName)
 
 
 void
-HMaster.checkTableModifiable(TableNametableName)
+MasterServices.checkTableModifiable(TableNametableName)
+Check table is modifiable; i.e.
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/Tag.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/Tag.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/Tag.html
index 8a26ebd..6442178 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/Tag.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/Tag.html
@@ -166,18 +166,18 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 static Tag
-PrivateCellUtil.getTag(Cellcell,
+CellUtil.getTag(Cellcell,
   bytetype)
-Retrieve Cell's first tag, matching the passed in type
+Deprecated.
+As of release 2.0.0, this 
will be removed in HBase 3.0.0.
+
 
 
 
 static Tag
-CellUtil.getTag(Cellcell,
+PrivateCellUtil.getTag(Cellcell,
   bytetype)
-Deprecated.
-As of release 2.0.0, this 
will be removed in HBase 3.0.0.
-
+Retrieve Cell's first tag, matching the passed in type
 
 
 
@@ -229,23 +229,17 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 static http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListTag
-PrivateCellUtil.getTags(Cellcell)
-
-
-static http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListTag
 CellUtil.getTags(Cellcell)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0.
 
 
 
-
-private static http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorTag
-PrivateCellUtil.tagsIterator(byte[]tags,
-intoffset,
-intlength)
-
 
+static http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListTag
+PrivateCellUtil.getTags(Cellcell)
+
+
 static http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorTag
 CellUtil.tagsIterator(byte[]tags,
 intoffset,
@@ -256,6 +250,12 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
+
+private static http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorTag
+PrivateCellUtil.tagsIterator(byte[]tags,
+intoffset,
+intlength)
+
 
 private static http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorTag
 PrivateCellUtil.tagsIterator(http://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBuffertags,
@@ -386,12 +386,12 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 static Cell
-PrivateCellUtil.createCell(Cellcell,
+CellUtil.createCell(Cellcell,
   http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListTagtags)
 
 
 static Cell

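The Tag hunks above show the HBase 2.0 pattern of deprecating public CellUtil methods while moving their logic into PrivateCellUtil. A generic, self-contained sketch of that deprecation-shim pattern — the class names and the string-based "tag" format below are illustrative stand-ins, not HBase's actual types:

```java
public class DeprecationShim {
    // Internal utility that now owns the logic (analogous to PrivateCellUtil).
    static final class InternalUtil {
        static String firstTagOfType(String[] tags, char type) {
            for (String t : tags) {
                if (!t.isEmpty() && t.charAt(0) == type) {
                    return t;
                }
            }
            return null; // no tag of the requested type
        }
    }

    // Public facade kept for source compatibility (analogous to
    // CellUtil.getTag): marked deprecated and delegating to the
    // internal implementation until it can be removed.
    @Deprecated
    public static String getTag(String[] tags, char type) {
        return InternalUtil.firstTagOfType(tags, type);
    }

    public static void main(String[] args) {
        String[] tags = {"a:one", "b:two", "a:three"};
        System.out.println(getTag(tags, 'b')); // prints "b:two"
    }
}
```

Existing callers keep compiling (with a deprecation warning), while new code targets the internal utility directly.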
[35/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/HConstants.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/HConstants.html 
b/devapidocs/org/apache/hadoop/hbase/HConstants.html
index bd701c1..3b14bfa 100644
--- a/devapidocs/org/apache/hadoop/hbase/HConstants.html
+++ b/devapidocs/org/apache/hadoop/hbase/HConstants.html
@@ -1499,318 +1499,326 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 
+static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+REPLICATION_DROP_ON_DELETED_TABLE_KEY
+Drop edits for tables that have been deleted from the 
replication source and target
+
+
+
 static byte[]
 REPLICATION_META_FAMILY
 The replication meta family
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_META_FAMILY_STR
 The replication meta family as a string
 
 
-
+
 static byte[]
 REPLICATION_POSITION_FAMILY
 The replication position family
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_POSITION_FAMILY_STR
 The replication position family as a string
 
 
-
+
 static int
 REPLICATION_QOS
 
-
+
 static int
 REPLICATION_SCOPE_GLOBAL
 Scope tag for globally scoped data.
 
 
-
+
 static int
 REPLICATION_SCOPE_LOCAL
 Scope tag for locally scoped data.
 
 
-
+
 static int
 REPLICATION_SCOPE_SERIAL
Scope tag for serially scoped data.
 This data will be replicated to all peers in the order of sequence id.
 
 
-
+
 static long
 REPLICATION_SERIALLY_WAITING_DEFAULT
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SERIALLY_WAITING_KEY
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SERVICE_CLASSNAME_DEFAULT
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SINK_SERVICE_CLASSNAME
 
-
+
 static int
-REPLICATION_SOURCE_MAXTHREADS_DEFAULT
+REPLICATION_SOURCE_MAXTHREADS_DEFAULT
+Maximum number of threads used by the replication source 
for shipping edits to the sinks
+
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_MAXTHREADS_KEY
 Maximum number of threads used by the replication source 
for shipping edits to the sinks
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_SERVICE_CLASSNAME
 
-
+
 static int
 REPLICATION_SOURCE_TOTAL_BUFFER_DFAULT
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 REPLICATION_SOURCE_TOTAL_BUFFER_KEY
 Max total size of buffered entries in all replication 
peers.
 
 
-
+
 static int[]
 RETRY_BACKOFF
When retrying, we multiply the hbase.client.pause setting by the values in this array until we
 run out of array items.
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 RPC_CODEC_CONF_KEY
 Configuration key for setting RPC codec class name
 
 
-
+
 static byte
 RPC_CURRENT_VERSION
 
-
+
 static byte[]
 RPC_HEADER
 The first four bytes of Hadoop RPC connections
 
 
-
+
 static byte[]
 SEQNUM_QUALIFIER
 The open seqnum column qualifier
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SEQNUM_QUALIFIER_STR
 The open seqnum column qualifier
 
 
-
+
 static byte[]
 SERVER_QUALIFIER
 The server column qualifier
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SERVER_QUALIFIER_STR
 The server column qualifier
 
 
-
+
 static byte[]
 SERVERNAME_QUALIFIER
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SERVERNAME_QUALIFIER_STR
 The serverName column qualifier.
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SNAPSHOT_DIR_NAME
 Name of the directory to store all snapshots.
 
 
-
+
 static http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 SNAPSHOT_RESTORE_FAILSAFE_NAME
 
-
+
 static 

[48/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/Cell.html 
b/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
index d197835..c10ded7 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
@@ -1074,15 +1074,15 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 
 
 
-Increment
-Increment.add(Cellcell)
-Add the specified KeyValue to this operation.
+Append
+Append.add(Cellcell)
+Add column and value to this Append operation.
 
 
 
-Delete
-Delete.add(Cellkv)
-Add an existing delete marker to this Delete object.
+Increment
+Increment.add(Cellcell)
+Add the specified KeyValue to this operation.
 
 
 
@@ -1092,9 +1092,9 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-Append
-Append.add(Cellcell)
-Add column and value to this Append operation.
+Delete
+Delete.add(Cellkv)
+Add an existing delete marker to this Delete object.
 
 
 
@@ -1177,12 +1177,12 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
   booleanmayHaveMoreCellsInRow)
 
 
-Increment
-Increment.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
+Append
+Append.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 
 
-Delete
-Delete.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
+Increment
+Increment.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 
 
 Mutation
@@ -1195,8 +1195,8 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 Put.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 
 
-Append
-Append.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
+Delete
+Delete.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 
 
 
@@ -1214,67 +1214,67 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 
 
 Cell
-ColumnPrefixFilter.getNextCellHint(Cellcell)
+FilterList.getNextCellHint(CellcurrentCell)
 
 
 Cell
-TimestampsFilter.getNextCellHint(CellcurrentCell)
-Pick the next cell that the scanner should seek to.
-
+MultipleColumnPrefixFilter.getNextCellHint(Cellcell)
 
 
 Cell
-MultiRowRangeFilter.getNextCellHint(CellcurrentKV)
+ColumnRangeFilter.getNextCellHint(Cellcell)
 
 
-Cell
-ColumnPaginationFilter.getNextCellHint(Cellcell)
+abstract Cell
+Filter.getNextCellHint(CellcurrentCell)
+If the filter returns the match code SEEK_NEXT_USING_HINT, 
then it should also tell which is
+ the next key it must seek to.
+
 
 
 Cell
-ColumnRangeFilter.getNextCellHint(Cellcell)
+ColumnPaginationFilter.getNextCellHint(Cellcell)
 
 
 Cell
-FilterList.getNextCellHint(CellcurrentCell)
+FuzzyRowFilter.getNextCellHint(CellcurrentCell)
 
 
 Cell
-MultipleColumnPrefixFilter.getNextCellHint(Cellcell)
+TimestampsFilter.getNextCellHint(CellcurrentCell)
+Pick the next cell that the scanner should seek to.
+
 
 
-abstract Cell
-Filter.getNextCellHint(CellcurrentCell)
-If the filter returns the match code SEEK_NEXT_USING_HINT, 
then it should also tell which is
- the next key it must seek to.
-
+Cell
+ColumnPrefixFilter.getNextCellHint(Cellcell)
 
 
 Cell

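The getNextCellHint rows above belong to the filter contract: when a filter answers SEEK_NEXT_USING_HINT, the scanner asks it for the next key to jump to, so whole runs of cells can be skipped instead of examined one by one. A toy, self-contained analogue of that idea over sorted strings — the prefix-based hint and all names here are invented for illustration, not HBase's Filter API:

```java
import java.util.Arrays;
import java.util.List;

public class SeekHintSketch {
    // Toy analogue of Filter.getNextCellHint: jump past everything that
    // shares the current cell's one-character prefix.
    static String getNextCellHint(String currentCell) {
        return currentCell.substring(0, 1) + "\uffff";
    }

    // Scan a sorted list of "cells", seeking forward with hints whenever
    // the current cell does not match the wanted prefix. Returns how many
    // cells were actually visited.
    static int scanWithHints(List<String> cells, String wantedPrefix) {
        int visited = 0;
        int i = 0;
        while (i < cells.size()) {
            String cell = cells.get(i);
            visited++;
            if (cell.startsWith(wantedPrefix)) {
                i++; // match: step to the next cell
            } else {
                // seek forward using the hint instead of stepping one by one
                String hint = getNextCellHint(cell);
                int j = i;
                while (j < cells.size() && cells.get(j).compareTo(hint) < 0) {
                    j++;
                }
                i = j;
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        List<String> cells = Arrays.asList("a1", "a2", "a3", "b1", "b2", "c1");
        // Only one "a" cell is visited before seeking past the whole "a" run.
        System.out.println(scanWithHints(cells, "b")); // prints "4"
    }
}
```

A plain scan would touch all six cells; the hint lets the scanner visit only four.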
[28/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/TableDescriptors.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/TableDescriptors.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableDescriptors.html
index c95dbba..e0ba149 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/TableDescriptors.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/TableDescriptors.html
@@ -122,11 +122,11 @@
 
 
 TableDescriptors
-MasterServices.getTableDescriptors()
+HMaster.getTableDescriptors()
 
 
 TableDescriptors
-HMaster.getTableDescriptors()
+MasterServices.getTableDescriptors()
 
 
 
@@ -219,7 +219,8 @@
 
 
 
-Context(org.apache.hadoop.conf.Configurationconf,
+Context(org.apache.hadoop.conf.ConfigurationlocalConf,
+   org.apache.hadoop.conf.Configurationconf,
org.apache.hadoop.fs.FileSystemfs,
http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringpeerId,
http://docs.oracle.com/javase/8/docs/api/java/util/UUID.html?is-external=true;
 title="class or interface in java.util">UUIDclusterId,



[34/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/Cell.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/Cell.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/Cell.html
index f909add..3425d77 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/Cell.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/Cell.html
@@ -529,34 +529,34 @@ service.
 
 
 static Cell
-PrivateCellUtil.createCell(Cellcell,
+CellUtil.createCell(Cellcell,
   byte[]tags)
 
 
 static Cell
-CellUtil.createCell(Cellcell,
+PrivateCellUtil.createCell(Cellcell,
   byte[]tags)
 
 
 static Cell
-PrivateCellUtil.createCell(Cellcell,
+CellUtil.createCell(Cellcell,
   byte[]value,
   byte[]tags)
 
 
 static Cell
-CellUtil.createCell(Cellcell,
+PrivateCellUtil.createCell(Cellcell,
   byte[]value,
   byte[]tags)
 
 
 static Cell
-PrivateCellUtil.createCell(Cellcell,
+CellUtil.createCell(Cellcell,
   http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListTagtags)
 
 
 static Cell
-CellUtil.createCell(Cellcell,
+PrivateCellUtil.createCell(Cellcell,
   http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListTagtags)
 
 
@@ -739,16 +739,16 @@ service.
 
 
 static byte[]
-PrivateCellUtil.cloneTags(Cellcell)
-
-
-static byte[]
 CellUtil.cloneTags(Cellcell)
 Deprecated.
 As of HBase-2.0. Will be 
removed in HBase-3.0.
 
 
 
+
+static byte[]
+PrivateCellUtil.cloneTags(Cellcell)
+
 
 static byte[]
 CellUtil.cloneValue(Cellcell)
@@ -762,11 +762,6 @@ service.
 
 
 int
-CellComparatorImpl.compare(Cella,
-   Cellb)
-
-
-int
 KeyValue.MetaComparator.compare(Cellleft,
Cellright)
 Deprecated.
@@ -774,7 +769,7 @@ service.
  table.
 
 
-
+
 int
 KeyValue.KVComparator.compare(Cellleft,
Cellright)
@@ -783,6 +778,11 @@ service.
  rowkey, colfam/qual, timestamp, type, mvcc
 
 
+
+int
+CellComparatorImpl.compare(Cella,
+   Cellb)
+
 
 int
 CellComparatorImpl.compare(Cella,
@@ -793,27 +793,27 @@ service.
 
 
 static int
-PrivateCellUtil.compare(CellComparatorcomparator,
+CellUtil.compare(CellComparatorcomparator,
Cellleft,
byte[]key,
intoffset,
intlength)
-Used when a cell needs to be compared with a key byte[], 
such as when finding the index from
 the index block or bloom keys from the bloom blocks. This byte[] is expected to 
 be serialized in
 the KeyValue serialization format. If the KeyValue (Cell's) serialization 
 format changes, this
 method cannot be used.
+Deprecated.
+As of HBase-2.0. Will be 
removed in HBase-3.0
+
 
 
 
 static int
-CellUtil.compare(CellComparatorcomparator,
+PrivateCellUtil.compare(CellComparatorcomparator,
Cellleft,
byte[]key,
intoffset,
intlength)
-Deprecated.
-As of HBase-2.0. Will be 
removed in HBase-3.0
-
+Used when a cell needs to be compared with a key byte[], 
such as when finding the index from
+ the index block or bloom keys from the bloom blocks. This byte[] is expected to 
be serialized in
the KeyValue serialization format. If the KeyValue (Cell's) serialization 
format changes, this
method cannot be used.
 
 
 
@@ -1016,23 +1016,23 @@ service.
 
 
 int
+KeyValue.KVComparator.compareRows(Cellleft,
+   Cellright)
+Deprecated.
+
+
+
+int
 CellComparatorImpl.compareRows(Cellleft,
Cellright)
 Compares the rows of the left and right cell.
 
 
-
+
 int
 CellComparatorImpl.MetaCellComparator.compareRows(Cellleft,
Cellright)
 
-
-int
-KeyValue.KVComparator.compareRows(Cellleft,
-   Cellright)
-Deprecated.
-
-
 
 int
 CellComparator.compareTimestamps(CellleftCell,
@@ -1042,17 +1042,17 @@ service.
 
 
 int
-CellComparatorImpl.compareTimestamps(Cellleft,
+KeyValue.KVComparator.compareTimestamps(Cellleft,
  Cellright)
-Compares cell's timestamps in DESCENDING order.
-
+Deprecated.
+
 
 
 int
-KeyValue.KVComparator.compareTimestamps(Cellleft,
+CellComparatorImpl.compareTimestamps(Cellleft,
  Cellright)
-Deprecated.
-
+Compares cell's timestamps in DESCENDING order.
+
 
 
 static int
@@ -1239,34 +1239,34 @@ service.
 
 
 static Cell
-PrivateCellUtil.createCell(Cellcell,
+CellUtil.createCell(Cellcell,
   byte[]tags)
 
 
 static Cell
-CellUtil.createCell(Cellcell,
+PrivateCellUtil.createCell(Cellcell,
   byte[]tags)
 
 
 static Cell
-PrivateCellUtil.createCell(Cellcell,
+CellUtil.createCell(Cellcell,
   byte[]value,
   byte[]tags)
 
 
 static Cell
-CellUtil.createCell(Cellcell,
+PrivateCellUtil.createCell(Cellcell,
   byte[]value,
   byte[]tags)
 
 
 static Cell
-PrivateCellUtil.createCell(Cellcell,
+CellUtil.createCell(Cellcell,
   

[47/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/class-use/CompareOperator.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/CompareOperator.html 
b/apidocs/org/apache/hadoop/hbase/class-use/CompareOperator.html
index 5f6d9d4..e9a69de 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/CompareOperator.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/CompareOperator.html
@@ -197,11 +197,11 @@ the order they are declared.
 
 
 protected CompareOperator
-SingleColumnValueFilter.op
+CompareFilter.op
 
 
 protected CompareOperator
-CompareFilter.op
+SingleColumnValueFilter.op
 
 
 
@@ -223,11 +223,11 @@ the order they are declared.
 
 
 CompareOperator
-SingleColumnValueFilter.getCompareOperator()
+CompareFilter.getCompareOperator()
 
 
 CompareOperator
-CompareFilter.getCompareOperator()
+SingleColumnValueFilter.getCompareOperator()
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html 
b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
index ff1834a..7286958 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
@@ -418,38 +418,38 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 TableName
-AsyncTableRegionLocator.getName()
-Gets the fully qualified table name instance of the table 
whose region we want to locate.
+BufferedMutator.getName()
+Gets the fully qualified table name instance of the table 
that this BufferedMutator writes to.
 
 
 
 TableName
-BufferedMutator.getName()
-Gets the fully qualified table name instance of the table 
that this BufferedMutator writes to.
+AsyncTableBase.getName()
+Gets the fully qualified table name instance of this 
table.
 
 
 
 TableName
-AsyncBufferedMutator.getName()
-Gets the fully qualified table name instance of the table 
that this
- AsyncBufferedMutator writes to.
+RegionLocator.getName()
+Gets the fully qualified table name instance of this 
table.
 
 
 
 TableName
-Table.getName()
-Gets the fully qualified table name instance of this 
table.
+AsyncBufferedMutator.getName()
+Gets the fully qualified table name instance of the table 
that this
+ AsyncBufferedMutator writes to.
 
 
 
 TableName
-AsyncTableBase.getName()
-Gets the fully qualified table name instance of this 
table.
+AsyncTableRegionLocator.getName()
+Gets the fully qualified table name instance of the table 
whose region we want to locate.
 
 
 
 TableName
-RegionLocator.getName()
+Table.getName()
 Gets the fully qualified table name instance of this 
table.
 
 
@@ -465,13 +465,13 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 TableName
-TableDescriptor.getTableName()
-Get the name of the table
-
+SnapshotDescription.getTableName()
 
 
 TableName
-SnapshotDescription.getTableName()
+TableDescriptor.getTableName()
+Get the name of the table
+
 
 
 TableName
@@ -846,18 +846,18 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-default AsyncBufferedMutator
-AsyncConnection.getBufferedMutator(TableNametableName)
-Retrieve an AsyncBufferedMutator for 
performing client-side buffering of writes.
-
-
-
 BufferedMutator
 Connection.getBufferedMutator(TableNametableName)
 
  Retrieve a BufferedMutator for performing 
client-side buffering of writes.
 
 
+
+default AsyncBufferedMutator
+AsyncConnection.getBufferedMutator(TableNametableName)
+Retrieve an AsyncBufferedMutator for 
performing client-side buffering of writes.
+
+
 
 default AsyncBufferedMutator
 AsyncConnection.getBufferedMutator(TableNametableName,
@@ -945,17 +945,17 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-AsyncTableRegionLocator
-AsyncConnection.getRegionLocator(TableNametableName)
-Retrieve an AsyncRegionLocator implementation to inspect 
region information on a table.
-
-
-
 RegionLocator
 Connection.getRegionLocator(TableNametableName)
 Retrieve a RegionLocator implementation to inspect region 
information on a table.
 
 
+
+AsyncTableRegionLocator
+AsyncConnection.getRegionLocator(TableNametableName)
+Retrieve an AsyncRegionLocator implementation to inspect 
region information on a table.
+
+
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfo
 Admin.getRegions(TableNametableName)
@@ -969,31 +969,31 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-default AsyncTable
-AsyncConnection.getTable(TableNametableName,
+default Table
+Connection.getTable(TableNametableName,
 

[51/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
Published site at .


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/cba900e4
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/cba900e4
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/cba900e4

Branch: refs/heads/asf-site
Commit: cba900e48739f27c733bf9f4245d685af0acc189
Parents: 07c67a9
Author: jenkins 
Authored: Wed Nov 15 15:30:04 2017 +
Committer: jenkins 
Committed: Wed Nov 15 15:30:04 2017 +

--
 acid-semantics.html | 4 +-
 apache_hbase_reference_guide.pdf| 4 +-
 apidocs/constant-values.html|   119 +-
 apidocs/deprecated-list.html|76 +-
 apidocs/index-all.html  |12 +-
 .../apache/hadoop/hbase/CompareOperator.html| 4 +-
 apidocs/org/apache/hadoop/hbase/HConstants.html |   239 +-
 .../apache/hadoop/hbase/KeepDeletedCells.html   | 4 +-
 .../hadoop/hbase/MemoryCompactionPolicy.html| 4 +-
 .../org/apache/hadoop/hbase/class-use/Cell.html |   298 +-
 .../hadoop/hbase/class-use/CompareOperator.html | 8 +-
 .../hadoop/hbase/class-use/TableName.html   |80 +-
 .../apache/hadoop/hbase/client/CompactType.html | 4 +-
 .../apache/hadoop/hbase/client/Consistency.html | 4 +-
 .../apache/hadoop/hbase/client/Durability.html  | 4 +-
 .../hadoop/hbase/client/IsolationLevel.html | 4 +-
 .../hadoop/hbase/client/MasterSwitchType.html   | 4 +-
 .../hbase/client/MobCompactPartitionPolicy.html | 4 +-
 .../client/RequestController.ReturnCode.html| 4 +-
 .../RetriesExhaustedWithDetailsException.html   |54 +-
 .../hadoop/hbase/client/Scan.ReadType.html  | 4 +-
 .../hadoop/hbase/client/SnapshotType.html   | 4 +-
 .../hadoop/hbase/client/class-use/Append.html   | 8 +-
 .../hbase/client/class-use/Consistency.html | 8 +-
 .../hadoop/hbase/client/class-use/Delete.html   |20 +-
 .../hbase/client/class-use/Durability.html  |20 +-
 .../hadoop/hbase/client/class-use/Get.html  |46 +-
 .../hbase/client/class-use/Increment.html   | 8 +-
 .../hbase/client/class-use/IsolationLevel.html  | 8 +-
 .../hadoop/hbase/client/class-use/Mutation.html | 8 +-
 .../hadoop/hbase/client/class-use/Put.html  |24 +-
 .../hadoop/hbase/client/class-use/Result.html   |22 +-
 .../hbase/client/class-use/ResultScanner.html   |26 +-
 .../hadoop/hbase/client/class-use/Row.html  | 8 +-
 .../hbase/client/class-use/RowMutations.html| 8 +-
 .../hadoop/hbase/client/class-use/Scan.html |22 +-
 .../hadoop/hbase/client/package-tree.html   |12 +-
 .../client/security/SecurityCapability.html | 4 +-
 .../hbase/filter/CompareFilter.CompareOp.html   | 4 +-
 .../filter/class-use/ByteArrayComparable.html   | 8 +-
 .../class-use/CompareFilter.CompareOp.html  | 8 +-
 .../filter/class-use/Filter.ReturnCode.html |   114 +-
 .../hadoop/hbase/filter/class-use/Filter.html   |56 +-
 .../hadoop/hbase/filter/package-tree.html   | 4 +-
 .../io/class-use/ImmutableBytesWritable.html|42 +-
 .../hadoop/hbase/io/class-use/TimeRange.html|12 +-
 .../hbase/io/crypto/class-use/Cipher.html   |18 +-
 .../hbase/io/encoding/DataBlockEncoding.html| 4 +-
 .../mapreduce/MultiTableInputFormatBase.html| 8 +-
 .../mapreduce/class-use/TableRecordReader.html  | 4 +-
 .../org/apache/hadoop/hbase/package-tree.html   | 2 +-
 .../apache/hadoop/hbase/quotas/QuotaType.html   | 4 +-
 .../hbase/quotas/SpaceViolationPolicy.html  | 4 +-
 .../hadoop/hbase/quotas/ThrottleType.html   | 4 +-
 .../hbase/quotas/ThrottlingException.Type.html  | 4 +-
 .../hadoop/hbase/quotas/package-tree.html   | 4 +-
 .../hadoop/hbase/regionserver/BloomType.html| 4 +-
 apidocs/org/apache/hadoop/hbase/util/Order.html | 4 +-
 .../hadoop/hbase/util/class-use/ByteRange.html  |   124 +-
 .../hadoop/hbase/util/class-use/Bytes.html  |48 +-
 .../hadoop/hbase/util/class-use/Order.html  |44 +-
 .../util/class-use/PositionedByteRange.html |   356 +-
 apidocs/overview-tree.html  |22 +-
 .../org/apache/hadoop/hbase/HConstants.html |   231 +-
 .../RetriesExhaustedWithDetailsException.html   |   309 +-
 .../mapreduce/MultiTableInputFormatBase.html|   220 +-
 book.html   | 2 +-
 bulk-loads.html | 4 +-
 checkstyle-aggregate.html   | 42014 -
 checkstyle.rss  |   290 +-
 coc.html| 4 +-
 cygwin.html | 4 +-
 dependencies.html 

[32/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.Option.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.Option.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.Option.html
index 37df0dd..c395de4 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.Option.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.Option.html
@@ -141,7 +141,7 @@ the order they are declared.
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
-AsyncAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
+AsyncHBaseAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
 
 
 ClusterStatus
@@ -150,16 +150,16 @@ the order they are declared.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
-RawAsyncHBaseAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
-
-
 ClusterStatus
 HBaseAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
 
+
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
+AsyncAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
+
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
-AsyncHBaseAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
+RawAsyncHBaseAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.html
index 2b1f776..48bced7 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/ClusterStatus.html
@@ -179,27 +179,27 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
-AsyncAdmin.getClusterStatus()
+AsyncHBaseAdmin.getClusterStatus()
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
-RawAsyncHBaseAdmin.getClusterStatus()
+AsyncAdmin.getClusterStatus()
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
-AsyncHBaseAdmin.getClusterStatus()
+RawAsyncHBaseAdmin.getClusterStatus()
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
-AsyncAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
+AsyncHBaseAdmin.getClusterStatus(http://docs.oracle.com/javase/8/docs/api/java/util/EnumSet.html?is-external=true;
 title="class or interface in java.util">EnumSetClusterStatus.Optionoptions)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureClusterStatus
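The hunk above tracks `getClusterStatus(EnumSet<ClusterStatus.Option>)` moving between `AsyncAdmin`, `AsyncHBaseAdmin`, and `RawAsyncHBaseAdmin`, each returning a `CompletableFuture<ClusterStatus>`. A minimal pure-JDK sketch of that call shape (the `Option` values and the returned status string here are simplified stand-ins, not the real HBase types):

```java
import java.util.EnumSet;
import java.util.concurrent.CompletableFuture;

public class AsyncStatusSketch {
    // Stand-in for ClusterStatus.Option; real options include LIVE_SERVERS etc.
    enum Option { LIVE_SERVERS, DEAD_SERVERS, MASTER }

    // The async call returns immediately; the value is filled in later.
    static CompletableFuture<String> getClusterStatus(EnumSet<Option> options) {
        return CompletableFuture.supplyAsync(() -> "status" + options.size());
    }

    public static void main(String[] args) {
        // Callers request only the fields they need, then compose on the future.
        String s = getClusterStatus(EnumSet.of(Option.LIVE_SERVERS, Option.MASTER))
                .thenApply(String::toUpperCase)
                .join();
        System.out.println(s); // STATUS2
    }
}
```

The `EnumSet` parameter lets callers avoid paying for cluster fields they will not read, which is why the overload exists alongside the no-argument form.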

[33/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/CellBuilder.DataType.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/class-use/CellBuilder.DataType.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/CellBuilder.DataType.html
index 80b569a..2128d5c 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/CellBuilder.DataType.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/CellBuilder.DataType.html
@@ -125,17 +125,17 @@ the order they are declared.
 
 
 
-ExtendedCellBuilder
-ExtendedCellBuilderImpl.setType(CellBuilder.DataTypetype)
-
-
 CellBuilder
 CellBuilder.setType(CellBuilder.DataTypetype)
 
-
+
 ExtendedCellBuilder
 ExtendedCellBuilder.setType(CellBuilder.DataTypetype)
 
+
+ExtendedCellBuilder
+ExtendedCellBuilderImpl.setType(CellBuilder.DataTypetype)
+
 
 private static KeyValue.Type
 ExtendedCellBuilderImpl.toKeyValueType(CellBuilder.DataTypetype)
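The `setType` hunk above shows the same setter declared on both `CellBuilder` and `ExtendedCellBuilder`, with the subinterface narrowing the return type so chained calls stay on the extended type. A simplified sketch of that covariant-builder pattern (all names here are illustrative stand-ins):

```java
public class CovariantBuilder {
    enum DataType { PUT, DELETE }

    interface CellBuilder {
        CellBuilder setType(DataType type);
        String build();
    }

    // Re-declaring setType with a narrower return type keeps extended-only
    // setters reachable mid-chain without casts.
    interface ExtendedCellBuilder extends CellBuilder {
        @Override ExtendedCellBuilder setType(DataType type);
        ExtendedCellBuilder setSequenceId(long id);
    }

    static class Impl implements ExtendedCellBuilder {
        private DataType type;
        private long seqId;
        public Impl setType(DataType t) { this.type = t; return this; }
        public Impl setSequenceId(long id) { this.seqId = id; return this; }
        public String build() { return type + "@" + seqId; }
    }

    public static void main(String[] args) {
        // The chain compiles without casts thanks to the covariant overrides.
        System.out.println(new Impl().setType(DataType.PUT).setSequenceId(7).build());
        // PUT@7
    }
}
```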

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/class-use/CellComparator.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/CellComparator.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/CellComparator.html
index e7df486..380d3db 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/CellComparator.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/CellComparator.html
@@ -172,27 +172,27 @@
 
 
 static int
-PrivateCellUtil.compare(CellComparatorcomparator,
+CellUtil.compare(CellComparatorcomparator,
Cellleft,
byte[]key,
intoffset,
intlength)
-Used when a cell needs to be compared with a key byte[] 
such as cases of finding the index from
- the index block, bloom keys from the bloom blocks This byte[] is expected to 
be serialized in
- the KeyValue serialization format If the KeyValue (Cell's) serialization 
format changes this
- method cannot be used.
+Deprecated.
+As of HBase-2.0. Will be 
removed in HBase-3.0
+
 
 
 
 static int
-CellUtil.compare(CellComparatorcomparator,
+PrivateCellUtil.compare(CellComparatorcomparator,
Cellleft,
byte[]key,
intoffset,
intlength)
-Deprecated.
-As of HBase-2.0. Will be 
removed in HBase-3.0
-
+Used when a cell needs to be compared with a key byte[] 
such as cases of finding the index from
+ the index block, bloom keys from the bloom blocks This byte[] is expected to 
be serialized in
+ the KeyValue serialization format If the KeyValue (Cell's) serialization 
format changes this
+ method cannot be used.
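The javadoc quoted above describes `compare(comparator, cell, byte[], offset, length)`: matching a cell against a window of a serialized key buffer, as done for index-block and bloom lookups. The windowed-comparison idea can be sketched in plain JDK (the raw-bytes "serialization" here is a stand-in, not the real KeyValue layout):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class KeyWindowCompare {
    // Compare the cell's row bytes with buf[offset, offset + length).
    static int compare(byte[] cellRow, byte[] buf, int offset, int length) {
        return Arrays.compare(cellRow, 0, cellRow.length, buf, offset, offset + length);
    }

    public static void main(String[] args) {
        byte[] block = "xxrow1yy".getBytes(StandardCharsets.UTF_8);
        byte[] row = "row1".getBytes(StandardCharsets.UTF_8);
        // 0 means the cell matches the 4-byte window starting at offset 2.
        System.out.println(compare(row, block, 2, 4)); // 0
    }
}
```

As the javadoc warns, this only works while both sides agree on the serialization format, which is why the method breaks if the KeyValue layout ever changes.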
 
 
 
@@ -265,12 +265,12 @@
 
 
 int
-RowIndexSeekerV1.compareKey(CellComparatorcomparator,
+BufferedDataBlockEncoder.BufferedEncodedSeeker.compareKey(CellComparatorcomparator,
   Cellkey)
 
 
 int
-BufferedDataBlockEncoder.BufferedEncodedSeeker.compareKey(CellComparatorcomparator,
+RowIndexSeekerV1.compareKey(CellComparatorcomparator,
   Cellkey)
 
 
@@ -282,27 +282,27 @@
 
 
 DataBlockEncoder.EncodedSeeker
-RowIndexCodecV1.createSeeker(CellComparatorcomparator,
+CopyKeyDataBlockEncoder.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
 DataBlockEncoder.EncodedSeeker
-CopyKeyDataBlockEncoder.createSeeker(CellComparatorcomparator,
+PrefixKeyDeltaEncoder.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
 DataBlockEncoder.EncodedSeeker
-DiffKeyDeltaEncoder.createSeeker(CellComparatorcomparator,
+FastDiffDeltaEncoder.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
 DataBlockEncoder.EncodedSeeker
-FastDiffDeltaEncoder.createSeeker(CellComparatorcomparator,
+DiffKeyDeltaEncoder.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
 DataBlockEncoder.EncodedSeeker
-PrefixKeyDeltaEncoder.createSeeker(CellComparatorcomparator,
+RowIndexCodecV1.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
@@ -340,9 +340,9 @@
 
 
 
-protected CellComparator
-HFileWriterImpl.comparator
-Key comparator.
+private CellComparator
+HFileBlockIndex.CellBasedKeyBlockIndexReader.comparator
+Needed doing lookup on blocks.
 
 
 
@@ -356,9 +356,9 @@
 
 
 
-private CellComparator
-HFileBlockIndex.CellBasedKeyBlockIndexReader.comparator
-Needed doing lookup on blocks.
+protected CellComparator
+HFileWriterImpl.comparator
+Key comparator.
 
 
 
@@ -539,15 +539,15 @@
 
 
 private CellComparator
-StripeStoreFileManager.cellComparator
+DefaultStoreFileManager.cellComparator
 
 
 private CellComparator
-DefaultStoreFileManager.cellComparator
+StripeStoreFileManager.cellComparator
 
 
-protected CellComparator
-StripeMultiFileWriter.comparator
+private CellComparator
+StoreFileWriter.Builder.comparator
 
 
 protected CellComparator
@@ -555,31 +555,31 @@
 
 
 private CellComparator

[15/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/client/package-tree.html
index a441486..aeb0a2a 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/package-tree.html
@@ -542,25 +542,25 @@
 
 java.lang.http://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true;
 title="class or interface in java.lang">EnumE (implements java.lang.http://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true;
 title="class or interface in java.lang">ComparableT, java.io.http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true;
 title="class or interface in java.io">Serializable)
 
-org.apache.hadoop.hbase.client.SnapshotType
-org.apache.hadoop.hbase.client.RequestController.ReturnCode
-org.apache.hadoop.hbase.client.TableState.State
-org.apache.hadoop.hbase.client.AsyncScanSingleRegionRpcRetryingCaller.ScanControllerState
-org.apache.hadoop.hbase.client.AsyncScanSingleRegionRpcRetryingCaller.ScanResumerState
-org.apache.hadoop.hbase.client.ScannerCallable.MoreResults
-org.apache.hadoop.hbase.client.HBaseAdmin.ReplicationState
+org.apache.hadoop.hbase.client.Durability
+org.apache.hadoop.hbase.client.AbstractResponse.ResponseType
 org.apache.hadoop.hbase.client.Scan.ReadType
-org.apache.hadoop.hbase.client.RegionLocateType
 org.apache.hadoop.hbase.client.AsyncProcessTask.SubmittedRows
-org.apache.hadoop.hbase.client.MasterSwitchType
-org.apache.hadoop.hbase.client.IsolationLevel
+org.apache.hadoop.hbase.client.ScannerCallable.MoreResults
+org.apache.hadoop.hbase.client.Consistency
 org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.Retry
+org.apache.hadoop.hbase.client.RequestController.ReturnCode
 org.apache.hadoop.hbase.client.CompactionState
+org.apache.hadoop.hbase.client.SnapshotType
+org.apache.hadoop.hbase.client.MasterSwitchType
+org.apache.hadoop.hbase.client.TableState.State
+org.apache.hadoop.hbase.client.AsyncScanSingleRegionRpcRetryingCaller.ScanControllerState
+org.apache.hadoop.hbase.client.IsolationLevel
 org.apache.hadoop.hbase.client.CompactType
-org.apache.hadoop.hbase.client.Durability
-org.apache.hadoop.hbase.client.AbstractResponse.ResponseType
-org.apache.hadoop.hbase.client.Consistency
+org.apache.hadoop.hbase.client.HBaseAdmin.ReplicationState
 org.apache.hadoop.hbase.client.MobCompactPartitionPolicy
+org.apache.hadoop.hbase.client.RegionLocateType
+org.apache.hadoop.hbase.client.AsyncScanSingleRegionRpcRetryingCaller.ScanResumerState
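The package-tree hunk above reshuffles the same enum list between site builds, which is the churn an unstable iteration order produces. Sorting constants by name, as sketched below, is one way a doc generator could make such listings reproducible (a sketch only; javadoc's actual ordering logic is not shown here):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StableEnumOrder {
    enum Sample { DURABILITY, CONSISTENCY, ISOLATION_LEVEL }

    // Emit constant names in a deterministic, alphabetical order.
    static List<String> sortedNames(Class<? extends Enum<?>> e) {
        return Arrays.stream(e.getEnumConstants())
                .map(Enum::name)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(sortedNames(Sample.class));
        // [CONSISTENCY, DURABILITY, ISOLATION_LEVEL]
    }
}
```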
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/package-use.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/package-use.html 
b/devapidocs/org/apache/hadoop/hbase/client/package-use.html
index 6843ede..7d7604b 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/package-use.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/package-use.html
@@ -3309,16 +3309,6 @@ service.
 
 
 
-Append
-Performs Append operations on a single row.
-
-
-
-Increment
-Used to perform Increment operations on a single row.
-
-
-
 OperationWithAttributes
 
 
@@ -3349,21 +3339,31 @@ service.
 
 
 
+Append
+Performs Append operations on a single row.
+
+
+
 Delete
 Used to perform Delete operations on a single row.
 
 
-
+
 Durability
 Enum describing the durability guarantees for tables and Mutations
  Note that the items must be sorted in order of increasing durability
 
 
-
+
 Get
 Used to perform Get operations on a single row.
 
 
+
+Increment
+Used to perform Increment operations on a single row.
+
+
 
 OperationWithAttributes
 
@@ -3378,24 +3378,29 @@ service.
 
 
 
+Result
+Single row result of a Get or Scan query.
+
+
+
 ResultScanner
 Interface for client-side scanning.
 
 
-
+
 RowMutations
 Performs multiple mutations atomically on a single 
row.
 
 
-
+
 Scan
 Used to perform Scan operations.
 
 
-
+
 Scan.ReadType
 
-
+
 Table
 Used to communicate with a single HBase table.
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/replication/class-use/TableCFs.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/replication/class-use/TableCFs.html 
b/devapidocs/org/apache/hadoop/hbase/client/replication/class-use/TableCFs.html
index 7b4c300..8881b49 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/replication/class-use/TableCFs.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/replication/class-use/TableCFs.html
@@ -106,9 +106,7 @@
 
 
 

[04/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/master/class-use/RegionPlan.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/class-use/RegionPlan.html 
b/devapidocs/org/apache/hadoop/hbase/master/class-use/RegionPlan.html
index 77d772c..ef24a47 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/class-use/RegionPlan.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/class-use/RegionPlan.html
@@ -277,7 +277,10 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionPlan
-FavoredStochasticBalancer.balanceCluster(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapServerName,http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfoclusterState)
+SimpleLoadBalancer.balanceCluster(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapServerName,http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfoclusterMap)
+Generate a global load balancing plan according to the 
specified map of
+ server information to the most loaded regions of each server.
+
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionPlan
@@ -287,19 +290,16 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionPlan
-SimpleLoadBalancer.balanceCluster(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapServerName,http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfoclusterMap)
-Generate a global load balancing plan according to the 
specified map of
- server information to the most loaded regions of each server.
-
+FavoredStochasticBalancer.balanceCluster(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapServerName,http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfoclusterState)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionPlan
-StochasticLoadBalancer.balanceCluster(TableNametableName,
+SimpleLoadBalancer.balanceCluster(TableNametableName,
   http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapServerName,http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfoclusterState)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionPlan
-SimpleLoadBalancer.balanceCluster(TableNametableName,
+StochasticLoadBalancer.balanceCluster(TableNametableName,
   http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapServerName,http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegionInfoclusterState)
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/master/class-use/ServerManager.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/class-use/ServerManager.html 
b/devapidocs/org/apache/hadoop/hbase/master/class-use/ServerManager.html
index e800403..1152117 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/class-use/ServerManager.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/class-use/ServerManager.html
@@ -131,11 +131,11 @@
 
 
 ServerManager
-MasterServices.getServerManager()
+HMaster.getServerManager()
 
 
 ServerManager
-HMaster.getServerManager()
+MasterServices.getServerManager()
 
 
 
@@ -209,11 +209,11 @@
 
 
 private ServerManager
-DrainingServerTracker.serverManager
+RegionServerTracker.serverManager
 
 
 private ServerManager
-RegionServerTracker.serverManager
+DrainingServerTracker.serverManager
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/master/class-use/SplitLogManager.ResubmitDirective.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/class-use/SplitLogManager.ResubmitDirective.html
 

[50/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/deprecated-list.html
--
diff --git a/apidocs/deprecated-list.html b/apidocs/deprecated-list.html
index 71d650e..1b0561e 100644
--- a/apidocs/deprecated-list.html
+++ b/apidocs/deprecated-list.html
@@ -663,82 +663,82 @@
 
 
 
-org.apache.hadoop.hbase.filter.ValueFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.WhileMatchFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.ColumnPrefixFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.PageFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.TimestampsFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.MultipleColumnPrefixFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.ColumnCountGetFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.InclusiveStopFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.WhileMatchFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.KeyOnlyFilter.filterKeyValue(Cell)
 
 
 org.apache.hadoop.hbase.filter.RowFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.RandomRowFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.ColumnRangeFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.MultiRowRangeFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.FamilyFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.InclusiveStopFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.RandomRowFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.SingleColumnValueFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.DependentColumnFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.SkipFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.QualifierFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.DependentColumnFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.Filter.filterKeyValue(Cell)
+As of release 2.0.0, this 
will be removed in HBase 3.0.0.
+ Instead use filterCell(Cell)
+
 
 
 org.apache.hadoop.hbase.filter.ColumnPaginationFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.ColumnRangeFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.ValueFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.ColumnCountGetFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.MultipleColumnPrefixFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.QualifierFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.SkipFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.PrefixFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.PageFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.FuzzyRowFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.PrefixFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.TimestampsFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.FamilyFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.ColumnPrefixFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.Filter.filterKeyValue(Cell)
-As of release 2.0.0, this 
will be removed in HBase 3.0.0.
- Instead use filterCell(Cell)
-
+org.apache.hadoop.hbase.filter.MultiRowRangeFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.FuzzyRowFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.SingleColumnValueFilter.filterKeyValue(Cell)
 
 
-org.apache.hadoop.hbase.filter.KeyOnlyFilter.filterKeyValue(Cell)
+org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter.filterKeyValue(Cell)
 
 
 org.apache.hadoop.hbase.filter.Filter.filterRowKey(byte[],
 int, int)
@@ -862,13 +862,13 @@
 org.apache.hadoop.hbase.rest.client.RemoteHTable.getOperationTimeout()
 
 
-org.apache.hadoop.hbase.filter.SingleColumnValueFilter.getOperator()
-since 2.0.0. Will be 
removed in 3.0.0. Use SingleColumnValueFilter.getCompareOperator()
 instead.
+org.apache.hadoop.hbase.filter.CompareFilter.getOperator()
+since 2.0.0. Will be 
removed in 3.0.0. Use CompareFilter.getCompareOperator()
 instead.
 
 
 
-org.apache.hadoop.hbase.filter.CompareFilter.getOperator()
-since 2.0.0. Will be 
removed in 3.0.0. Use CompareFilter.getCompareOperator()
 instead.
+org.apache.hadoop.hbase.filter.SingleColumnValueFilter.getOperator()
+since 2.0.0. Will be 
removed in 3.0.0. Use SingleColumnValueFilter.getCompareOperator()
 instead.
 
 
 
@@ -1461,27 +1461,27 @@
 
 
 
-org.apache.hadoop.hbase.client.Scan.setMaxVersions()
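The deprecated-list entries above pair each retiring method with its replacement (e.g. `getOperator()` with `getCompareOperator()`). The usual bridge, sketched here with simplified stand-in names, keeps the old signature delegating to the new one for a release cycle:

```java
public class DeprecationBridge {
    enum CompareOperator { EQUAL, NOT_EQUAL }

    private final CompareOperator op = CompareOperator.EQUAL;

    /** @deprecated since 2.0.0, use {@link #getCompareOperator()} instead. */
    @Deprecated
    public CompareOperator getOperator() {
        return getCompareOperator(); // old API forwards to the new one
    }

    public CompareOperator getCompareOperator() {
        return op;
    }

    public static void main(String[] args) {
        DeprecationBridge f = new DeprecationBridge();
        // Both entry points must agree while the deprecated one survives.
        System.out.println(f.getOperator() == f.getCompareOperator()); // true
    }
}
```

Delegating rather than duplicating the body is what lets the old method be removed in HBase 3.0 without any behavior drift in between.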

[45/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html 
b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
index 39728e7..30331ac 100644
--- a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
+++ b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
@@ -107,27 +107,27 @@
 
 
 Filter.ReturnCode
-ValueFilter.filterCell(Cellc)
+FilterList.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FirstKeyOnlyFilter.filterCell(Cellc)
+WhileMatchFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterCell(Cellcell)
+PageFilter.filterCell(Cellignored)
 
 
 Filter.ReturnCode
-TimestampsFilter.filterCell(Cellc)
+MultipleColumnPrefixFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-ColumnCountGetFilter.filterCell(Cellc)
+InclusiveStopFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-WhileMatchFilter.filterCell(Cellc)
+KeyOnlyFilter.filterCell(Cellignored)
 
 
 Filter.ReturnCode
@@ -135,33 +135,35 @@
 
 
 Filter.ReturnCode
-RandomRowFilter.filterCell(Cellc)
+ColumnRangeFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-MultiRowRangeFilter.filterCell(Cellignored)
+FamilyFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-InclusiveStopFilter.filterCell(Cellc)
+RandomRowFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-SingleColumnValueFilter.filterCell(Cellc)
+FirstKeyValueMatchingQualifiersFilter.filterCell(Cellc)
+Deprecated.
+
 
 
 Filter.ReturnCode
-DependentColumnFilter.filterCell(Cellc)
+SkipFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-QualifierFilter.filterCell(Cellc)
+DependentColumnFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FirstKeyValueMatchingQualifiersFilter.filterCell(Cellc)
-Deprecated.
-
+Filter.filterCell(Cellc)
+A way to filter based on the column family, column 
qualifier and/or the column value.
+
 
 
 Filter.ReturnCode
@@ -169,87 +171,85 @@
 
 
 Filter.ReturnCode
-ColumnRangeFilter.filterCell(Cellc)
+ValueFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FilterList.filterCell(Cellc)
+ColumnCountGetFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-MultipleColumnPrefixFilter.filterCell(Cellc)
+QualifierFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-SkipFilter.filterCell(Cellc)
+PrefixFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-PageFilter.filterCell(Cellignored)
+FuzzyRowFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-PrefixFilter.filterCell(Cellc)
+TimestampsFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-FamilyFilter.filterCell(Cellc)
+ColumnPrefixFilter.filterCell(Cellcell)
 
 
 Filter.ReturnCode
-Filter.filterCell(Cellc)
-A way to filter based on the column family, column 
qualifier and/or the column value.
-
+MultiRowRangeFilter.filterCell(Cellignored)
 
 
 Filter.ReturnCode
-FuzzyRowFilter.filterCell(Cellc)
+SingleColumnValueFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-KeyOnlyFilter.filterCell(Cellignored)
+FirstKeyOnlyFilter.filterCell(Cellc)
 
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterColumn(Cellcell)
+MultipleColumnPrefixFilter.filterColumn(Cellcell)
 
 
 Filter.ReturnCode
-MultipleColumnPrefixFilter.filterColumn(Cellcell)
+ColumnPrefixFilter.filterColumn(Cellcell)
 
 
 Filter.ReturnCode
-ValueFilter.filterKeyValue(Cellc)
+FilterList.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-FirstKeyOnlyFilter.filterKeyValue(Cellc)
+WhileMatchFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterKeyValue(Cellc)
+PageFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-TimestampsFilter.filterKeyValue(Cellc)
+MultipleColumnPrefixFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-ColumnCountGetFilter.filterKeyValue(Cellc)
+InclusiveStopFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-WhileMatchFilter.filterKeyValue(Cellc)
+KeyOnlyFilter.filterKeyValue(Cellignored)
 Deprecated.
 
 
@@ -261,44 +261,47 @@
 
 
 Filter.ReturnCode
-RandomRowFilter.filterKeyValue(Cellc)
+ColumnRangeFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-MultiRowRangeFilter.filterKeyValue(Cellignored)
+FamilyFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-InclusiveStopFilter.filterKeyValue(Cellc)
+RandomRowFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-SingleColumnValueFilter.filterKeyValue(Cellc)
+FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-DependentColumnFilter.filterKeyValue(Cellc)
+SkipFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-QualifierFilter.filterKeyValue(Cellc)
+DependentColumnFilter.filterKeyValue(Cellc)
 Deprecated.
 
 
 
 Filter.ReturnCode
-FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cellc)
-Deprecated.
+Filter.filterKeyValue(Cellc)
+Deprecated.
+As of release 
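The class-use tables above track the `filterKeyValue(Cell)` to `filterCell(Cell)` rename, where each filter inspects a cell and returns a `Filter.ReturnCode` deciding its fate. A stripped-down version of that contract, with a PrefixFilter-like example (names and behavior are illustrative only, not the real HBase classes):

```java
import java.util.List;
import java.util.stream.Collectors;

public class MiniFilter {
    enum ReturnCode { INCLUDE, SKIP }

    interface Filter {
        // Decide per cell whether it survives the scan.
        ReturnCode filterCell(String cell);
    }

    static List<String> scan(List<String> cells, Filter f) {
        return cells.stream()
                .filter(c -> f.filterCell(c) == ReturnCode.INCLUDE)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Filter prefix = c -> c.startsWith("row1") ? ReturnCode.INCLUDE : ReturnCode.SKIP;
        System.out.println(scan(List.of("row1:a", "row2:b", "row1:c"), prefix));
        // [row1:a, row1:c]
    }
}
```

The real `ReturnCode` has more states (e.g. skip the rest of a row or seek ahead), which is what lets filters prune whole key ranges instead of rejecting cells one by one.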

[09/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDecodingContext.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDecodingContext.html
 
b/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDecodingContext.html
index 2df0d13..a694809 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDecodingContext.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDecodingContext.html
@@ -166,27 +166,27 @@
 
 
 DataBlockEncoder.EncodedSeeker
-RowIndexCodecV1.createSeeker(CellComparatorcomparator,
+CopyKeyDataBlockEncoder.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
 DataBlockEncoder.EncodedSeeker
-CopyKeyDataBlockEncoder.createSeeker(CellComparatorcomparator,
+PrefixKeyDeltaEncoder.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
 DataBlockEncoder.EncodedSeeker
-DiffKeyDeltaEncoder.createSeeker(CellComparatorcomparator,
+FastDiffDeltaEncoder.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
 DataBlockEncoder.EncodedSeeker
-FastDiffDeltaEncoder.createSeeker(CellComparatorcomparator,
+DiffKeyDeltaEncoder.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
 DataBlockEncoder.EncodedSeeker
-PrefixKeyDeltaEncoder.createSeeker(CellComparatorcomparator,
+RowIndexCodecV1.createSeeker(CellComparatorcomparator,
 HFileBlockDecodingContextdecodingCtx)
 
 
@@ -198,13 +198,13 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBuffer
-RowIndexCodecV1.decodeKeyValues(http://docs.oracle.com/javase/8/docs/api/java/io/DataInputStream.html?is-external=true;
 title="class or interface in java.io">DataInputStreamsource,
-   HFileBlockDecodingContextdecodingCtx)
+BufferedDataBlockEncoder.decodeKeyValues(http://docs.oracle.com/javase/8/docs/api/java/io/DataInputStream.html?is-external=true;
 title="class or interface in java.io">DataInputStreamsource,
+   HFileBlockDecodingContextblkDecodingCtx)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBuffer
-BufferedDataBlockEncoder.decodeKeyValues(http://docs.oracle.com/javase/8/docs/api/java/io/DataInputStream.html?is-external=true;
 title="class or interface in java.io">DataInputStreamsource,
-   HFileBlockDecodingContextblkDecodingCtx)
+RowIndexCodecV1.decodeKeyValues(http://docs.oracle.com/javase/8/docs/api/java/io/DataInputStream.html?is-external=true;
 title="class or interface in java.io">DataInputStreamsource,
+   HFileBlockDecodingContextdecodingCtx)
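The `decodeKeyValues(DataInputStream, HFileBlockDecodingContext)` signatures above drain an encoded stream into a `ByteBuffer` for the caller. The plumbing of that shape, minus any real HFile decoding (a sketch only; the pass-through "decode" step is a placeholder):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class StreamToBuffer {
    static ByteBuffer decode(DataInputStream in) throws IOException {
        ByteBuffer out = ByteBuffer.allocate(in.available());
        while (out.hasRemaining()) {
            out.put(in.readByte()); // a real decoder would transform bytes here
        }
        out.flip(); // make the buffer readable by the caller
        return out;
    }

    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(new byte[] {1, 2, 3}));
        System.out.println(decode(in).remaining()); // 3
    }
}
```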
 
 
 
@@ -279,18 +279,18 @@
 
 
 HFileBlockDecodingContext
-HFileDataBlockEncoderImpl.newDataBlockDecodingContext(HFileContextfileContext)
-
-
-HFileBlockDecodingContext
 NoOpDataBlockEncoder.newDataBlockDecodingContext(HFileContextmeta)
 
-
+
 HFileBlockDecodingContext
 HFileDataBlockEncoder.newDataBlockDecodingContext(HFileContextfileContext)
 create a encoder specific decoding context for 
reading.
 
 
+
+HFileBlockDecodingContext
+HFileDataBlockEncoderImpl.newDataBlockDecodingContext(HFileContextfileContext)
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDefaultDecodingContext.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDefaultDecodingContext.html
 
b/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDefaultDecodingContext.html
index 9f3340f..337ccf5 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDefaultDecodingContext.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/io/encoding/class-use/HFileBlockDefaultDecodingContext.html
@@ -116,36 +116,36 @@
  HFileBlockDefaultDecodingContextdecodingCtx)
 
 
-protected http://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBuffer
-CopyKeyDataBlockEncoder.internalDecodeKeyValues(http://docs.oracle.com/javase/8/docs/api/java/io/DataInputStream.html?is-external=true;
 title="class or interface in java.io">DataInputStreamsource,
+protected abstract http://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBuffer
+BufferedDataBlockEncoder.internalDecodeKeyValues(http://docs.oracle.com/javase/8/docs/api/java/io/DataInputStream.html?is-external=true;
 title="class or interface in java.io">DataInputStreamsource,

[16/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/TableDescriptor.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/TableDescriptor.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/TableDescriptor.html
index a1ef4d1..94335f9 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/TableDescriptor.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/TableDescriptor.html
@@ -411,14 +411,14 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 TableDescriptor
-HTable.getDescriptor()
-
-
-TableDescriptor
 Table.getDescriptor()
 Gets the table descriptor for this table.

+
+TableDescriptor
+HTable.getDescriptor()
+

 TableDescriptor
 Admin.getDescriptor(TableName tableName)
@@ -467,17 +467,17 @@ Input/OutputFormats, a table indexing MapReduce job, and
 utility methods.


 CompletableFuture<TableDescriptor>
-AsyncAdmin.getTableDescriptor(TableName tableName)
-Method for getting the tableDescriptor
-
+AsyncHBaseAdmin.getTableDescriptor(TableName tableName)


 CompletableFuture<TableDescriptor>
-RawAsyncHBaseAdmin.getTableDescriptor(TableName tableName)
+AsyncAdmin.getTableDescriptor(TableName tableName)
+Method for getting the tableDescriptor
+


 CompletableFuture<TableDescriptor>
-AsyncHBaseAdmin.getTableDescriptor(TableName tableName)
+RawAsyncHBaseAdmin.getTableDescriptor(TableName tableName)


 private CompletableFuture<List<TableDescriptor>>
@@ -543,37 +543,37 @@ Input/OutputFormats, a table indexing MapReduce job, and
 utility methods.


 CompletableFuture<List<TableDescriptor>>
+AsyncHBaseAdmin.listTables(boolean includeSysTables)
+
+
+CompletableFuture<List<TableDescriptor>>
 AsyncAdmin.listTables(boolean includeSysTables)
 List all the tables.


 CompletableFuture<List<TableDescriptor>>
 RawAsyncHBaseAdmin.listTables(boolean includeSysTables)


 CompletableFuture<List<TableDescriptor>>
-AsyncHBaseAdmin.listTables(boolean includeSysTables)
+AsyncHBaseAdmin.listTables(Pattern pattern,
+  boolean includeSysTables)


 CompletableFuture<List<TableDescriptor>>
 AsyncAdmin.listTables(Pattern pattern,
   boolean includeSysTables)
 List all the tables matching the given pattern.


 CompletableFuture<List<TableDescriptor>>

[20/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocateType.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocateType.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocateType.html
index 4345522..aec63fb 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocateType.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocateType.html
@@ -106,7 +106,7 @@
 
 
 private RegionLocateType
-AsyncSingleRequestRpcRetryingCaller.locateType
+AsyncRpcRetryingCallerFactory.SingleRequestCallerBuilder.locateType
 
 
 RegionLocateType
@@ -114,7 +114,7 @@
 
 
 private RegionLocateType
-AsyncRpcRetryingCallerFactory.SingleRequestCallerBuilder.locateType
+AsyncSingleRequestRpcRetryingCaller.locateType
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocator.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocator.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocator.html
index 2dc7051..ef64eb1 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocator.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RegionLocator.html
@@ -208,14 +208,14 @@ service.
 
 
 private RegionLocator
-HFileOutputFormat2.TableInfo.regionLocator
-
-
-private RegionLocator
 TableInputFormatBase.regionLocator
 The RegionLocator of the table.
 
 
+
+private RegionLocator
+HFileOutputFormat2.TableInfo.regionLocator
+
 
 
 
@@ -226,15 +226,15 @@ service.
 
 
 
-RegionLocator
-HFileOutputFormat2.TableInfo.getRegionLocator()
-
-
 protected RegionLocator
 TableInputFormatBase.getRegionLocator()
 Allows subclasses to get the RegionLocator.
 
 
+
+RegionLocator
+HFileOutputFormat2.TableInfo.getRegionLocator()
+
 
 
 



[11/51] [partial] hbase-site git commit: Published site at .

2017-11-15 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/cba900e4/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html 
b/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
index fe4b719..d4a07ab 100644
--- a/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
+++ b/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
@@ -488,15 +488,15 @@ Input/OutputFormats, a table indexing MapReduce job, and
 utility methods.


 static Filter
-ColumnPrefixFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+SingleColumnValueExcludeFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
-ColumnCountGetFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+ValueFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
-RowFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+FamilyFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
@@ -506,63 +506,63 @@ Input/OutputFormats, a table indexing MapReduce job, and
 utility methods.


 static Filter
-FirstKeyOnlyFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+ColumnPrefixFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
-TimestampsFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+PageFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
-ValueFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+RowFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
-KeyOnlyFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+ColumnRangeFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
-FamilyFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+ColumnCountGetFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
-QualifierFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+MultipleColumnPrefixFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter
-ColumnRangeFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+ColumnPaginationFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)


 static Filter

[2/6] hbase git commit: HBASE-19223 Remove references to Date Tiered compaction from branch-1.1 ref guide

2017-11-15 Thread busbey
HBASE-19223 Remove references to Date Tiered compaction from branch-1.1 ref 
guide

Signed-off-by: Yu Li 
Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/540bf082
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/540bf082
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/540bf082

Branch: refs/heads/branch-1.1
Commit: 540bf082a24be02202ff3c15d94881b6861c4645
Parents: 434097d
Author: Sean Busbey 
Authored: Thu Nov 9 08:04:20 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 10:29:37 2017 -0600

--
 src/main/asciidoc/_chapters/architecture.adoc | 101 -
 src/main/asciidoc/_chapters/upgrading.adoc|  11 ++-
 2 files changed, 7 insertions(+), 105 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/540bf082/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index ebb0677..930fa60 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -2086,107 +2086,6 @@ Why?
 
 NOTE: This information is now included in the configuration parameter table in 
<>.
 
-[[ops.date.tiered]]
-= Date Tiered Compaction
-
-Date tiered compaction is a date-aware store file compaction strategy that is 
beneficial for time-range scans for time-series data.
-
-[[ops.date.tiered.when]]
-= When To Use Date Tiered Compactions
-
-Consider using Date Tiered Compaction for reads for limited time ranges, 
especially scans of recent data
-
-Don't use it for
-
-* random gets without a limited time range
-* frequent deletes and updates
-* Frequent out of order data writes creating long tails, especially writes 
with future timestamps
-* frequent bulk loads with heavily overlapping time ranges
-
-.Performance Improvements
-Performance testing has shown that the performance of time-range scans improve 
greatly for limited time ranges, especially scans of recent data.
-
-[[ops.date.tiered.enable]]
-== Enabling Date Tiered Compaction
-
-You can enable Date Tiered compaction for a table or a column family, by 
setting its `hbase.hstore.engine.class` to 
`org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine`.
-
-You also need to set `hbase.hstore.blockingStoreFiles` to a high number, such 
as 60, if using all default settings, rather than the default value of 12). Use 
1.5~2 x projected file count if changing the parameters, Projected file count = 
windows per tier x tier count + incoming window min + files older than max age
-
-You also need to set `hbase.hstore.compaction.max` to the same value as 
`hbase.hstore.blockingStoreFiles` to unblock major compaction.
-
-.Procedure: Enable Date Tiered Compaction
-. Run one of following commands in the HBase shell.
-  Replace the table name `orders_table` with the name of your table.
-+
-[source,sql]
-
-alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 
'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 
'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 
'hbase.hstore.compaction.max'=>'60'}
-alter 'orders_table', {NAME => 'blobs_cf', CONFIGURATION => 
{'hbase.hstore.engine.class' => 
'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 
'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 
'hbase.hstore.compaction.max'=>'60'}}
-create 'orders_table', 'blobs_cf', CONFIGURATION => 
{'hbase.hstore.engine.class' => 
'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 
'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 
'hbase.hstore.compaction.max'=>'60'}
-
-
-. Configure other options if needed.
-  See <> for more information.
-
-.Procedure: Disable Date Tiered Compaction
-. Set the `hbase.hstore.engine.class` option to either nil or 
`org.apache.hadoop.hbase.regionserver.DefaultStoreEngine`.
-  Either option has the same effect.
-  Make sure you set the other options you changed to the original settings too.
-+
-[source,sql]
-
-alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 
'org.apache.hadoop.hbase.regionserver.DefaultStoreEngine', 
'hbase.hstore.blockingStoreFiles' => '12', 'hbase.hstore.compaction.min'=>'6', 
'hbase.hstore.compaction.max'=>'12'}}
-
-
-When you change the store engine either way, a major compaction will likely be 
performed on most regions.
-This is not necessary on new tables.
-
-[[ops.date.tiered.config]]
-== Configuring Date Tiered Compaction
-
-Each of the settings for date tiered compaction should be configured 

[1/6] hbase git commit: HBASE-19223 Remove references to Date Tiered compaction from branch-1.2 ref guide

2017-11-15 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/branch-1.1 434097db0 -> 540bf082a
  refs/heads/branch-1.2 31677c0aa -> 041fbe71b
  refs/heads/branch-1.3 17f11ae6c -> 565527c60
  refs/heads/branch-1.4 846753c18 -> 9a075fe73
  refs/heads/branch-2 9c85d0017 -> fb79e9d4a
  refs/heads/master df98d6848 -> d89682ea9


HBASE-19223 Remove references to Date Tiered compaction from branch-1.2 ref 
guide

Signed-off-by: Yu Li 
Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/041fbe71
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/041fbe71
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/041fbe71

Branch: refs/heads/branch-1.2
Commit: 041fbe71b6429170576395b35603e07157acc585
Parents: 31677c0
Author: Sean Busbey 
Authored: Thu Nov 9 08:04:20 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 10:28:14 2017 -0600

--
 src/main/asciidoc/_chapters/architecture.adoc | 101 -
 src/main/asciidoc/_chapters/upgrading.adoc|  11 ++-
 2 files changed, 7 insertions(+), 105 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/041fbe71/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index 9f59cd5..6ab5f48 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -2060,107 +2060,6 @@ Why?
 
 NOTE: This information is now included in the configuration parameter table in 
<>.
 
-[[ops.date.tiered]]
-= Date Tiered Compaction
-
-Date tiered compaction is a date-aware store file compaction strategy that is 
beneficial for time-range scans for time-series data.
-
-[[ops.date.tiered.when]]
-= When To Use Date Tiered Compactions
-
-Consider using Date Tiered Compaction for reads for limited time ranges, 
especially scans of recent data
-
-Don't use it for
-
-* random gets without a limited time range
-* frequent deletes and updates
-* Frequent out of order data writes creating long tails, especially writes 
with future timestamps
-* frequent bulk loads with heavily overlapping time ranges
-
-.Performance Improvements
-Performance testing has shown that the performance of time-range scans improve 
greatly for limited time ranges, especially scans of recent data.
-
-[[ops.date.tiered.enable]]
-== Enabling Date Tiered Compaction
-
-You can enable Date Tiered compaction for a table or a column family, by 
setting its `hbase.hstore.engine.class` to 
`org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine`.
-
-You also need to set `hbase.hstore.blockingStoreFiles` to a high number, such 
as 60, if using all default settings, rather than the default value of 12). Use 
1.5~2 x projected file count if changing the parameters, Projected file count = 
windows per tier x tier count + incoming window min + files older than max age
-
-You also need to set `hbase.hstore.compaction.max` to the same value as 
`hbase.hstore.blockingStoreFiles` to unblock major compaction.
-
-.Procedure: Enable Date Tiered Compaction
-. Run one of following commands in the HBase shell.
-  Replace the table name `orders_table` with the name of your table.
-+
-[source,sql]
-
-alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 
'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 
'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 
'hbase.hstore.compaction.max'=>'60'}
-alter 'orders_table', {NAME => 'blobs_cf', CONFIGURATION => 
{'hbase.hstore.engine.class' => 
'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 
'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 
'hbase.hstore.compaction.max'=>'60'}}
-create 'orders_table', 'blobs_cf', CONFIGURATION => 
{'hbase.hstore.engine.class' => 
'org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine', 
'hbase.hstore.blockingStoreFiles' => '60', 'hbase.hstore.compaction.min'=>'2', 
'hbase.hstore.compaction.max'=>'60'}
-
-
-. Configure other options if needed.
-  See <> for more information.
-
-.Procedure: Disable Date Tiered Compaction
-. Set the `hbase.hstore.engine.class` option to either nil or 
`org.apache.hadoop.hbase.regionserver.DefaultStoreEngine`.
-  Either option has the same effect.
-  Make sure you set the other options you changed to the original settings too.
-+
-[source,sql]
-
-alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 
'org.apache.hadoop.hbase.regionserver.DefaultStoreEngine', 
'hbase.hstore.blockingStoreFiles' => '12', 'hbase.hstore.compaction.min'=>'6', 

[4/6] hbase git commit: HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for those upgrading from 0.98

2017-11-15 Thread busbey
HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for 
those upgrading from 0.98

Signed-off-by: Yu Li 
Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/fb79e9d4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/fb79e9d4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/fb79e9d4

Branch: refs/heads/branch-2
Commit: fb79e9d4a769334b2c3b4a0b26eda409ad0bcfd2
Parents: 9c85d00
Author: Sean Busbey 
Authored: Thu Nov 9 08:04:20 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 10:31:52 2017 -0600

--
 src/main/asciidoc/_chapters/upgrading.adoc | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/fb79e9d4/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 987e2a7..47f9192 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -325,16 +325,16 @@ Quitting...
 == Upgrade Paths
 
 [[upgrade1.0]]
-=== Upgrading from 0.98.x to 1.0.x
+=== Upgrading from 0.98.x to 1.x
 
-In this section we first note the significant changes that come in with 1.0.0 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
+In this section we first note the significant changes that come in with 1.0.0+ 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
 
  Changes of Note!
 
-In here we list important changes that are in 1.0.0 since 0.98.x., changes you 
should be aware that will go into effect once you upgrade.
+In here we list important changes that are in 1.0.0+ since 0.98.x, changes you should be aware of that will go into effect once you upgrade.
 
 [[zookeeper.3.4]]
-.ZooKeeper 3.4 is required in HBase 1.0.0
+.ZooKeeper 3.4 is required in HBase 1.0.0+
 See <>.
 
 [[default.ports.changed]]
@@ -363,6 +363,9 @@ to miss data. In particular, 0.98.11 defaults 
`hbase.client.scanner.max.result.s
 to 2 MB but other versions default to larger values. For this reason, be very 
careful
 using 0.98.11 servers with any other client version.
 
+.Availability of Date Tiered Compaction.
+The Date Tiered Compaction feature available as of 0.98.19 is available in the 
1.y release line starting in release 1.3.0. If you have enabled this feature 
for any tables you must upgrade to version 1.3.0 or later. If you attempt to 
use an earlier 1.y release, any tables configured to use date tiered compaction 
will fail to have their regions open.
+
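The compatibility rule stated in the added note above can be sketched as a tiny version check. This is a hypothetical helper with deliberately simplified version parsing, not part of HBase:

```python
def supports_date_tiered(version):
    """Return True if this HBase release ships Date Tiered Compaction.

    Per the note above: available from 0.98.19 in the 0.98.y line and
    from 1.3.0 in the 1.y line; assumed present in later major lines.
    """
    major, minor, patch = (int(p) for p in version.split(".")[:3])
    if (major, minor) == (0, 98):
        return patch >= 19
    if major == 1:
        return (minor, patch) >= (3, 0)
    return major >= 2

# Tables configured for date tiered compaction fail to open their
# regions on 1.y releases earlier than 1.3.0.
print(supports_date_tiered("1.2.6"), supports_date_tiered("1.3.0"))
```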
 [[upgrade1.0.rolling.upgrade]]
  Rolling upgrade from 0.98.x to HBase 1.0.0
 .From 0.96.x to 1.0.0



[3/6] hbase git commit: HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for those upgrading from 0.98

2017-11-15 Thread busbey
HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for 
those upgrading from 0.98

Signed-off-by: Yu Li 
Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d89682ea
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d89682ea
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d89682ea

Branch: refs/heads/master
Commit: d89682ea983d7c903a751583251880aaa894684c
Parents: df98d68
Author: Sean Busbey 
Authored: Thu Nov 9 08:04:20 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 10:30:47 2017 -0600

--
 src/main/asciidoc/_chapters/upgrading.adoc | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d89682ea/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 9842ebd..fd8a86a 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -325,16 +325,16 @@ Quitting...
 == Upgrade Paths
 
 [[upgrade1.0]]
-=== Upgrading from 0.98.x to 1.0.x
+=== Upgrading from 0.98.x to 1.x
 
-In this section we first note the significant changes that come in with 1.0.0 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
+In this section we first note the significant changes that come in with 1.0.0+ 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
 
  Changes of Note!
 
-In here we list important changes that are in 1.0.0 since 0.98.x., changes you 
should be aware that will go into effect once you upgrade.
+In here we list important changes that are in 1.0.0+ since 0.98.x, changes you should be aware of that will go into effect once you upgrade.
 
 [[zookeeper.3.4]]
-.ZooKeeper 3.4 is required in HBase 1.0.0
+.ZooKeeper 3.4 is required in HBase 1.0.0+
 See <>.
 
 [[default.ports.changed]]
@@ -363,6 +363,9 @@ to miss data. In particular, 0.98.11 defaults 
`hbase.client.scanner.max.result.s
 to 2 MB but other versions default to larger values. For this reason, be very 
careful
 using 0.98.11 servers with any other client version.
 
+.Availability of Date Tiered Compaction.
+The Date Tiered Compaction feature available as of 0.98.19 is available in the 
1.y release line starting in release 1.3.0. If you have enabled this feature 
for any tables you must upgrade to version 1.3.0 or later. If you attempt to 
use an earlier 1.y release, any tables configured to use date tiered compaction 
will fail to have their regions open.
+
 [[upgrade1.0.rolling.upgrade]]
  Rolling upgrade from 0.98.x to HBase 1.0.0
 .From 0.96.x to 1.0.0



[5/6] hbase git commit: HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for those upgrading from 0.98

2017-11-15 Thread busbey
HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for 
those upgrading from 0.98

Signed-off-by: Yu Li 
Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9a075fe7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9a075fe7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9a075fe7

Branch: refs/heads/branch-1.4
Commit: 9a075fe73ae8b39980391e73a08b47c56bbbe910
Parents: 846753c
Author: Sean Busbey 
Authored: Thu Nov 9 08:04:20 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 10:32:01 2017 -0600

--
 src/main/asciidoc/_chapters/upgrading.adoc | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9a075fe7/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 6b63833..9c02210 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -176,16 +176,16 @@ In the minor version-particular sections below, we call 
out where the versions a
 == Upgrade Paths
 
 [[upgrade1.0]]
-=== Upgrading from 0.98.x to 1.0.x
+=== Upgrading from 0.98.x to 1.x
 
-In this section we first note the significant changes that come in with 1.0.0 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
+In this section we first note the significant changes that come in with 1.0.0+ 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
 
  Changes of Note!
 
-In here we list important changes that are in 1.0.0 since 0.98.x., changes you 
should be aware that will go into effect once you upgrade.
+In here we list important changes that are in 1.0.0+ since 0.98.x, changes you should be aware of that will go into effect once you upgrade.
 
 [[zookeeper.3.4]]
-.ZooKeeper 3.4 is required in HBase 1.0.0
+.ZooKeeper 3.4 is required in HBase 1.0.0+
 See <>.
 
 [[default.ports.changed]]
@@ -204,6 +204,9 @@ See the release notes on the issue 
link:https://issues.apache.org/jira/browse/HB
 .Distributed Log Replay
 <> is off by default in HBase 1.0.0. Enabling it can 
make a big difference improving HBase MTTR. Enable this feature if you are 
doing a clean stop/start when you are upgrading. You cannot rolling upgrade to 
this feature (caveat if you are running on a version of HBase in excess of 
HBase 0.98.4 -- see 
link:https://issues.apache.org/jira/browse/HBASE-12577[HBASE-12577 Disable 
distributed log replay by default] for more).
 
+.Availability of Date Tiered Compaction.
+The Date Tiered Compaction feature available as of 0.98.19 is available in the 
1.y release line starting in release 1.3.0. If you have enabled this feature 
for any tables you must upgrade to version 1.3.0 or later. If you attempt to 
use an earlier 1.y release, any tables configured to use date tiered compaction 
will fail to have their regions open.
+
 [[upgrade1.0.rolling.upgrade]]
  Rolling upgrade from 0.98.x to HBase 1.0.0
 .From 0.96.x to 1.0.0



[6/6] hbase git commit: HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for those upgrading from 0.98

2017-11-15 Thread busbey
HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for 
those upgrading from 0.98

Signed-off-by: Yu Li 
Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/565527c6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/565527c6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/565527c6

Branch: refs/heads/branch-1.3
Commit: 565527c6076b1f46e6d588345eea756724279da8
Parents: 17f11ae
Author: Sean Busbey 
Authored: Thu Nov 9 08:04:20 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 10:36:30 2017 -0600

--
 src/main/asciidoc/_chapters/upgrading.adoc | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/565527c6/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 9552024..bbdbcb4 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -177,16 +177,16 @@ In the minor version-particular sections below, we call 
out where the versions a
 == Upgrade Paths
 
 [[upgrade1.0]]
-=== Upgrading from 0.98.x to 1.0.x
+=== Upgrading from 0.98.x to 1.x
 
-In this section we first note the significant changes that come in with 1.0.0 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
+In this section we first note the significant changes that come in with 1.0.0+ 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
 
 ==== Changes of Note!
 
-In here we list important changes that are in 1.0.0 since 0.98.x., changes you 
should be aware that will go into effect once you upgrade.
+In here we list important changes that are in 1.0.0+ since 0.98.x, changes 
you should be aware of that will go into effect once you upgrade.
 
 [[zookeeper.3.4]]
-.ZooKeeper 3.4 is required in HBase 1.0.0
+.ZooKeeper 3.4 is required in HBase 1.0.0+
 See <>.
 
 [[default.ports.changed]]
@@ -219,6 +219,9 @@ to miss data. In particular, 0.98.11 defaults 
`hbase.client.scanner.max.result.s
 to 2 MB but other versions default to larger values. For this reason, be very 
careful
 using 0.98.11 servers with any other client version.
 
+.Availability of Date Tiered Compaction.
+The Date Tiered Compaction feature available as of 0.98.19 is available in the 
1.y release line starting in release 1.3.0. If you have enabled this feature 
for any tables you must upgrade to version 1.3.0 or later. If you attempt to 
use an earlier 1.y release, any tables configured to use date tiered compaction 
will fail to have their regions open.
+
 [[upgrade1.0.rolling.upgrade]]
 ==== Rolling upgrade from 0.98.x to HBase 1.0.0
 .From 0.96.x to 1.0.0



hbase git commit: HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for those upgrading from 0.98

2017-11-15 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/branch-1 641e797e0 -> 1a8ae5c1e


HBASE-19223 Note availability of Date Tiered Compaction in 1.y release for 
those upgrading from 0.98

Signed-off-by: Yu Li 
Signed-off-by: Mike Drob 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1a8ae5c1
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1a8ae5c1
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1a8ae5c1

Branch: refs/heads/branch-1
Commit: 1a8ae5c1ece1636cfb2955e70926d582b5fc18cc
Parents: 641e797
Author: Sean Busbey 
Authored: Thu Nov 9 08:04:20 2017 -0600
Committer: Sean Busbey 
Committed: Wed Nov 15 10:37:53 2017 -0600

--
 src/main/asciidoc/_chapters/upgrading.adoc | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/1a8ae5c1/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 6b63833..9c02210 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -176,16 +176,16 @@ In the minor version-particular sections below, we call 
out where the versions a
 == Upgrade Paths
 
 [[upgrade1.0]]
-=== Upgrading from 0.98.x to 1.0.x
+=== Upgrading from 0.98.x to 1.x
 
-In this section we first note the significant changes that come in with 1.0.0 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
+In this section we first note the significant changes that come in with 1.0.0+ 
HBase and then we go over the upgrade process. Be sure to read the significant 
changes section with care so you avoid surprises.
 
 ==== Changes of Note!
 
-In here we list important changes that are in 1.0.0 since 0.98.x., changes you 
should be aware that will go into effect once you upgrade.
+In here we list important changes that are in 1.0.0+ since 0.98.x, changes 
you should be aware of that will go into effect once you upgrade.
 
 [[zookeeper.3.4]]
-.ZooKeeper 3.4 is required in HBase 1.0.0
+.ZooKeeper 3.4 is required in HBase 1.0.0+
 See <>.
 
 [[default.ports.changed]]
@@ -204,6 +204,9 @@ See the release notes on the issue 
link:https://issues.apache.org/jira/browse/HB
 .Distributed Log Replay
 <> is off by default in HBase 1.0.0. Enabling it can 
make a big difference improving HBase MTTR. Enable this feature if you are 
doing a clean stop/start when you are upgrading. You cannot rolling upgrade to 
this feature (caveat if you are running on a version of HBase in excess of 
HBase 0.98.4 -- see 
link:https://issues.apache.org/jira/browse/HBASE-12577[HBASE-12577 Disable 
distributed log replay by default] for more).
 
+.Availability of Date Tiered Compaction.
+The Date Tiered Compaction feature available as of 0.98.19 is available in the 
1.y release line starting in release 1.3.0. If you have enabled this feature 
for any tables you must upgrade to version 1.3.0 or later. If you attempt to 
use an earlier 1.y release, any tables configured to use date tiered compaction 
will fail to have their regions open.
+
 [[upgrade1.0.rolling.upgrade]]
 ==== Rolling upgrade from 0.98.x to HBase 1.0.0
 .From 0.96.x to 1.0.0