[hbase] branch branch-1 updated (c080923 -> 21f2edd)

2020-01-24 Thread janh
This is an automated email from the ASF dual-hosted git repository.

janh pushed a change to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from c080923  HBASE-23710 - Priority configuration for system coprocessor
 add 21f2edd  HBASE-23627 Resolved remaining Checkstyle violations in hbase-thrift

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hbase/thrift/IncrementCoalescer.java| 88 +-
 1 file changed, 36 insertions(+), 52 deletions(-)



[hbase-site] branch asf-site updated: INFRA-10751 Empty commit

2020-01-24 Thread git-site-role
This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hbase-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 1485d41  INFRA-10751 Empty commit
1485d41 is described below

commit 1485d41af685ec1bbea6a5b07ec82f458688aa7b
Author: jenkins 
AuthorDate: Fri Jan 24 14:45:08 2020 +

INFRA-10751 Empty commit



[hbase] branch branch-2.2 updated: HBASE-23683 Make HBaseInterClusterReplicationEndpoint more extensible… (#1047)

2020-01-24 Thread wchevreuil
This is an automated email from the ASF dual-hosted git repository.

wchevreuil pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 877564c  HBASE-23683 Make HBaseInterClusterReplicationEndpoint more extensible… (#1047)
877564c is described below

commit 877564c1aa77ca4389e80b6c28b8517709752408
Author: Wellington Ramos Chevreuil 
AuthorDate: Wed Jan 22 09:19:14 2020 +

HBASE-23683 Make HBaseInterClusterReplicationEndpoint more extensible… (#1047)

Signed-off-by: Bharath Vissapragada 
Signed-off-by: binlijin 
(cherry picked from commit 62e340901fa60afeb164a1ff22e6092483b0ac48)
---
 .../HBaseInterClusterReplicationEndpoint.java  | 29 --
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java
index cc9d90e..1c1e053 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java
@@ -64,8 +64,10 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
 import org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
 
+
 import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService.BlockingInterface;
 
 /**
@@ -114,6 +116,25 @@ public class HBaseInterClusterReplicationEndpoint extends HBaseReplicationEndpoint
   private boolean dropOnDeletedTables;
   private boolean isSerial = false;
 
+  /*
+   * Some implementations of HBaseInterClusterReplicationEndpoint may require instantiating
+   * different Connection implementations, or initialize it in a different way,
+   * so defining createConnection as protected for possible overridings.
+   */
+  protected Connection createConnection(Configuration conf) throws IOException {
+    return ConnectionFactory.createConnection(conf);
+  }
+
+  /*
+   * Some implementations of HBaseInterClusterReplicationEndpoint may require instantiating
+   * different ReplicationSinkManager implementations, or initialize it in a different way,
+   * so defining createReplicationSinkManager as protected for possible overridings.
+   */
+  protected ReplicationSinkManager createReplicationSinkManager(Connection conn) {
+    return new ReplicationSinkManager((ClusterConnection) conn, this.ctx.getPeerId(),
+      this, this.conf);
+  }
+
   @Override
   public void init(Context context) throws IOException {
     super.init(context);
@@ -133,12 +154,16 @@ public class HBaseInterClusterReplicationEndpoint extends HBaseReplicationEndpoint
     // TODO: This connection is replication specific or we should make it particular to
     // replication and make replication specific settings such as compression or codec to use
     // passing Cells.
-    this.conn = (ClusterConnection) ConnectionFactory.createConnection(this.conf);
+    Connection connection = createConnection(this.conf);
+    //Since createConnection method may be overridden by extending classes, we need to make sure
+    //it's indeed returning a ClusterConnection instance.
+    Preconditions.checkState(connection instanceof ClusterConnection);
+    this.conn = (ClusterConnection) connection;
     this.sleepForRetries =
         this.conf.getLong("replication.source.sleepforretries", 1000);
     this.metrics = context.getMetrics();
     // ReplicationQueueInfo parses the peerId out of the znode for us
-    this.replicationSinkMgr = new ReplicationSinkManager(conn, ctx.getPeerId(), this, this.conf);
+    this.replicationSinkMgr = createReplicationSinkManager(conn);
     // per sink thread pool
     this.maxThreads = this.conf.getInt(HConstants.REPLICATION_SOURCE_MAXTHREADS_KEY,
       HConstants.REPLICATION_SOURCE_MAXTHREADS_DEFAULT);
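The pattern in the hunk above — protected factory methods plus a state check in init() — can be sketched in isolation. The following is a minimal, self-contained model of that extension point using stand-in String types, not the real HBase Connection/ReplicationSinkManager classes:

```java
// Model of the extension pattern from HBASE-23683: a base class exposes a
// protected factory method so subclasses can swap in their own connection
// implementation, while init() keeps a sanity check on the returned value.
// All class names here are illustrative stand-ins, not real HBase types.
class EndpointModel {
  protected String createConnection(String conf) {
    return "default-connection:" + conf; // default behavior
  }

  final String init(String conf) {
    String conn = createConnection(conf);
    // mirrors the Preconditions.checkState guard in the patch: since the
    // factory method may be overridden, verify the result is still usable
    if (conn == null || !conn.endsWith(conf)) {
      throw new IllegalStateException("createConnection returned an unusable value");
    }
    return conn;
  }
}

class CustomEndpointModel extends EndpointModel {
  @Override
  protected String createConnection(String conf) {
    return "custom-connection:" + conf; // subclass swaps the implementation
  }
}
```

The design choice is a classic template method: callers always go through init(), so every override of the factory method still passes the base class's invariant check.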



[hbase] 02/06: HBASE-23281: Track meta region locations in masters (#830)

2020-01-24 Thread ndimiduk
This is an automated email from the ASF dual-hosted git repository.

ndimiduk pushed a commit to branch HBASE-18095/client-locate-meta-no-zookeeper
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 8571d389cfe7bb18dafad82ca011e78390a21061
Author: Bharath Vissapragada 
AuthorDate: Wed Dec 4 15:26:58 2019 -0800

HBASE-23281: Track meta region locations in masters (#830)

* HBASE-23281: Track meta region changes on masters

This patch adds a simple cache that tracks the meta region replica
locations. It keeps an eye on the region movements so that the
cached locations are not stale.

This information is used for servicing client RPCs for connections
that use master based registry (HBASE-18095). The RPC end points
will be added in a separate patch.

Signed-off-by: Nick Dimiduk 
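The caching idea described in the commit message can be sketched as follows. This is an illustrative, self-contained model only: the real MetaRegionLocationCache watches ZooKeeper znodes for region moves, whereas this stand-in takes update calls directly.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Minimal model of the idea behind MetaRegionLocationCache: remember the
// latest known server per meta replica, and overwrite entries as region-move
// events arrive so reads never hand out stale locations. Names are
// illustrative; server names are plain Strings instead of ServerName.
class MetaLocationCacheModel {
  private final Map<Integer, String> locations = new ConcurrentHashMap<>();

  // invoked whenever a meta replica is observed to (re)open on a server
  void onRegionMoved(int replicaId, String serverName) {
    locations.put(replicaId, serverName);
  }

  // empty until the first event for this replica has been seen
  Optional<String> getLocation(int replicaId) {
    return Optional.ofNullable(locations.get(replicaId));
  }
}
```

Because the map is only ever overwritten with the newest observation, readers such as RPC handlers can consult it lock-free.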
---
 .../hadoop/hbase/shaded/protobuf/ProtobufUtil.java |  42 +++-
 .../apache/hadoop/hbase/zookeeper/ZNodePaths.java  |  19 +-
 .../org/apache/hadoop/hbase/master/HMaster.java|  18 +-
 .../hbase/master/MetaRegionLocationCache.java  | 249 +
 .../hbase/client/TestMetaRegionLocationCache.java  | 186 +++
 .../hbase/master/TestCloseAnOpeningRegion.java |   5 +-
 .../hbase/master/TestClusterRestartFailover.java   |   3 +-
 .../master/TestRegionsRecoveryConfigManager.java   |   5 +-
 .../hbase/master/TestShutdownBackupMaster.java |   3 +-
 .../assignment/TestOpenRegionProcedureBackoff.java |   3 +-
 .../assignment/TestOpenRegionProcedureHang.java|   2 +-
 .../TestRegionAssignedToMultipleRegionServers.java |   3 +-
 .../assignment/TestReportOnlineRegionsRace.java|   3 +-
 ...tReportRegionStateTransitionFromDeadServer.java |   3 +-
 .../TestReportRegionStateTransitionRetry.java  |   3 +-
 .../master/assignment/TestSCPGetRegionsRace.java   |   3 +-
 .../assignment/TestWakeUpUnexpectedProcedure.java  |   3 +-
 .../TestRegisterPeerWorkerWhenRestarting.java  |   3 +-
 .../hadoop/hbase/protobuf/TestProtobufUtil.java|  36 ++-
 .../TestRegionServerReportForDuty.java |   2 +-
 .../replication/TestReplicationProcedureRetry.java |   3 +-
 .../hadoop/hbase/zookeeper/MetaTableLocator.java   |  36 +--
 .../apache/hadoop/hbase/zookeeper/ZKWatcher.java   |  37 ++-
 23 files changed, 586 insertions(+), 84 deletions(-)

diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
index 5a71917..2adcea9 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -80,6 +80,7 @@ import org.apache.hadoop.hbase.client.PackagePrivateFieldAccessor;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.RegionInfoBuilder;
 import org.apache.hadoop.hbase.client.RegionLoadStats;
+import org.apache.hadoop.hbase.client.RegionReplicaUtil;
 import org.apache.hadoop.hbase.client.RegionStatesCount;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.Scan;
@@ -93,6 +94,7 @@ import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.filter.ByteArrayComparable;
 import org.apache.hadoop.hbase.filter.Filter;
 import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.master.RegionState;
 import org.apache.hadoop.hbase.protobuf.ProtobufMagic;
 import org.apache.hadoop.hbase.protobuf.ProtobufMessageConverter;
 import org.apache.hadoop.hbase.quotas.QuotaScope;
@@ -3068,6 +3070,44 @@ public final class ProtobufUtil {
   }
 
   /**
+   * Get the Meta region state from the passed data bytes. Can handle both old and new style
+   * server names.
+   * @param data protobuf serialized data with meta server name.
+   * @param replicaId replica ID for this region
+   * @return RegionState instance corresponding to the serialized data.
+   * @throws DeserializationException if the data is invalid.
+   */
+  public static RegionState parseMetaRegionStateFrom(final byte[] data, int replicaId)
+      throws DeserializationException {
+    RegionState.State state = RegionState.State.OPEN;
+    ServerName serverName;
+    if (data != null && data.length > 0 && ProtobufUtil.isPBMagicPrefix(data)) {
+      try {
+        int prefixLen = ProtobufUtil.lengthOfPBMagic();
+        ZooKeeperProtos.MetaRegionServer rl =
+            ZooKeeperProtos.MetaRegionServer.parser().parseFrom(data, prefixLen, data.length - prefixLen);
+        if (rl.hasState()) {
+          state = RegionState.State.convert(rl.getState());
+        }
+        HBaseProtos.ServerN

[hbase] branch HBASE-18095/client-locate-meta-no-zookeeper updated (d9bb034 -> 62da419)

2020-01-24 Thread ndimiduk
This is an automated email from the ASF dual-hosted git repository.

ndimiduk pushed a change to branch HBASE-18095/client-locate-meta-no-zookeeper
in repository https://gitbox.apache.org/repos/asf/hbase.git.


omit d9bb034  HBASE-23305: Master based registry implementation (#954)
omit 49fb451  HBASE-23648: Re-use underlying connection registry in RawAsyncHBaseAdmin (#994)
omit e2a9f11  HBASE-23604: Clarify AsyncRegistry usage in the code. (#957)
omit bc891a8  HBASE-23304: RPCs needed for client meta information lookup (#904)
omit 8d4314f  HBASE-23281: Track meta region locations in masters (#830)
omit 9c42b6a  HBASE-23275: Track active master's address in ActiveMasterManager (#812)
 add 8b7b097  HBASE-23687 DEBUG logging cleanup (#1040)
 add 4e60583  HBASE-23689: Bookmark for github PR to jira redirection (#1042)
 add 8cd6410  HBASE-23688 Update docs for setting up IntelliJ as a development environment (#1041)
 add a075d6a  HBASE-23569 : Validate that all default chores of HMaster are scheduled
 add c1ba3bf  HBASE-23691 Add 2.2.3 to download page (#1045)
 add d60ce17  fix 500/NPE of region.jsp (#1033)
 add cb78b10  HBASE-23683 Make HBaseInterClusterReplicationEndpoint more extensible (#1027)
 add fd05aab  HBASE-23665: Split unit tests from TestTableName into a separate test-only class. (#1032)
 add ceaeece  Revert "fix 500/NPE of region.jsp (#1033)"
 add a44f3b5  HBASE-23677 fix 500/NPE of region.jsp (#1033)
 add 278d9fd  HBASE-23674 Too many rit page Numbers show confusion
 add 19d3bed  HBASE-23694 After RegionProcedureStore completes migration of WALProcedureStore, still running WALProcedureStore.syncThread keeps trying to delete now inexistent log files. (#1048)
 add 0321f56  HBASE-23652 Move the unsupported procedure type check before migrating to RegionProcedureStore (#1018)
 add edc5368  HBASE-23695 Fail gracefully if no category is present
 add 04d789f  HBASE-23347 Allow custom authentication methods for RPCs
 add 00fc467  HBASE-23653 Expose content of meta table in web ui (#1020)
 add 3b64ea5  HBASE-23569 : Validate that all default chores of HRegionServer are scheduled (ADDENDUM)
 add c4395b5  HBASE-23703 Add HBase 2.2.3 documentation to website (#1059)
 add df8f80a  HBASE-23701 Try to converge automated checks around Category
 add 9e43231  HBASE-23690 Checkstyle plugin complains about our checkstyle.xml format; doc how to resolve mismatched version (#1044)
 add 5480493  HBASE-23612 Add new profile to make hbase build success on ARM (#959)
 add 70c8a5d  HBASE-23700 Upgrade checkstyle and plugin versions (#1056)
 add 65bcf55  HBASE-23653 Expose content of meta table in web ui; addendum (#1061)
 add 167892c  HBASE-23680 RegionProcedureStore missing cleaning of hfile archive (#1022)
 add 75b8501  HBASE-23661 Reduced number of Checkstyle violations in hbase-rest
 add 50e2644  HBASE-23686 Revert binary incompatible change in ByteRangeUtils and removed reflections in CommonFSUtils
 add 569ac12  HBASE-23156 start-hbase.sh failed with ClassNotFoundException when build with hadoop3 (#1067)
 add 00e64d8  HBASE-23347 Allow custom authentication methods for RPCs; addendum (#1060)
 add ba3463d  HBASE-23055 Alter hbase:meta (#1043)
 add 2ed81c6  HBASE-20516 Offheap read-path needs more detail (#1081)
 add bb56dfa  HBASE-23711 - Add test for MinVersions and KeepDeletedCells TTL (#1079)
 add ae6a2de  HBASE-23709 Unwrap the real user to properly dispatch proxy-user auth'n
 add 11b7ecb  HBASE-23719 Add 1.5.0 release to Downloads (#1083)
 add a58f2a4  HBASE-23720 [create-release] Update yetus version used from 0.11.0 to 0.11.1
 add 6cdc4b1  HBASE-23705 Add CellComparator to HFileContext (#1062)
 add d6ac8b3  HBASE-23715 MasterFileSystem should not create MasterProcWALs dir on … (#1078)
 add 988d347  HBASE-23069 periodic dependency bump for Sep 2019 (#1082)
 add 3738578  HBASE-21065 Try ROW_INDEX_V1 encoding on meta table (fix bloomfilters… (#1012)
 add 0da0825  HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM
 add fd9e19c  HBASE-23722 Real user might be null in non-proxy-user case
 add 44e66fc  HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM  Remove staging repo added by mistake.
 add 7c61c39  HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM  AND undo thirdparty testing version update.
 add 2d6bb81  HBASE-23710 - Priority configuration for system coprocessors (#1077)
 add eda5df7  HBASE-23729 [Flakeys] TestRSGroupsBasics#testClearNotProcessedDeadServer fails most of the time
 new efebb84  HBASE-23275: Track active master's address in ActiveMasterManager (#812)
 new 8571d38  HBASE-23281: Track meta region locations in masters (#830)
 new 4f8fbba  HBASE-23304: RPCs needed for client meta information lookup

[hbase] 05/06: HBASE-23648: Re-use underlying connection registry in RawAsyncHBaseAdmin (#994)

2020-01-24 Thread ndimiduk
This is an automated email from the ASF dual-hosted git repository.

ndimiduk pushed a commit to branch HBASE-18095/client-locate-meta-no-zookeeper
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 07c38260f51ac2ef09008980a95e052ba0a1c22b
Author: Bharath Vissapragada 
AuthorDate: Thu Jan 9 12:27:09 2020 -0800

HBASE-23648: Re-use underlying connection registry in RawAsyncHBaseAdmin (#994)

* HBASE-23648: Re-use underlying connection registry in RawAsyncHBaseAdmin

No need to create and close a new registry on demand. Audited other
usages of getRegistry() and the code looks fine.

* Fix checkstyle issues in RawAsyncHBaseAdmin
---
 .../hadoop/hbase/client/RawAsyncHBaseAdmin.java| 110 +
 1 file changed, 47 insertions(+), 63 deletions(-)

diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
index 3e5bea3..69bd611 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -21,7 +21,6 @@ import static org.apache.hadoop.hbase.HConstants.HIGH_QOS;
 import static org.apache.hadoop.hbase.TableName.META_TABLE_NAME;
 import static org.apache.hadoop.hbase.util.FutureUtils.addListener;
import static org.apache.hadoop.hbase.util.FutureUtils.unwrapCompletionException;
-
 import com.google.protobuf.Message;
 import com.google.protobuf.RpcChannel;
 import java.io.IOException;
@@ -46,7 +45,6 @@ import java.util.function.Supplier;
 import java.util.regex.Pattern;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
-import org.apache.commons.io.IOUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.AsyncMetaTableAccessor;
 import org.apache.hadoop.hbase.CacheEvictionStats;
@@ -99,14 +97,12 @@ import org.apache.hadoop.hbase.util.ForeignExceptionUtil;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
 import org.apache.hbase.thirdparty.com.google.protobuf.RpcCallback;
 import org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer;
 import org.apache.hbase.thirdparty.io.netty.util.Timeout;
 import org.apache.hbase.thirdparty.io.netty.util.TimerTask;
-
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos;
@@ -755,7 +751,8 @@ class RawAsyncHBaseAdmin implements AsyncAdmin {
   }
 
   @Override
-  public CompletableFuture<Void> addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) {
+  public CompletableFuture<Void> addColumnFamily(
+      TableName tableName, ColumnFamilyDescriptor columnFamily) {
     return this. procedureCall(tableName,
       RequestConverter.buildAddColumnRequest(tableName, columnFamily, ng.getNonceGroup(),
         ng.newNonce()), (s, c, req, done) -> s.addColumn(c, req, done), (resp) -> resp.getProcId(),
@@ -809,10 +806,10 @@ class RawAsyncHBaseAdmin implements AsyncAdmin {
 . newMasterCaller()
 .action(
   (controller, stub) -> this
-  . call(
-controller, stub, 
RequestConverter.buildGetNamespaceDescriptorRequest(name), (s, c,
-req, done) -> s.getNamespaceDescriptor(c, req, done), 
(resp) -> ProtobufUtil
-
.toNamespaceDescriptor(resp.getNamespaceDescriptor(.call();
+  .
+  call(controller, stub, 
RequestConverter.buildGetNamespaceDescriptorRequest(name),
+(s, c, req, done) -> s.getNamespaceDescriptor(c, req, 
done), (resp)
+  -> 
ProtobufUtil.toNamespaceDescriptor(resp.getNamespaceDescriptor(.call();
   }
 
   @Override
@@ -830,13 +827,12 @@ class RawAsyncHBaseAdmin implements AsyncAdmin {
   @Override
  public CompletableFuture<List<NamespaceDescriptor>> listNamespaceDescriptors() {
 return this
-.> newMasterCaller()
-.action(
-  (controller, stub) -> this
-  .> call(
-controller, stub, 
ListNamespaceDescriptorsRequest.newBuilder().build(), (s, c, req,
-done) -> s.listNamespaceDescriptors(c, req, done), (resp) 
-> ProtobufUtil
-.toNamespaceDescriptorList(resp))).call();
+.> newMasterCaller().action((controller, 
stub) -> this
+  .> call(controller, stub,
+  ListName

[hbase] 06/06: HBASE-23305: Master based registry implementation (#954)

2020-01-24 Thread ndimiduk
This is an automated email from the ASF dual-hosted git repository.

ndimiduk pushed a commit to branch HBASE-18095/client-locate-meta-no-zookeeper
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 62da419b23f4409837c57b61c42db18b90d9d259
Author: Bharath Vissapragada 
AuthorDate: Tue Jan 14 08:24:07 2020 -0800

HBASE-23305: Master based registry implementation (#954)

Implements a master based registry for clients.

 - Supports hedged RPCs (fan out configured via configs).
 - Parameterized existing client tests to run with multiple registry combinations.
 - Added unit-test coverage for the new registry implementation.

Signed-off-by: Nick Dimiduk 
Signed-off-by: stack 
Signed-off-by: Andrew Purtell 
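The hedged-RPC idea from the bullet list above can be sketched with plain CompletableFutures: fire the same request at several masters concurrently and let the first successful reply win. All names below are illustrative stand-ins; the real implementation (HedgedRpcChannel) also aggregates failures and caps fan-out via configuration, which this sketch omits.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of a hedged call: one shared result future, one async attempt per
// master, first successful completion wins. If every attempt fails, this
// toy version never completes (the real code propagates an aggregate error).
class HedgedCallModel {
  static CompletableFuture<String> hedgedCall(List<String> masters) {
    CompletableFuture<String> result = new CompletableFuture<>();
    for (String master : masters) {
      CompletableFuture
          .supplyAsync(() -> callMaster(master))
          .thenAccept(result::complete); // no-op after the first completion
    }
    return result;
  }

  // stand-in for a real RPC; in this toy setup only "m2" is reachable
  static String callMaster(String master) {
    if (!master.equals("m2")) {
      throw new RuntimeException("unreachable: " + master);
    }
    return "cluster-id-from-" + master;
  }
}
```

The trade-off hedging makes is extra load on the masters in exchange for the latency of the fastest responder, which is why the fan-out is configurable.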
---
 .../hbase/client/ConnectionRegistryFactory.java|   4 +-
 .../apache/hadoop/hbase/client/MasterRegistry.java | 226 +++
 .../MasterRegistryFetchException.java} |  32 +-
 .../apache/hadoop/hbase/ipc/AbstractRpcClient.java |  54 +--
 .../apache/hadoop/hbase/ipc/BlockingRpcClient.java |   7 +-
 .../apache/hadoop/hbase/ipc/HedgedRpcChannel.java  | 274 ++
 .../apache/hadoop/hbase/ipc/NettyRpcClient.java|  34 +-
 .../org/apache/hadoop/hbase/ipc/RpcClient.java |  19 +-
 .../hbase/client/TestConnectionRegistryLeak.java   |   3 +-
 .../java/org/apache/hadoop/hbase/HConstants.java   |  20 +-
 .../apache/hadoop/hbase/util/PrettyPrinter.java|  23 +-
 .../org/apache/hadoop/hbase/TableNameTestRule.java |  16 +-
 .../apache/hadoop/hbase/util/JVMClusterUtil.java   |   7 +
 .../apache/hadoop/hbase/HBaseTestingUtility.java   |   3 +
 .../hbase/client/DummyConnectionRegistry.java  |   3 +-
 .../hadoop/hbase/client/TestFromClientSide.java| 420 +++--
 .../client/TestFromClientSideWithCoprocessor.java  |  23 +-
 .../hadoop/hbase/client/TestMasterRegistry.java| 125 ++
 .../hbase/client/TestScannersFromClientSide.java   | 136 ---
 .../apache/hadoop/hbase/ipc/AbstractTestIPC.java   | 120 +-
 .../hbase/ipc/TestProtobufRpcServiceImpl.java  |  25 +-
 21 files changed, 1241 insertions(+), 333 deletions(-)

diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryFactory.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryFactory.java
index 80d358b..9308443 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryFactory.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryFactory.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hbase.client;
 
+import static org.apache.hadoop.hbase.HConstants.CLIENT_CONNECTION_REGISTRY_IMPL_CONF_KEY;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.util.ReflectionUtils;
 import org.apache.yetus.audience.InterfaceAudience;
@@ -27,9 +28,6 @@ import org.apache.yetus.audience.InterfaceAudience;
 @InterfaceAudience.Private
 final class ConnectionRegistryFactory {
 
-  static final String CLIENT_CONNECTION_REGISTRY_IMPL_CONF_KEY =
-  "hbase.client.connection.registry.impl";
-
   private ConnectionRegistryFactory() {
   }
 
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterRegistry.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterRegistry.java
new file mode 100644
index 000..5680847
--- /dev/null
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterRegistry.java
@@ -0,0 +1,226 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static org.apache.hadoop.hbase.HConstants.MASTER_ADDRS_DEFAULT;
+import static org.apache.hadoop.hbase.HConstants.MASTER_ADDRS_KEY;
+import static org.apache.hadoop.hbase.HConstants.MASTER_REGISTRY_ENABLE_HEDGED_READS_DEFAULT;
+import static org.apache.hadoop.hbase.HConstants.MASTER_REGISTRY_ENABLE_HEDGED_READS_KEY;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import java.util.function.Function;
+import java.util.func

[hbase] 04/06: HBASE-23604: Clarify AsyncRegistry usage in the code. (#957)

2020-01-24 Thread ndimiduk
This is an automated email from the ASF dual-hosted git repository.

ndimiduk pushed a commit to branch HBASE-18095/client-locate-meta-no-zookeeper
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 12bb41eb2ca1871687b4c00ffc6b219f8dcc3b2b
Author: Bharath Vissapragada 
AuthorDate: Fri Jan 3 14:27:01 2020 -0800

HBASE-23604: Clarify AsyncRegistry usage in the code. (#957)

* HBASE-23604: Cleanup AsyncRegistry interface

- Cleans up the method names to make more sense and adds a little
more javadocs for context. In future patches we can revisit
the name of the actual class to make it more self explanatory.

- Does AsyncRegistry -> ConnectionRegistry rename.
"async" ness of the registry is kind of implicit based on
the interface contents and need not be reflected in the name.

Signed-off-by: Nick Dimiduk 
Signed-off-by: stack 
Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/client/AsyncConnectionImpl.java   | 11 ++-
 .../hbase/client/AsyncMetaRegionLocator.java   |  6 +++---
 .../hbase/client/AsyncTableRegionLocatorImpl.java  |  2 +-
 .../hadoop/hbase/client/ConnectionFactory.java |  2 +-
 ...{AsyncRegistry.java => ConnectionRegistry.java} | 13 +++--
 ...Factory.java => ConnectionRegistryFactory.java} | 22 --
 .../hadoop/hbase/client/RawAsyncHBaseAdmin.java| 17 +
 ...syncRegistry.java => ZKConnectionRegistry.java} | 18 +-
 ...istry.java => DoNothingConnectionRegistry.java} |  8 
 .../hbase/client/TestAsyncAdminRpcPriority.java|  2 +-
 .../client/TestAsyncMetaRegionLocatorFailFast.java | 10 +-
 .../hbase/client/TestAsyncTableRpcPriority.java|  2 +-
 ...ryLeak.java => TestConnectionRegistryLeak.java} | 14 +++---
 .../hbase/client/AsyncClusterConnectionImpl.java   |  4 ++--
 .../hbase/client/ClusterConnectionFactory.java |  2 +-
 .../example/TestZooKeeperTableArchiveClient.java   | 18 +++---
 .../hbase/client/AbstractTestRegionLocator.java|  3 ++-
 ...cRegistry.java => DummyConnectionRegistry.java} | 11 ++-
 .../hbase/client/RegionReplicaTestHelper.java  |  6 +++---
 .../client/TestAsyncAdminWithRegionReplicas.java   |  3 ++-
 .../hbase/client/TestAsyncMetaRegionLocator.java   |  4 ++--
 .../client/TestAsyncNonMetaRegionLocator.java  |  3 ++-
 ...stAsyncNonMetaRegionLocatorConcurrenyLimit.java |  3 ++-
 .../hbase/client/TestAsyncRegionLocator.java   |  3 ++-
 .../TestAsyncSingleRequestRpcRetryingCaller.java   |  3 ++-
 .../client/TestAsyncTableUseMetaReplicas.java  |  2 +-
 .../hbase/client/TestMetaRegionLocationCache.java  |  4 ++--
 ...Registry.java => TestZKConnectionRegistry.java} | 20 ++--
 .../regionserver/TestWALEntrySinkFilter.java   | 19 ---
 29 files changed, 124 insertions(+), 111 deletions(-)

diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
index 78fad9e..9d90249 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
@@ -85,7 +85,7 @@ class AsyncConnectionImpl implements AsyncConnection {
 
   private final User user;
 
-  final AsyncRegistry registry;
+  final ConnectionRegistry registry;
 
   private final int rpcTimeout;
 
@@ -122,7 +122,7 @@ class AsyncConnectionImpl implements AsyncConnection {
 
   private volatile ConnectionOverAsyncConnection conn;
 
-  public AsyncConnectionImpl(Configuration conf, AsyncRegistry registry, String clusterId,
+  public AsyncConnectionImpl(Configuration conf, ConnectionRegistry registry, String clusterId,
   SocketAddress localAddress, User user) {
 this.conf = conf;
 this.user = user;
@@ -136,7 +136,8 @@ class AsyncConnectionImpl implements AsyncConnection {
 } else {
   this.metrics = Optional.empty();
 }
-    this.rpcClient = RpcClientFactory.createClient(conf, clusterId, localAddress, metrics.orElse(null));
+    this.rpcClient = RpcClientFactory.createClient(
+        conf, clusterId, localAddress, metrics.orElse(null));
     this.rpcControllerFactory = RpcControllerFactory.instantiate(conf);
     this.hostnameCanChange = conf.getBoolean(RESOLVE_HOSTNAME_ON_FAIL_KEY, true);
 this.rpcTimeout =
@@ -257,7 +258,7 @@ class AsyncConnectionImpl implements AsyncConnection {
   CompletableFuture getMasterStub() {
     return ConnectionUtils.getOrFetch(masterStub, masterStubMakeFuture, false, () -> {
       CompletableFuture future = new CompletableFuture<>();
-  addListener(registry.getMasterAddress(), (addr, error) -> {
+  addListener(registry.getActiveMaster(), (addr, error) -> {
 if (error != null) {
   future.completeExceptionally(error);
 } else if (addr == null) {
@@

[hbase] 03/06: HBASE-23304: RPCs needed for client meta information lookup (#904)

2020-01-24 Thread ndimiduk
This is an automated email from the ASF dual-hosted git repository.

ndimiduk pushed a commit to branch HBASE-18095/client-locate-meta-no-zookeeper
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 4f8fbba0c01742f17fa2d85a4b944d7f42b7c2b1
Author: Bharath Vissapragada 
AuthorDate: Thu Dec 19 11:29:25 2019 -0800

HBASE-23304: RPCs needed for client meta information lookup (#904)

* HBASE-23304: RPCs needed for client meta information lookup

This patch implements the RPCs needed for the meta information
lookup during connection init. New tests added to cover the RPC
code paths. HBASE-23305 builds on this to implement the client
side logic.

Fixed a bunch of checkstyle nits around the places the patch
touches.

Signed-off-by: Andrew Purtell 
---
 .../hadoop/hbase/shaded/protobuf/ProtobufUtil.java |   4 +-
 .../src/main/protobuf/Master.proto |  44 ++
 .../hadoop/hbase/master/MasterRpcServices.java |  85 ---
 .../hbase/master/TestClientMetaServiceRPCs.java| 164 +
 4 files changed, 275 insertions(+), 22 deletions(-)

diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
index 2adcea9..23f5c08 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
@@ -376,7 +376,9 @@ public final class ProtobufUtil {
    * @see #toServerName(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ServerName)
    */
   public static HBaseProtos.ServerName toServerName(final ServerName serverName) {
-if (serverName == null) return null;
+if (serverName == null) {
+  return null;
+}
 HBaseProtos.ServerName.Builder builder =
   HBaseProtos.ServerName.newBuilder();
 builder.setHostName(serverName.getHostname());
diff --git a/hbase-protocol-shaded/src/main/protobuf/Master.proto b/hbase-protocol-shaded/src/main/protobuf/Master.proto
index 69377a6..e88ddc4 100644
--- a/hbase-protocol-shaded/src/main/protobuf/Master.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/Master.proto
@@ -1200,3 +1200,47 @@ service HbckService {
   rpc FixMeta(FixMetaRequest)
 returns(FixMetaResponse);
 }
+
+/** Request and response to get the clusterID for this cluster */
+message GetClusterIdRequest {
+}
+message GetClusterIdResponse {
+  /** Not set if cluster ID could not be determined. */
+  optional string cluster_id = 1;
+}
+
+/** Request and response to get the currently active master name for this cluster */
+message GetActiveMasterRequest {
+}
+message GetActiveMasterResponse {
+  /** Not set if an active master could not be determined. */
+  optional ServerName server_name = 1;
+}
+
+/** Request and response to get the current list of meta region locations */
+message GetMetaRegionLocationsRequest {
+}
+message GetMetaRegionLocationsResponse {
+  /** Not set if meta region locations could not be determined. */
+  repeated RegionLocation meta_locations = 1;
+}
+
+/**
+ * Implements all the RPCs needed by clients to look up cluster meta information needed for connection establishment.
+ */
+service ClientMetaService {
+  /**
+   * Get Cluster ID for this cluster.
+   */
+  rpc GetClusterId(GetClusterIdRequest) returns(GetClusterIdResponse);
+
+  /**
+   * Get active master server name for this cluster.
+   */
+  rpc GetActiveMaster(GetActiveMasterRequest) returns(GetActiveMasterResponse);
+
+  /**
+   * Get current meta replicas' region locations.
+   */
+  rpc GetMetaRegionLocations(GetMetaRegionLocationsRequest) returns(GetMetaRegionLocationsResponse);
+}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index 612c731..620c3a0 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.hbase.master;
 
 import static org.apache.hadoop.hbase.master.MasterWalManager.META_FILTER;
-
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.net.BindException;
@@ -30,6 +29,7 @@ import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.Set;
 import java.util.stream.Collectors;
 import org.apache.hadoop.conf.Configuration;
@@ -37,6 +37,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.ClusterMetricsBuilder;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.MetaTableAccessor;

[hbase] 01/06: HBASE-23275: Track active master's address in ActiveMasterManager (#812)

2020-01-24 Thread ndimiduk
This is an automated email from the ASF dual-hosted git repository.

ndimiduk pushed a commit to branch HBASE-18095/client-locate-meta-no-zookeeper
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit efebb843afe4458599e12cf3390fe534780fac4e
Author: Bharath Vissapragada 
AuthorDate: Wed Nov 20 11:41:36 2019 -0800

HBASE-23275: Track active master's address in ActiveMasterManager (#812)

* HBASE-23275: Track active master's address in ActiveMasterManager

Currently we just track whether an active master exists.
It helps to also track the address of the active master in
all the masters to help serve the client RPC requests to
know which master is active.

Signed-off-by: Nick Dimiduk 
Signed-off-by: Andrew Purtell 
---
 .../hadoop/hbase/master/ActiveMasterManager.java   | 63 +-
 .../org/apache/hadoop/hbase/master/HMaster.java|  4 ++
 .../hbase/master/TestActiveMasterManager.java  | 10 
 3 files changed, 64 insertions(+), 13 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
index 50798ed..99cab62 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
@@ -1,4 +1,4 @@
-/**
+/*
  *
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
@@ -17,25 +17,24 @@
  * limitations under the License.
  */
 package org.apache.hadoop.hbase.master;
-
 import java.io.IOException;
+import java.util.Optional;
 import java.util.concurrent.atomic.AtomicBoolean;
-
-import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker;
-import org.apache.hadoop.hbase.zookeeper.ZKUtil;
-import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
-import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
-import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.hadoop.hbase.Server;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.ZNodeClearer;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.monitoring.MonitoredTask;
-import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker;
 import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.zookeeper.KeeperException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 
 /**
  * Handles everything on master-side related to master election.
@@ -57,12 +56,18 @@ public class ActiveMasterManager extends ZKListener {
   final AtomicBoolean clusterHasActiveMaster = new AtomicBoolean(false);
   final AtomicBoolean clusterShutDown = new AtomicBoolean(false);
 
+  // This server's information.
   private final ServerName sn;
   private int infoPort;
   private final Server master;
 
+  // Active master's server name. Invalidated anytime active master changes (based on ZK
+  // notifications) and lazily fetched on-demand.
+  // ServerName is immutable, so we don't need heavy synchronization around it.
+  private volatile ServerName activeMasterServerName;
+
   /**
-   * @param watcher
+   * @param watcher ZK watcher
* @param sn ServerName
* @param master In an instance of a Master.
*/
@@ -107,6 +112,30 @@ public class ActiveMasterManager extends ZKListener {
   }
 
   /**
+   * Fetches the active master's ServerName from zookeeper.
+   */
+  private void fetchAndSetActiveMasterServerName() {
+LOG.debug("Attempting to fetch active master sn from zk");
+try {
+  activeMasterServerName = MasterAddressTracker.getMasterAddress(watcher);
+} catch (IOException | KeeperException e) {
+  // Log and ignore for now and re-fetch later if needed.
+  LOG.error("Error fetching active master information", e);
+}
+  }
+
+  public Optional<ServerName> getActiveMasterServerName() {
+if (!clusterHasActiveMaster.get()) {
+  return Optional.empty();
+}
+if (activeMasterServerName == null) {
+  fetchAndSetActiveMasterServerName();
+}
+// It could still be null, but return whatever we have.
+return Optional.ofNullable(activeMasterServerName);
+  }
+
+  /**
   * Handle a change in the master node.  Doesn't matter whether this was called
* from a nodeCreated or nodeDeleted event because there are no guarantees
* that the current state of the master node matches the event at the time of
@@ -134,6 +163,9 @@ public class ActiveMasterManager extends ZKListener {
   // Notify any thread waiting to

[hbase] branch branch-2 updated: HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new cfe569c  HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster
cfe569c is described below

commit cfe569cf6b544213188a276408e7197d697e6edf
Author: stack 
AuthorDate: Fri Jan 24 10:06:18 2020 -0800

HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster
---
 .../TestSplitTransactionOnCluster.java | 28 +++---
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
index 2fb822e..72b6835 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -84,6 +84,7 @@ import org.apache.hadoop.hbase.master.assignment.RegionStateNode;
 import org.apache.hadoop.hbase.master.assignment.RegionStates;
 import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
 import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionProgress;
 import org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController;
 import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.RegionServerTests;
@@ -360,10 +361,8 @@ public class TestSplitTransactionOnCluster {
 final TableName tableName = TableName.valueOf(name.getMethodName());
 
 // Create table then get the single region for our new table.
-    Table t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY);
-    List<HRegion> regions = cluster.getRegions(tableName);
-    RegionInfo hri = getAndCheckSingleTableRegion(regions);
-
+    Table t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY); List<HRegion> regions =
+      cluster.getRegions(tableName); RegionInfo hri = getAndCheckSingleTableRegion(regions);
 int tableRegionIndex = ensureTableRegionNotOnSameServerAsMeta(admin, hri);
 
 // Turn off balancer so it doesn't cut in and mess up our placements.
@@ -380,20 +379,31 @@ public class TestSplitTransactionOnCluster {
   admin.splitRegionAsync(hri.getRegionName()).get(2, TimeUnit.MINUTES);
   // Get daughters
       List<HRegion> daughters = checkAndGetDaughters(tableName);
-  HRegion daughterRegion = daughters.get(0);
   // Now split one of the daughters.
+  HRegion daughterRegion = daughters.get(0);
   RegionInfo daughter = daughterRegion.getRegionInfo();
   LOG.info("Daughter we are going to split: " + daughter);
-      // Compact first to ensure we have cleaned up references -- else the split
-      // will fail.
+      // Compact first to ensure we have cleaned up references -- else the split will fail.
+      // May be a compaction going already so compact will return immediately; if so, wait until
+      // compaction completes.
   daughterRegion.compact(true);
-  daughterRegion.getStores().get(0).closeAndArchiveCompactedFiles();
+  HStore store = daughterRegion.getStores().get(0);
+  CompactionProgress progress = store.getCompactionProgress();
+  if (progress != null) {
+while (progress.getProgressPct() < 1) {
+  LOG.info("Waiting {}", progress);
+  Threads.sleep(1000);
+}
+  }
+  store.closeAndArchiveCompactedFiles();
   for (int i = 0; i < 100; i++) {
 if (!daughterRegion.hasReferences()) {
+  LOG.info("Break -- no references in {}", daughterRegion);
   break;
 }
 Threads.sleep(100);
   }
+      LOG.info("Finished {} references={}", daughterRegion, daughterRegion.hasReferences());
       assertFalse("Waiting for reference to be compacted", daughterRegion.hasReferences());
       LOG.info("Daughter hri before split (has been compacted): " + daughter);
       admin.splitRegionAsync(daughter.getRegionName()).get(2, TimeUnit.MINUTES);



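[Editor's note] The deflake above polls `CompactionProgress` until the in-flight compaction finishes before archiving compacted files. The poll-until-done shape is generic; a stripped-down, self-contained sketch (simulated progress source, no HBase types — note it adds a deadline, which the test's own loop omits):

```java
// Poll a progress source until it reports completion or a deadline passes.
public final class WaitForCompaction {
  interface Progress {
    double getProgressPct(); // 1.0 means done, mirroring CompactionProgress
  }

  // Returns true if progress reached 100% before timeoutMs elapsed.
  static boolean awaitDone(Progress p, long timeoutMs, long pollMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (p.getProgressPct() < 1.0) {
      if (System.currentTimeMillis() > deadline) {
        return false; // gave up: compaction still running
      }
      try {
        Thread.sleep(pollMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // stop waiting politely on interrupt
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // Simulated compaction that reports done on the third poll.
    int[] ticks = {0};
    Progress p = () -> (++ticks[0] >= 3) ? 1.0 : 0.5;
    System.out.println(awaitDone(p, 5000, 10)); // true
  }
}
```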
[hbase] branch branch-2.2 updated (877564c -> dd8496a)

2020-01-24 Thread sakthi
This is an automated email from the ASF dual-hosted git repository.

sakthi pushed a change to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git.


 from 877564c  HBASE-23683 Make HBaseInterClusterReplicationEndpoint more extensible… (#1047)
  add dd8496a  HBASE-21345 [hbck2] Allow version check to proceed even though master is 'initializing'.

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java  | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)



[hbase] branch branch-2 updated (cfe569c -> 81cb4dd)

2020-01-24 Thread sakthi
This is an automated email from the ASF dual-hosted git repository.

sakthi pushed a change to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from cfe569c  HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster
  add 81cb4dd  HBASE-21345 [hbck2] Allow version check to proceed even though master is 'initializing'.

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java  | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)



[hbase] branch branch-2.2 updated (dd8496a -> 8e8b9b6)

2020-01-24 Thread sakthi
This is an automated email from the ASF dual-hosted git repository.

sakthi pushed a change to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git.


 from dd8496a  HBASE-21345 [hbck2] Allow version check to proceed even though master is 'initializing'.
 add 8e8b9b6  HBASE-23727 Port HBASE-20981 in 2.2 & 2.3

No new revisions were added by this update.

Summary of changes:
 .../hbase/procedure2/StateMachineProcedure.java|  6 +-
 .../procedure2/TestStateMachineProcedure.java  | 76 ++
 .../hbase/procedure2/TestYieldProcedures.java  |  8 ++-
 3 files changed, 84 insertions(+), 6 deletions(-)



[hbase] branch branch-2 updated: HBASE-23727 Port HBASE-20981 in 2.2 & 2.3

2020-01-24 Thread sakthi
This is an automated email from the ASF dual-hosted git repository.

sakthi pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 4ce1f9b  HBASE-23727 Port HBASE-20981 in 2.2 & 2.3
4ce1f9b is described below

commit 4ce1f9b832324fe1afa1d9b0fafa0b4a7c179c65
Author: jackbearden 
AuthorDate: Wed Aug 1 12:50:25 2018 -0700

HBASE-23727 Port HBASE-20981 in 2.2 & 2.3

    HBASE-20981 - Rollback stateCount accounting thrown-off when exception out of rollbackState

Signed-off-by: Michael Stack 
Signed-off-by: Sakthi 
Signed-off-by: Peter Somogyi 
(cherry picked from commit 8e8b9b698f8c3faf551a0457f5264c6dbfe47950)
---
 .../hbase/procedure2/StateMachineProcedure.java|  6 +-
 .../procedure2/TestStateMachineProcedure.java  | 76 ++
 .../hbase/procedure2/TestYieldProcedures.java  |  8 ++-
 3 files changed, 84 insertions(+), 6 deletions(-)

diff --git a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.java b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.java
index 13c49df..46c4c5e 100644
--- a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.java
+++ b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.java
@@ -54,7 +54,7 @@ public abstract class StateMachineProcedure<TEnvironment, TState>
   private final AtomicBoolean aborted = new AtomicBoolean(false);
 
   private Flow stateFlow = Flow.HAS_MORE_STATE;
-  private int stateCount = 0;
+  protected int stateCount = 0;
   private int[] states = null;
 
   private List<Procedure<TEnvironment>> subProcList = null;
@@ -217,13 +217,13 @@ public abstract class StateMachineProcedure
 try {
   updateTimestamp();
   rollbackState(env, getCurrentState());
-  stateCount--;
 } finally {
+  stateCount--;
   updateTimestamp();
 }
   }
 
-  private boolean isEofState() {
+  protected boolean isEofState() {
 return stateCount > 0 && states[stateCount-1] == EOF_STATE;
   }
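[Editor's note] The one-line move of `stateCount--` into `finally` above is the whole HBASE-20981 fix: if `rollbackState` throws, the counter must still be decremented, or the accounting is thrown off and the same state is retried. The shape of the fix, reduced to self-contained Java with hypothetical names:

```java
// Demonstrates why the decrement belongs in finally: the counter must
// go down even when the per-state rollback throws.
public final class RollbackCounter {
  private int stateCount = 3;

  int getStateCount() {
    return stateCount;
  }

  // Mirrors the fixed StateMachineProcedure.rollback(): decrement in finally.
  void rollbackOneState(Runnable rollbackState) {
    try {
      rollbackState.run();
    } finally {
      stateCount--; // runs even if rollbackState threw
    }
  }

  public static void main(String[] args) {
    RollbackCounter p = new RollbackCounter();
    try {
      p.rollbackOneState(() -> { throw new IllegalStateException("rollback failed"); });
    } catch (IllegalStateException expected) {
      // swallowed for the demo
    }
    System.out.println(p.getStateCount()); // 2, not 3
  }
}
```

With the decrement inside `try` (the pre-fix placement), a throwing rollback would leave `stateCount` at 3.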
 
diff --git a/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/TestStateMachineProcedure.java b/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/TestStateMachineProcedure.java
index 8af8874..9545812 100644
--- a/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/TestStateMachineProcedure.java
+++ b/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/TestStateMachineProcedure.java
@@ -150,6 +150,24 @@ public class TestStateMachineProcedure {
   }
 
   @Test
+  public void testChildNormalRollbackStateCount() {
+procExecutor.getEnvironment().triggerChildRollback = true;
+    TestSMProcedureBadRollback testNormalRollback = new TestSMProcedureBadRollback();
+long procId = procExecutor.submitProcedure(testNormalRollback);
+ProcedureTestingUtility.waitProcedure(procExecutor, procId);
+assertEquals(0, testNormalRollback.stateCount);
+  }
+
+  @Test
+  public void testChildBadRollbackStateCount() {
+procExecutor.getEnvironment().triggerChildRollback = true;
+    TestSMProcedureBadRollback testBadRollback = new TestSMProcedureBadRollback();
+long procId = procExecutor.submitProcedure(testBadRollback);
+ProcedureTestingUtility.waitProcedure(procExecutor, procId);
+assertEquals(0, testBadRollback.stateCount);
+  }
+
+  @Test
   public void testChildOnLastStepWithRollbackDoubleExecution() throws Exception {
 procExecutor.getEnvironment().triggerChildRollback = true;
     ProcedureTestingUtility.setKillAndToggleBeforeStoreUpdate(procExecutor, true);
@@ -208,6 +226,64 @@ public class TestStateMachineProcedure {
 }
   }
 
+  public static class TestSMProcedureBadRollback
+      extends StateMachineProcedure<TestProcEnv, TestSMProcedureState> {
+    @Override
+    protected Flow executeFromState(TestProcEnv env, TestSMProcedureState state) {
+  LOG.info("EXEC " + state + " " + this);
+  env.execCount.incrementAndGet();
+  switch (state) {
+case STEP_1:
+  if (!env.loop) {
+setNextState(TestSMProcedureState.STEP_2);
+  }
+  break;
+case STEP_2:
+  addChildProcedure(new SimpleChildProcedure());
+  return Flow.NO_MORE_STATE;
+  }
+  return Flow.HAS_MORE_STATE;
+}
+@Override
+protected void rollbackState(TestProcEnv env, TestSMProcedureState state) {
+  LOG.info("ROLLBACK " + state + " " + this);
+  env.rollbackCount.incrementAndGet();
+}
+
+@Override
+protected TestSMProcedureState getState(int stateId) {
+  return TestSMProcedureState.values()[stateId];
+}
+
+@Override
+protected int getStateId(TestSMProcedureState state) {
+  return state.ordinal();
+}
+
+@Override
+protected TestSMProcedureState getInitialState() {
+  return TestSMProcedureState.STEP_1;
+}
+
+@Override
+

[hbase] branch branch-2.2 updated (8e8b9b6 -> 656cba9)

2020-01-24 Thread sakthi
This is an automated email from the ASF dual-hosted git repository.

sakthi pushed a change to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from 8e8b9b6  HBASE-23727 Port HBASE-20981 in 2.2 & 2.3
 add 656cba9  HBASE-23728 Include HBASE-21018 in 2.2 & 2.3

No new revisions were added by this update.

Summary of changes:
 .../FanOutOneBlockAsyncDFSOutputSaslHelper.java   | 19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)



[hbase] branch master updated: HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 3c1bccb  HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster
3c1bccb is described below

commit 3c1bccb0f8d3e533da27a410f20bfb3937cf8523
Author: stack 
AuthorDate: Fri Jan 24 10:06:18 2020 -0800

HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster
---
 .../TestSplitTransactionOnCluster.java | 28 +++---
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
index 88c3dff..65bd4f5 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -84,6 +84,7 @@ import org.apache.hadoop.hbase.master.assignment.RegionStateNode;
 import org.apache.hadoop.hbase.master.assignment.RegionStates;
 import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
 import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionProgress;
 import org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController;
 import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.RegionServerTests;
@@ -361,10 +362,8 @@ public class TestSplitTransactionOnCluster {
 final TableName tableName = TableName.valueOf(name.getMethodName());
 
 // Create table then get the single region for our new table.
-    Table t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY);
-    List<HRegion> regions = cluster.getRegions(tableName);
-    RegionInfo hri = getAndCheckSingleTableRegion(regions);
-
+    Table t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY); List<HRegion> regions =
+      cluster.getRegions(tableName); RegionInfo hri = getAndCheckSingleTableRegion(regions);
 int tableRegionIndex = ensureTableRegionNotOnSameServerAsMeta(admin, hri);
 
 // Turn off balancer so it doesn't cut in and mess up our placements.
@@ -381,20 +380,31 @@ public class TestSplitTransactionOnCluster {
   admin.splitRegionAsync(hri.getRegionName()).get(2, TimeUnit.MINUTES);
   // Get daughters
       List<HRegion> daughters = checkAndGetDaughters(tableName);
-  HRegion daughterRegion = daughters.get(0);
   // Now split one of the daughters.
+  HRegion daughterRegion = daughters.get(0);
   RegionInfo daughter = daughterRegion.getRegionInfo();
   LOG.info("Daughter we are going to split: " + daughter);
-      // Compact first to ensure we have cleaned up references -- else the split
-      // will fail.
+      // Compact first to ensure we have cleaned up references -- else the split will fail.
+      // May be a compaction going already so compact will return immediately; if so, wait until
+      // compaction completes.
   daughterRegion.compact(true);
-  daughterRegion.getStores().get(0).closeAndArchiveCompactedFiles();
+  HStore store = daughterRegion.getStores().get(0);
+  CompactionProgress progress = store.getCompactionProgress();
+  if (progress != null) {
+while (progress.getProgressPct() < 1) {
+  LOG.info("Waiting {}", progress);
+  Threads.sleep(1000);
+}
+  }
+  store.closeAndArchiveCompactedFiles();
   for (int i = 0; i < 100; i++) {
 if (!daughterRegion.hasReferences()) {
+  LOG.info("Break -- no references in {}", daughterRegion);
   break;
 }
 Threads.sleep(100);
   }
+      LOG.info("Finished {} references={}", daughterRegion, daughterRegion.hasReferences());
       assertFalse("Waiting for reference to be compacted", daughterRegion.hasReferences());
       LOG.info("Daughter hri before split (has been compacted): " + daughter);
       admin.splitRegionAsync(daughter.getRegionName()).get(2, TimeUnit.MINUTES);



[hbase] branch branch-2 updated: HBASE-23686 Revert binary incompatible change in ByteRangeUtils and removed reflections in CommonFSUtils

2020-01-24 Thread janh
This is an automated email from the ASF dual-hosted git repository.

janh pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new bfa4b0c  HBASE-23686 Revert binary incompatible change in ByteRangeUtils and removed reflections in CommonFSUtils
bfa4b0c is described below

commit bfa4b0c4c1aca4e986973ebb548659722625ae44
Author: Jan Hentschel 
AuthorDate: Fri Jan 24 20:28:01 2020 +0100

    HBASE-23686 Revert binary incompatible change in ByteRangeUtils and removed reflections in CommonFSUtils

Signed-off-by: Sean Busbey 
---
 .../resources/hbase/checkstyle-suppressions.xml|   4 +
 .../java/org/apache/hadoop/hbase/net/Address.java  |   2 +-
 .../apache/hadoop/hbase/util/ByteRangeUtils.java   |   5 +-
 .../apache/hadoop/hbase/util/CommonFSUtils.java| 152 +
 4 files changed, 43 insertions(+), 120 deletions(-)

diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 0694b35..9351ecb 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -51,4 +51,8 @@
   
   
   
+  
+  
+  
+  
 
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
index d76ef9f..48fa522 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
@@ -31,7 +31,7 @@ import org.apache.hbase.thirdparty.com.google.common.net.HostAndPort;
  * We cannot have Guava classes in our API hence this Type.
  */
 @InterfaceAudience.Public
-public final class Address implements Comparable<Address> {
+public class Address implements Comparable<Address> {
   private HostAndPort hostAndPort;
 
   private Address(HostAndPort hostAndPort) {
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
index fb0b336..9acfa26 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
@@ -30,10 +30,7 @@ import org.apache.hbase.thirdparty.com.google.common.collect.Lists;
  * Utility methods for working with {@link ByteRange}.
  */
 @InterfaceAudience.Public
-public final class ByteRangeUtils {
-  private ByteRangeUtils() {
-  }
-
+public class ByteRangeUtils {
   public static int numEqualPrefixBytes(ByteRange left, ByteRange right, int rightInnerOffset) {
     int maxCompares = Math.min(left.getLength(), right.getLength() - rightInnerOffset);
 final byte[] lbytes = left.getBytes();
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
index 9b64e82..a96a799 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
@@ -148,69 +148,22 @@ public abstract class CommonFSUtils {
* Return the number of bytes that large input files should be optimally
* be split into to minimize i/o time.
*
-   * use reflection to search for getDefaultBlockSize(Path f)
-   * if the method doesn't exist, fall back to using getDefaultBlockSize()
-   *
* @param fs filesystem object
* @return the default block size for the path's filesystem
-   * @throws IOException e
*/
-  public static long getDefaultBlockSize(final FileSystem fs, final Path path) throws IOException {
-Method m = null;
-    Class<? extends FileSystem> cls = fs.getClass();
-try {
-  m = cls.getMethod("getDefaultBlockSize", Path.class);
-} catch (NoSuchMethodException e) {
-  LOG.info("FileSystem doesn't support getDefaultBlockSize");
-} catch (SecurityException e) {
-  LOG.info("Doesn't have access to getDefaultBlockSize on FileSystems", e);
-  m = null; // could happen on setAccessible()
-}
-if (m == null) {
-  return fs.getDefaultBlockSize(path);
-} else {
-  try {
-Object ret = m.invoke(fs, path);
-return ((Long)ret).longValue();
-  } catch (Exception e) {
-throw new IOException(e);
-  }
-}
+  public static long getDefaultBlockSize(final FileSystem fs, final Path path) {
+return fs.getDefaultBlockSize(path);
   }
 
   /*
* Get the default replication.
*
-   * use reflection to search for getDefaultReplication(Path f)
-   * if the method doesn't exist, fall back to using getDefaultReplication()
-   *
* @param fs filesystem object
* @param f path of file
* @return default replication for the path's filesystem
-   * @throws IOExce
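[Editor's note] The code removed above probed, via reflection, for a `getDefaultBlockSize(Path)` overload on old Hadoop versions and fell back to the no-arg form when it was absent; since every supported Hadoop now has the overload, a direct call suffices. For reference, the probe-then-fallback idiom that was deleted looks like this in generic, self-contained form (hypothetical class, not CommonFSUtils):

```java
import java.lang.reflect.Method;

// The probe-then-fallback reflection idiom that HBASE-23686 removed:
// look for an overload at runtime and fall back when it is absent.
public final class ReflectiveFallback {
  // Invokes target.name(arg) if such a public single-arg method exists,
  // otherwise returns the fallback value.
  static Object callOrFallback(Object target, String name, Object arg, Object fallback) {
    try {
      Method m = target.getClass().getMethod(name, arg.getClass());
      return m.invoke(target, arg);
    } catch (ReflectiveOperationException e) {
      return fallback; // method missing or inaccessible: use the old behavior
    }
  }

  public static void main(String[] args) {
    // String has concat(String) but no frobnicate(String).
    System.out.println(callOrFallback("block", "concat", "Size", "default"));     // blockSize
    System.out.println(callOrFallback("block", "frobnicate", "Size", "default")); // default
  }
}
```

The revert trades this runtime probing (and its per-call reflection cost) for a plain call, which is why the method also drops `throws IOException`.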

[hbase] branch branch-2 updated: HBASE-23728 Include HBASE-21018 in 2.2 & 2.3

2020-01-24 Thread sakthi
This is an automated email from the ASF dual-hosted git repository.

sakthi pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new ee64aa0  HBASE-23728 Include HBASE-21018 in 2.2 & 2.3
ee64aa0 is described below

commit ee64aa044d3f132b20bec5aa87c9e23ca9c3886d
Author: Wei-Chiu Chuang 
AuthorDate: Fri Jan 24 11:21:39 2020 -0800

HBASE-23728 Include HBASE-21018 in 2.2 & 2.3

    HBASE-21018 - RS crashed because AsyncFS was unable to update HDFS data encryption key

Signed-off-by: Peter Somogyi 
Signed-off-by: Sakthi 
(cherry picked from commit 656cba9fe7b4f1f42228582dea789a6f88ed638c)
---
 .../FanOutOneBlockAsyncDFSOutputSaslHelper.java   | 19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java
index c160391..59215de 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java
@@ -320,16 +320,20 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
 
     private final Promise<Void> promise;
 
+private final DFSClient dfsClient;
+
 private int step = 0;
 
     public SaslNegotiateHandler(Configuration conf, String username, char[] password,
-        Map<String, String> saslProps, int timeoutMs, Promise<Void> promise) throws SaslException {
+        Map<String, String> saslProps, int timeoutMs, Promise<Void> promise,
+        DFSClient dfsClient) throws SaslException {
       this.conf = conf;
       this.saslProps = saslProps;
       this.saslClient = Sasl.createSaslClient(new String[] { MECHANISM }, username, PROTOCOL,
         SERVER_NAME, saslProps, new SaslClientCallbackHandler(username, password));
   this.timeoutMs = timeoutMs;
   this.promise = promise;
+  this.dfsClient = dfsClient;
 }
 
     private void sendSaslMessage(ChannelHandlerContext ctx, byte[] payload) throws IOException {
@@ -387,6 +391,7 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
 
     private void check(DataTransferEncryptorMessageProto proto) throws IOException {
   if (proto.getStatus() == DataTransferEncryptorStatus.ERROR_UNKNOWN_KEY) {
+dfsClient.clearDataEncryptionKey();
 throw new InvalidEncryptionKeyException(proto.getMessage());
   } else if (proto.getStatus() == DataTransferEncryptorStatus.ERROR) {
 throw new IOException(proto.getMessage());
@@ -689,12 +694,14 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
   }
 
   private static void doSaslNegotiation(Configuration conf, Channel channel, int timeoutMs,
-      String username, char[] password, Map<String, String> saslProps, Promise<Void> saslPromise) {
+      String username, char[] password, Map<String, String> saslProps, Promise<Void> saslPromise,
+      DFSClient dfsClient) {
 try {
       channel.pipeline().addLast(new IdleStateHandler(timeoutMs, 0, 0, TimeUnit.MILLISECONDS),
         new ProtobufVarint32FrameDecoder(),
         new ProtobufDecoder(DataTransferEncryptorMessageProto.getDefaultInstance()),
-        new SaslNegotiateHandler(conf, username, password, saslProps, timeoutMs, saslPromise));
+        new SaslNegotiateHandler(conf, username, password, saslProps, timeoutMs, saslPromise,
+            dfsClient));
 } catch (SaslException e) {
   saslPromise.tryFailure(e);
 }
@@ -721,7 +728,8 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
   }
   doSaslNegotiation(conf, channel, timeoutMs, 
getUserNameFromEncryptionKey(encryptionKey),
 encryptionKeyToPassword(encryptionKey.encryptionKey),
-createSaslPropertiesForEncryption(encryptionKey.encryptionAlgorithm), 
saslPromise);
+createSaslPropertiesForEncryption(encryptionKey.encryptionAlgorithm), 
saslPromise,
+  client);
 } else if (!UserGroupInformation.isSecurityEnabled()) {
   if (LOG.isDebugEnabled()) {
 LOG.debug("SASL client skipping handshake in unsecured configuration 
for addr = " + addr
@@ -746,7 +754,8 @@ public final class FanOutOneBlockAsyncDFSOutputSaslHelper {
   "SASL client doing general handshake for addr = " + addr + ", 
datanodeId = " + dnInfo);
   }
   doSaslNegotiation(conf, channel, timeoutMs, buildUsername(accessToken),
-buildClientPassword(accessToken), 
saslPropsResolver.getClientProperties(addr), saslPromise);
+buildClientPassword(accessToken), 
saslPropsResolver.getClientProperties(addr), saslPromise,
+  client);
 } else {
   // It's a secured cluster using non-privileged ports, but no SASL. The 
only way this can
   // happen is if the DataNode has ignore.secure.ports.for.
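
The hunks above thread a `DFSClient` through `SaslNegotiateHandler` so that `check()` can evict the cached data-encryption key before the `InvalidEncryptionKeyException` propagates; a retry then fetches a fresh key instead of replaying the stale one. A minimal, self-contained sketch of that clear-before-throw pattern, with `KeyCache` and `Status` as hypothetical stand-ins for `DFSClient` and the protobuf status:

```java
import java.io.IOException;

// Sketch of the error-handling pattern from the diff above: on an
// unknown-key error, clear the cached encryption key *before* throwing,
// so the caller's retry fetches a fresh key rather than a stale one.
public class SaslCheckSketch {
  public enum Status { SUCCESS, ERROR_UNKNOWN_KEY, ERROR }

  // Hypothetical stand-in for DFSClient's cached data-encryption key.
  public static class KeyCache {
    public String cachedKey = "stale-key";
    public void clear() { cachedKey = null; }  // mirrors clearDataEncryptionKey()
  }

  public static void check(Status status, String message, KeyCache cache) throws IOException {
    if (status == Status.ERROR_UNKNOWN_KEY) {
      cache.clear();  // evict the key before propagating the failure
      throw new IOException("InvalidEncryptionKey: " + message);
    } else if (status == Status.ERROR) {
      throw new IOException(message);
    }
  }

  public static void main(String[] args) {
    KeyCache cache = new KeyCache();
    try {
      check(Status.ERROR_UNKNOWN_KEY, "key 42 not found", cache);
    } catch (IOException expected) {
      // the cache was already cleared when the exception reached us
    }
    System.out.println(cache.cachedKey == null ? "cleared" : "stale");  // prints "cleared"
  }
}
```

The ordering is the point: clearing only after the exception escapes would leave a window where another writer reuses the known-bad key.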

[hbase] branch branch-2.2 updated: HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new fb2d8d1  HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster
fb2d8d1 is described below

commit fb2d8d1e53f21908be66b0120a0c64478032ed54
Author: stack 
AuthorDate: Fri Jan 24 10:06:18 2020 -0800

HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster
---
 .../TestSplitTransactionOnCluster.java | 28 +++---
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
index 2fb822e..72b6835 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -84,6 +84,7 @@ import 
org.apache.hadoop.hbase.master.assignment.RegionStateNode;
 import org.apache.hadoop.hbase.master.assignment.RegionStates;
 import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
 import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionProgress;
 import 
org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController;
 import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.RegionServerTests;
@@ -360,10 +361,8 @@ public class TestSplitTransactionOnCluster {
 final TableName tableName = TableName.valueOf(name.getMethodName());
 
 // Create table then get the single region for our new table.
-Table t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY);
-List<HRegion> regions = cluster.getRegions(tableName);
-RegionInfo hri = getAndCheckSingleTableRegion(regions);
-
+Table t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY); List<HRegion> regions =
+  cluster.getRegions(tableName); RegionInfo hri = getAndCheckSingleTableRegion(regions);
 int tableRegionIndex = ensureTableRegionNotOnSameServerAsMeta(admin, hri);
 
 // Turn off balancer so it doesn't cut in and mess up our placements.
@@ -380,20 +379,31 @@ public class TestSplitTransactionOnCluster {
   admin.splitRegionAsync(hri.getRegionName()).get(2, TimeUnit.MINUTES);
   // Get daughters
   List<HRegion> daughters = checkAndGetDaughters(tableName);
-  HRegion daughterRegion = daughters.get(0);
   // Now split one of the daughters.
+  HRegion daughterRegion = daughters.get(0);
   RegionInfo daughter = daughterRegion.getRegionInfo();
   LOG.info("Daughter we are going to split: " + daughter);
-  // Compact first to ensure we have cleaned up references -- else the split
-  // will fail.
+  // Compact first to ensure we have cleaned up references -- else the split will fail.
+  // May be a compaction going already so compact will return immediately; if so, wait until
+  // compaction completes.
   daughterRegion.compact(true);
-  daughterRegion.getStores().get(0).closeAndArchiveCompactedFiles();
+  HStore store = daughterRegion.getStores().get(0);
+  CompactionProgress progress = store.getCompactionProgress();
+  if (progress != null) {
+    while (progress.getProgressPct() < 1) {
+      LOG.info("Waiting {}", progress);
+      Threads.sleep(1000);
+    }
+  }
+  store.closeAndArchiveCompactedFiles();
   for (int i = 0; i < 100; i++) {
 if (!daughterRegion.hasReferences()) {
+  LOG.info("Break -- no references in {}", daughterRegion);
   break;
 }
 Threads.sleep(100);
   }
+  LOG.info("Finished {} references={}", daughterRegion, daughterRegion.hasReferences());
   assertFalse("Waiting for reference to be compacted", daughterRegion.hasReferences());
   LOG.info("Daughter hri before split (has been compacted): " + daughter);
   admin.splitRegionAsync(daughter.getRegionName()).get(2, TimeUnit.MINUTES);
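
The fix above boils down to polling `CompactionProgress` until it reports 100% before archiving compacted files, instead of assuming `compact(true)` ran synchronously. A self-contained sketch of that polling loop, with `FakeProgress` as a hypothetical stand-in for HBase's `CompactionProgress` (the fake advances one unit per poll instead of sleeping):

```java
// Sketch of the wait loop the flakey-test fix adds: keep polling a
// progress object until the compaction reports complete, then proceed.
public class CompactionWaitSketch {
  // Hypothetical stand-in for CompactionProgress: totalCompactingKVs vs completed.
  public static class FakeProgress {
    public long total, completed;
    public FakeProgress(long total) { this.total = total; }
    public float getProgressPct() { return total == 0 ? 1f : (float) completed / total; }
  }

  // A real loop would Thread.sleep between polls; here the fake "background
  // compaction" advances on each poll so the method terminates deterministically.
  public static int waitForCompaction(FakeProgress progress) {
    int polls = 0;
    while (progress.getProgressPct() < 1) {
      progress.completed++;  // stands in for the compactor making progress
      polls++;
    }
    return polls;
  }

  public static void main(String[] args) {
    FakeProgress p = new FakeProgress(3);
    System.out.println("polls=" + waitForCompaction(p) + " pct=" + p.getProgressPct());
    // prints "polls=3 pct=1.0"
  }
}
```

The null check in the real patch matters too: `getCompactionProgress()` can return nothing when no compaction is running, in which case there is nothing to wait for.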



[hbase] branch master updated (3c1bccb -> 753cc99)

2020-01-24 Thread sakthi
This is an automated email from the ASF dual-hosted git repository.

sakthi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from 3c1bccb  HBASE-23733 [Flakey Tests] TestSplitTransactionOnCluster
 add 753cc99  HBASE-23726 Forward-port HBASE-21345 to branch-2.2, 2.3 & 
master as well.

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java  | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)



[hbase] branch branch-2.2 updated: HBASE-23686 Revert binary incompatible change in ByteRangeUtils and removed reflections in CommonFSUtils

2020-01-24 Thread janh
This is an automated email from the ASF dual-hosted git repository.

janh pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new e04dee7  HBASE-23686 Revert binary incompatible change in 
ByteRangeUtils and removed reflections in CommonFSUtils
e04dee7 is described below

commit e04dee70e19e45465781cf3b097f96cb64a034ff
Author: Jan Hentschel 
AuthorDate: Fri Jan 24 20:28:01 2020 +0100

HBASE-23686 Revert binary incompatible change in ByteRangeUtils and removed 
reflections in CommonFSUtils

Signed-off-by: Sean Busbey 
---
 .../resources/hbase/checkstyle-suppressions.xml|   4 +
 .../java/org/apache/hadoop/hbase/net/Address.java  |   2 +-
 .../apache/hadoop/hbase/util/ByteRangeUtils.java   |   5 +-
 .../apache/hadoop/hbase/util/CommonFSUtils.java| 152 +
 4 files changed, 43 insertions(+), 120 deletions(-)

diff --git 
a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml 
b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index b83b468..de5385c 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -46,4 +46,8 @@
   
   
   
+  
+  
+  
+  
 
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
index d76ef9f..48fa522 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
@@ -31,7 +31,7 @@ import org.apache.hbase.thirdparty.com.google.common.net.HostAndPort;
  * We cannot have Guava classes in our API hence this Type.
  */
 @InterfaceAudience.Public
-public final class Address implements Comparable<Address> {
+public class Address implements Comparable<Address> {
   private HostAndPort hostAndPort;
 
   private Address(HostAndPort hostAndPort) {
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
index fb0b336..9acfa26 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
@@ -30,10 +30,7 @@ import 
org.apache.hbase.thirdparty.com.google.common.collect.Lists;
  * Utility methods for working with {@link ByteRange}.
  */
 @InterfaceAudience.Public
-public final class ByteRangeUtils {
-  private ByteRangeUtils() {
-  }
-
+public class ByteRangeUtils {
   public static int numEqualPrefixBytes(ByteRange left, ByteRange right, int 
rightInnerOffset) {
 int maxCompares = Math.min(left.getLength(), right.getLength() - 
rightInnerOffset);
 final byte[] lbytes = left.getBytes();
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
index 6a9f73d..7ed2a78 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
@@ -148,69 +148,22 @@ public abstract class CommonFSUtils {
* Return the number of bytes that large input files should be optimally
* be split into to minimize i/o time.
*
-   * use reflection to search for getDefaultBlockSize(Path f)
-   * if the method doesn't exist, fall back to using getDefaultBlockSize()
-   *
* @param fs filesystem object
* @return the default block size for the path's filesystem
-   * @throws IOException e
*/
-  public static long getDefaultBlockSize(final FileSystem fs, final Path path) throws IOException {
-    Method m = null;
-    Class<? extends FileSystem> cls = fs.getClass();
-    try {
-      m = cls.getMethod("getDefaultBlockSize", Path.class);
-    } catch (NoSuchMethodException e) {
-      LOG.info("FileSystem doesn't support getDefaultBlockSize");
-    } catch (SecurityException e) {
-      LOG.info("Doesn't have access to getDefaultBlockSize on FileSystems", e);
-      m = null; // could happen on setAccessible()
-    }
-    if (m == null) {
-      return fs.getDefaultBlockSize(path);
-    } else {
-      try {
-        Object ret = m.invoke(fs, path);
-        return ((Long)ret).longValue();
-      } catch (Exception e) {
-        throw new IOException(e);
-      }
-    }
+  public static long getDefaultBlockSize(final FileSystem fs, final Path path) {
+    return fs.getDefaultBlockSize(path);
   }
 
   /*
* Get the default replication.
*
-   * use reflection to search for getDefaultReplication(Path f)
-   * if the method doesn't exist, fall back to using getDefaultReplication()
-   *
* @param fs filesystem object
* @param f path of file
* @return default replication for the path's filesystem
-   * @throws IO
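
The change above drops the reflective probe for `getDefaultBlockSize(Path)` in favor of a direct call, since every Hadoop version HBase now supports provides the path-aware overload. A hedged, self-contained sketch contrasting the two styles; `StubFs` is a hypothetical stand-in for Hadoop's `FileSystem`:

```java
import java.lang.reflect.Method;

// Contrast of the removed reflective lookup with the direct call that
// replaced it. Once the target method is guaranteed to exist, reflection
// only adds overhead, checked-exception noise, and a silent fallback path.
public class ReflectionRemovalSketch {
  public static class StubFs {
    public long getDefaultBlockSize(String path) { return 128L * 1024 * 1024; }
  }

  // Old style: probe for the method at runtime, invoke through Method.
  public static long viaReflection(StubFs fs, String path) throws Exception {
    Method m = fs.getClass().getMethod("getDefaultBlockSize", String.class);
    return (Long) m.invoke(fs, path);
  }

  // New style: just call it.
  public static long direct(StubFs fs, String path) {
    return fs.getDefaultBlockSize(path);
  }

  public static void main(String[] args) throws Exception {
    StubFs fs = new StubFs();
    System.out.println(viaReflection(fs, "/a") == direct(fs, "/a"));  // prints "true"
  }
}
```

The same reasoning applies to the `getDefaultReplication` cleanup in the rest of this patch: the reflective fallback existed only to bridge Hadoop versions that lacked the overload.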

[hbase] branch branch-2 updated: HBASE-23735 [Flakey Tests] TestClusterRestartFailover & TestClusterRestartFailoverSplitWithoutZk

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 1342427  HBASE-23735 [Flakey Tests] TestClusterRestartFailover & 
TestClusterRestartFailoverSplitWithoutZk
1342427 is described below

commit 134242720d39ac6cbf33f3d11cea3033cf20e221
Author: stack 
AuthorDate: Fri Jan 24 12:29:29 2020 -0800

HBASE-23735 [Flakey Tests] TestClusterRestartFailover & 
TestClusterRestartFailoverSplitWithoutZk
---
 .../hbase/master/TestClusterRestartFailover.java   | 25 ++
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
index a6844fc..338173e 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -40,8 +40,10 @@ import org.apache.hadoop.hbase.master.assignment.ServerState;
 import org.apache.hadoop.hbase.master.assignment.ServerStateNode;
 import org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure;
 import org.apache.hadoop.hbase.procedure2.Procedure;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
 import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
 import org.apache.zookeeper.KeeperException;
 import org.junit.ClassRule;
 import org.junit.Test;
@@ -58,7 +60,7 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(TestClusterRestartFailover.class);
 
-  private static CountDownLatch SCP_LATCH;
+  private volatile static CountDownLatch SCP_LATCH;
   private static ServerName SERVER_FOR_TEST;
 
   @Override
@@ -79,7 +81,16 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
 setupCluster();
 setupTable();
 
-SERVER_FOR_TEST = 
UTIL.getHBaseCluster().getRegionServer(0).getServerName();
+// Find server that does not have hbase:namespace on it. This tests holds 
up SCPs. If it
+// holds up the server w/ hbase:namespace, the Master initialization will 
be held up
+// because this table is not online and test fails.
+for (JVMClusterUtil.RegionServerThread rst:
+UTIL.getHBaseCluster().getLiveRegionServerThreads()) {
+  HRegionServer rs = rst.getRegionServer();
+  if (rs.getRegions(TableName.NAMESPACE_TABLE_NAME).isEmpty()) {
+SERVER_FOR_TEST = rs.getServerName();
+  }
+}
 UTIL.waitFor(6, () -> getServerStateNode(SERVER_FOR_TEST) != null);
 ServerStateNode serverNode = getServerStateNode(SERVER_FOR_TEST);
 assertNotNull(serverNode);
@@ -98,8 +109,9 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
 LOG.info("Restarting cluster");
 
UTIL.restartHBaseCluster(StartMiniClusterOption.builder().masterClass(HMasterForTest.class)
 .numMasters(1).numRegionServers(3).rsPorts(ports).build());
+LOG.info("Started cluster");
 UTIL.waitFor(6, () -> 
UTIL.getHBaseCluster().getMaster().isInitialized());
-
+LOG.info("Started cluster master, waiting for {}", SERVER_FOR_TEST);
 UTIL.waitFor(6, () -> getServerStateNode(SERVER_FOR_TEST) != null);
 serverNode = getServerStateNode(SERVER_FOR_TEST);
 assertFalse("serverNode should not be ONLINE during SCP processing",
@@ -113,6 +125,7 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
   Procedure.NO_PROC_ID);
 
 // Wait the SCP to finish
+LOG.info("Waiting on latch");
 SCP_LATCH.countDown();
 UTIL.waitFor(6, () -> procedure.get().isFinished());
 
@@ -126,13 +139,17 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
   }
 
   private void setupCluster() throws Exception {
+LOG.info("Setup cluster");
 UTIL.startMiniCluster(
 
StartMiniClusterOption.builder().masterClass(HMasterForTest.class).numMasters(1)
 .numRegionServers(3).build());
+LOG.info("Cluster is up");
 UTIL.waitFor(6, () -> 
UTIL.getMiniHBaseCluster().getMaster().isInitialized());
+LOG.info("Master is up");
 // wait for all SCPs finished
 UTIL.waitFor(6, () -> 
UTIL.getHBaseCluster().getMaster().getProcedures().stream()
 .noneMatch(p -> p instanceof ServerCrashProcedure));
+LOG.info("No SCPs");
   }
 
   priv
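
One small but deliberate detail in the patch above is marking the static `SCP_LATCH` field `volatile`: the test thread assigns it while `HMasterForTest` code reads it from another thread, and without `volatile` the reader may see a stale `null`. A minimal, self-contained illustration of that visibility guarantee (not the HBase test itself):

```java
import java.util.concurrent.CountDownLatch;

// Why a cross-thread static latch field should be volatile: one thread
// publishes the latch, another spins until it sees it, then blocks on it.
public class VolatileLatchSketch {
  private static volatile CountDownLatch LATCH;

  public static String run() throws InterruptedException {
    StringBuilder result = new StringBuilder();
    Thread waiter = new Thread(() -> {
      while (LATCH == null) { Thread.onSpinWait(); }  // volatile read sees the publish
      try { LATCH.await(); } catch (InterruptedException ignored) { }
      result.append("released");
    });
    waiter.start();
    LATCH = new CountDownLatch(1);  // volatile write safely publishes the object
    LATCH.countDown();              // release the waiter
    waiter.join();                  // join gives us visibility of result
    return result.toString();
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(run());  // prints "released"
  }
}
```

Under the Java memory model a plain static field carries no such guarantee, so the waiter's `while (LATCH == null)` loop could in principle spin forever on a cached `null`.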

[hbase] branch branch-2.1 updated: HBASE-23686 Revert binary incompatible change in ByteRangeUtils and removed reflections in CommonFSUtils

2020-01-24 Thread janh
This is an automated email from the ASF dual-hosted git repository.

janh pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 00555d2  HBASE-23686 Revert binary incompatible change in 
ByteRangeUtils and removed reflections in CommonFSUtils
00555d2 is described below

commit 00555d2fe4bdbdf21a121370aefc8ea48a2c74f0
Author: Jan Hentschel 
AuthorDate: Fri Jan 24 20:28:01 2020 +0100

HBASE-23686 Revert binary incompatible change in ByteRangeUtils and removed 
reflections in CommonFSUtils

Signed-off-by: Sean Busbey 
---
 .../resources/hbase/checkstyle-suppressions.xml|  4 ++
 .../java/org/apache/hadoop/hbase/net/Address.java  |  2 +-
 .../apache/hadoop/hbase/util/ByteRangeUtils.java   |  5 +-
 .../apache/hadoop/hbase/util/CommonFSUtils.java| 67 --
 4 files changed, 16 insertions(+), 62 deletions(-)

diff --git 
a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml 
b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index b83b468..de5385c 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -46,4 +46,8 @@
   
   
   
+  
+  
+  
+  
 
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
index d76ef9f..48fa522 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/net/Address.java
@@ -31,7 +31,7 @@ import 
org.apache.hbase.thirdparty.com.google.common.net.HostAndPort;
  * We cannot have Guava classes in our API hence this Type.
  */
 @InterfaceAudience.Public
-public final class Address implements Comparable<Address> {
+public class Address implements Comparable<Address> {
   private HostAndPort hostAndPort;
 
   private Address(HostAndPort hostAndPort) {
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
index fb0b336..9acfa26 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteRangeUtils.java
@@ -30,10 +30,7 @@ import 
org.apache.hbase.thirdparty.com.google.common.collect.Lists;
  * Utility methods for working with {@link ByteRange}.
  */
 @InterfaceAudience.Public
-public final class ByteRangeUtils {
-  private ByteRangeUtils() {
-  }
-
+public class ByteRangeUtils {
   public static int numEqualPrefixBytes(ByteRange left, ByteRange right, int 
rightInnerOffset) {
 int maxCompares = Math.min(left.getLength(), right.getLength() - 
rightInnerOffset);
 final byte[] lbytes = left.getBytes();
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
index 6a9f73d..2384d14 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
@@ -148,69 +148,22 @@ public abstract class CommonFSUtils {
* Return the number of bytes that large input files should be optimally
* be split into to minimize i/o time.
*
-   * use reflection to search for getDefaultBlockSize(Path f)
-   * if the method doesn't exist, fall back to using getDefaultBlockSize()
-   *
* @param fs filesystem object
* @return the default block size for the path's filesystem
-   * @throws IOException e
*/
-  public static long getDefaultBlockSize(final FileSystem fs, final Path path) throws IOException {
-    Method m = null;
-    Class<? extends FileSystem> cls = fs.getClass();
-    try {
-      m = cls.getMethod("getDefaultBlockSize", Path.class);
-    } catch (NoSuchMethodException e) {
-      LOG.info("FileSystem doesn't support getDefaultBlockSize");
-    } catch (SecurityException e) {
-      LOG.info("Doesn't have access to getDefaultBlockSize on FileSystems", e);
-      m = null; // could happen on setAccessible()
-    }
-    if (m == null) {
-      return fs.getDefaultBlockSize(path);
-    } else {
-      try {
-        Object ret = m.invoke(fs, path);
-        return ((Long)ret).longValue();
-      } catch (Exception e) {
-        throw new IOException(e);
-      }
-    }
+  public static long getDefaultBlockSize(final FileSystem fs, final Path path) {
+    return fs.getDefaultBlockSize(path);
   }
 
   /*
* Get the default replication.
*
-   * use reflection to search for getDefaultReplication(Path f)
-   * if the method doesn't exist, fall back to using getDefaultReplication()
-   *
* @param fs filesystem object
* @param f path of file
* @return default replication for the path's filesystem
-   * @throws IOExc

[hbase] branch branch-2 updated: HBASE-23737 [Flakey Tests] TestFavoredNodeTableImport fails 30% of the time

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 9cf57a7  HBASE-23737 [Flakey Tests] TestFavoredNodeTableImport fails 
30% of the time
9cf57a7 is described below

commit 9cf57a7db6217151fab9344e6eade4d0843bd405
Author: stack 
AuthorDate: Fri Jan 24 17:56:42 2020 -0800

HBASE-23737 [Flakey Tests] TestFavoredNodeTableImport fails 30% of the time
---
 .../balancer/TestFavoredNodeTableImport.java   | 27 +++---
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredNodeTableImport.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredNodeTableImport.java
index 6958ed2..29f0708 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredNodeTableImport.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredNodeTableImport.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.master.balancer;
 import static 
org.apache.hadoop.hbase.favored.FavoredNodeAssignmentHelper.FAVORED_NODES_NUM;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
 
 import java.util.List;
 import java.util.Set;
@@ -34,11 +35,13 @@ import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.favored.FavoredNodesManager;
+import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.testclassification.MediumTests;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Threads;
 import org.junit.After;
 import org.junit.ClassRule;
+import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
@@ -66,7 +69,6 @@ public class TestFavoredNodeTableImport {
 
   @After
   public void stopCluster() throws Exception {
-UTIL.cleanupTestDir();
 UTIL.shutdownMiniCluster();
   }
 
@@ -81,13 +83,14 @@ public class TestFavoredNodeTableImport {
   Threads.sleep(1);
 }
 Admin admin = UTIL.getAdmin();
-admin.setBalancerRunning(false, true);
+admin.balancerSwitch(false, true);
 
 String tableName = "testFNImport";
 HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(tableName));
 desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
 admin.createTable(desc, Bytes.toBytes("a"), Bytes.toBytes("z"), 
REGION_NUM);
 UTIL.waitTableAvailable(desc.getTableName());
+admin.balancerSwitch(true, true);
 
 LOG.info("Shutting down cluster");
 UTIL.shutdownMiniHBaseCluster();
@@ -97,18 +100,26 @@ public class TestFavoredNodeTableImport {
 UTIL.getConfiguration().set(HConstants.HBASE_MASTER_LOADBALANCER_CLASS,
 FavoredStochasticBalancer.class.getName());
 UTIL.restartHBaseCluster(SLAVES);
-while (!UTIL.getMiniHBaseCluster().getMaster().isInitialized()) {
+HMaster master = UTIL.getMiniHBaseCluster().getMaster();
+while (!master.isInitialized()) {
   Threads.sleep(1);
 }
-admin = UTIL.getAdmin();
-
 UTIL.waitTableAvailable(desc.getTableName());
+UTIL.waitUntilNoRegionsInTransition(1);
+assertTrue(master.isBalancerOn());
 
-FavoredNodesManager fnm = 
UTIL.getHBaseCluster().getMaster().getFavoredNodesManager();
+FavoredNodesManager fnm = master.getFavoredNodesManager();
+assertNotNull(fnm);
 
+admin = UTIL.getAdmin();
 List<HRegionInfo> regionsOfTable = admin.getTableRegions(TableName.valueOf(tableName));
 for (HRegionInfo rInfo : regionsOfTable) {
-  Set<ServerName> favNodes = Sets.newHashSet(fnm.getFavoredNodes(rInfo));
+  assertNotNull(rInfo);
+  assertNotNull(fnm);
+  List<ServerName> fns = fnm.getFavoredNodes(rInfo);
+  LOG.info("FNS {} {}", rInfo, fns);
+  assertNotNull(rInfo.toString(), fns);
+  Set<ServerName> favNodes = Sets.newHashSet(fns);
   assertNotNull(favNodes);
   assertEquals("Required no of favored nodes not found.", FAVORED_NODES_NUM, favNodes.size());
   for (ServerName fn : favNodes) {
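
These test fixes lean repeatedly on the `waitFor(timeout, predicate)` idiom, polling until the master is initialized or no regions are in transition. A hypothetical mini version of that helper, assuming only the poll-until-deadline shape of the real utility:

```java
import java.util.function.BooleanSupplier;

// Minimal poll-with-timeout helper in the style of the waitFor calls above:
// re-check a condition at a fixed interval until it holds or a deadline passes.
public class WaitForSketch {
  public static boolean waitFor(long timeoutMs, long pollMs, BooleanSupplier cond)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (cond.getAsBoolean()) { return true; }
      Thread.sleep(pollMs);
    }
    return cond.getAsBoolean();  // one last check at the deadline
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Condition becomes true ~50 ms in; should succeed well inside the 1 s budget.
    boolean ok = waitFor(1000, 10, () -> System.currentTimeMillis() - start > 50);
    System.out.println(ok);  // prints "true"
  }
}
```

Polling with a bounded deadline is what keeps these tests from hanging a build when a condition never becomes true: the failure surfaces as a timeout assertion instead of a stuck JVM.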



[hbase] branch master updated: HBASE-23735 [Flakey Tests] TestClusterRestartFailover & TestClusterRestartFailoverSplitWithoutZk

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 1690414  HBASE-23735 [Flakey Tests] TestClusterRestartFailover & 
TestClusterRestartFailoverSplitWithoutZk
1690414 is described below

commit 16904142635b751ec0986145039d6e2fa27d89b0
Author: stack 
AuthorDate: Fri Jan 24 12:29:29 2020 -0800

HBASE-23735 [Flakey Tests] TestClusterRestartFailover & 
TestClusterRestartFailoverSplitWithoutZk
---
 .../hbase/master/TestClusterRestartFailover.java   | 25 ++
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
index a6844fc..338173e 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -40,8 +40,10 @@ import org.apache.hadoop.hbase.master.assignment.ServerState;
 import org.apache.hadoop.hbase.master.assignment.ServerStateNode;
 import org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure;
 import org.apache.hadoop.hbase.procedure2.Procedure;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
 import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
 import org.apache.zookeeper.KeeperException;
 import org.junit.ClassRule;
 import org.junit.Test;
@@ -58,7 +60,7 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(TestClusterRestartFailover.class);
 
-  private static CountDownLatch SCP_LATCH;
+  private volatile static CountDownLatch SCP_LATCH;
   private static ServerName SERVER_FOR_TEST;
 
   @Override
@@ -79,7 +81,16 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
 setupCluster();
 setupTable();
 
-SERVER_FOR_TEST = 
UTIL.getHBaseCluster().getRegionServer(0).getServerName();
+// Find server that does not have hbase:namespace on it. This tests holds 
up SCPs. If it
+// holds up the server w/ hbase:namespace, the Master initialization will 
be held up
+// because this table is not online and test fails.
+for (JVMClusterUtil.RegionServerThread rst:
+UTIL.getHBaseCluster().getLiveRegionServerThreads()) {
+  HRegionServer rs = rst.getRegionServer();
+  if (rs.getRegions(TableName.NAMESPACE_TABLE_NAME).isEmpty()) {
+SERVER_FOR_TEST = rs.getServerName();
+  }
+}
 UTIL.waitFor(6, () -> getServerStateNode(SERVER_FOR_TEST) != null);
 ServerStateNode serverNode = getServerStateNode(SERVER_FOR_TEST);
 assertNotNull(serverNode);
@@ -98,8 +109,9 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
 LOG.info("Restarting cluster");
 
UTIL.restartHBaseCluster(StartMiniClusterOption.builder().masterClass(HMasterForTest.class)
 .numMasters(1).numRegionServers(3).rsPorts(ports).build());
+LOG.info("Started cluster");
 UTIL.waitFor(6, () -> 
UTIL.getHBaseCluster().getMaster().isInitialized());
-
+LOG.info("Started cluster master, waiting for {}", SERVER_FOR_TEST);
 UTIL.waitFor(6, () -> getServerStateNode(SERVER_FOR_TEST) != null);
 serverNode = getServerStateNode(SERVER_FOR_TEST);
 assertFalse("serverNode should not be ONLINE during SCP processing",
@@ -113,6 +125,7 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
   Procedure.NO_PROC_ID);
 
 // Wait the SCP to finish
+LOG.info("Waiting on latch");
 SCP_LATCH.countDown();
 UTIL.waitFor(6, () -> procedure.get().isFinished());
 
@@ -126,13 +139,17 @@ public class TestClusterRestartFailover extends 
AbstractTestRestartCluster {
   }
 
   private void setupCluster() throws Exception {
+LOG.info("Setup cluster");
 UTIL.startMiniCluster(
 
StartMiniClusterOption.builder().masterClass(HMasterForTest.class).numMasters(1)
 .numRegionServers(3).build());
+LOG.info("Cluster is up");
 UTIL.waitFor(6, () -> 
UTIL.getMiniHBaseCluster().getMaster().isInitialized());
+LOG.info("Master is up");
 // wait for all SCPs finished
 UTIL.waitFor(6, () -> 
UTIL.getHBaseCluster().getMaster().getProcedures().stream()
 .noneMatch(p -> p instanceof ServerCrashProcedure));
+LOG.info("No SCPs");
   }
 
   private 

[hbase] 01/04: Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM"

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit a91346a727bdc61af5c47e2bde8a75eed02b7fe3
Author: stack 
AuthorDate: Fri Jan 24 18:47:02 2020 -0800

Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); 
ADDENDUM"

This reverts commit 9f04fa69b0e73eda17bc02a04932f2238c5ad7d8.

Revert to see if this causing strange test failure on nightlies.
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 6cf5508..589170c 100755
--- a/pom.xml
+++ b/pom.xml
@@ -1370,7 +1370,7 @@
 3.0.0-M4
 2.12
 1.0.1
-3.1.1
+3.2.0
 
 



[hbase] 02/04: Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM"

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 502cb33601391a6a56bee9b9895cfc5259dd956b
Author: stack 
AuthorDate: Fri Jan 24 18:47:30 2020 -0800

Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); 
ADDENDUM"

This reverts commit 2a11b7e94bf96e7c9ceb408fdeed67dc3e9cdf41.

 Revert to see if this causing strange test failure on nightlies.
---
 pom.xml | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/pom.xml b/pom.xml
index 589170c..95f762e 100755
--- a/pom.xml
+++ b/pom.xml
@@ -3927,4 +3927,10 @@
   file:///tmp
 
   
+  
+
+  thirdparty
+  
https://repository.apache.org/content/repositories/orgapachehbase-1381
+
+  
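The added lines in this hunk have had their XML markup stripped by the archive; they appear to define a Maven staging repository in pom.xml. A sketch of the likely element, assuming standard pom.xml tags (`repositories`, `repository`, `id`, `url` are inferred from context, not taken from the original):

```xml
<repositories>
  <repository>
    <id>thirdparty</id>
    <url>https://repository.apache.org/content/repositories/orgapachehbase-1381</url>
  </repository>
</repositories>
```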
 



[hbase] 03/04: Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM"

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 83a2f6b54627ee17a50bc9b83c1d277cceb9c8f2
Author: stack 
AuthorDate: Fri Jan 24 18:47:34 2020 -0800

Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM"

This reverts commit 0dc71f9fdf1e02980c88ce0eb74fc7f8fc63272f.

 Revert to see if this is causing the strange test failures on nightlies.
---
 pom.xml | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/pom.xml b/pom.xml
index 95f762e..beb7897 100755
--- a/pom.xml
+++ b/pom.xml
@@ -1349,7 +1349,7 @@
 4.2.0
 
 0.13
-1.5.8
+1.5.8.1
 1.5.0-rc.2
 3.0.0
 1.4
@@ -1370,7 +1370,7 @@
 3.0.0-M4
 2.12
 1.0.1
-3.2.0
+3.1.1
 
 
@@ -3927,10 +3927,4 @@
   file:///tmp
 
   
-  
-
-  thirdparty
-  
https://repository.apache.org/content/repositories/orgapachehbase-1381
-
-  
 



[hbase] 04/04: Revert "HBASE-23069 periodic dependency bump for Sep 2019 (#1082)"

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 3d9e536e2a51170b18c8c08e19fe510832b2da90
Author: stack 
AuthorDate: Fri Jan 24 18:47:39 2020 -0800

Revert "HBASE-23069 periodic dependency bump for Sep 2019 (#1082)"

This reverts commit 792feec05d4b82e78cfb058f02313f3f45e1c2b0.

 Revert to see if this is causing the strange test failures on nightlies.
---
 pom.xml | 52 +++-
 1 file changed, 15 insertions(+), 37 deletions(-)

diff --git a/pom.xml b/pom.xml
index beb7897..d0ab4f6 100755
--- a/pom.xml
+++ b/pom.xml
@@ -1297,24 +1297,20 @@
 
 0.5.0
 1.7.7
-2.8.1
-1.13
+2.6.2
+1.10
 
-2.6
+2.5
 3.9
 3.6.1
-3.4.2
-
+3.3.6
 4.5.3
-4.4.13
+4.4.6
 3.2.6
-2.10.1
-2.10.1
+2.9.10
+2.9.10.1
 2.2.12
-9.3.28.v20191105
+9.3.27.v20190418
 3.1.0
 2.0.1
 
@@ -1329,13 +1325,13 @@
 2.28.2
 
 2.5.0
-0.6.1
+0.5.0
 thrift
 0.12.0
-3.4.14
+3.4.10
 
 0.9.94
-1.7.30
+1.7.25
 4.0.3
 2.4.1
 1.3.8
@@ -1348,14 +1344,14 @@
 1.0.0
 4.2.0
 
-0.13
-1.5.8.1
-1.5.0-rc.2
+0.12
+1.5.5
+1.5.0-alpha.15
 3.0.0
 1.4
 8.28
 1.6.0
-2.3.4
+2.3.3
 1.3.9-1
 3.0.4
 2.4.2
@@ -1652,16 +1648,6 @@
 hbase-zookeeper
 org.apache.hbase
 ${project.version}
-
-  
-com.google.code.findbugs
-jsr305
-  
-  
-com.github.spotbugs
-spotbugs-annotations
-  
-
   
   
 hbase-zookeeper
@@ -1779,14 +1765,6 @@
 ${zookeeper.version}
 
   
-com.google.code.findbugs
-jsr305
-  
-  
-com.github.spotbugs
-spotbugs-annotations
-  
-  
 jline
 jline
   
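The removed lines in the two hunks above likewise read like Maven dependency exclusions for `jsr305` and `spotbugs-annotations`, with the XML tags stripped by the archive. A sketch of what such an exclusions block would look like, assuming standard pom.xml dependency-exclusion markup:

```xml
<exclusions>
  <exclusion>
    <groupId>com.google.code.findbugs</groupId>
    <artifactId>jsr305</artifactId>
  </exclusion>
  <exclusion>
    <groupId>com.github.spotbugs</groupId>
    <artifactId>spotbugs-annotations</artifactId>
  </exclusion>
</exclusions>
```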



[hbase] branch branch-2 updated (9cf57a7 -> 3d9e536)

2020-01-24 Thread stack
This is an automated email from the ASF dual-hosted git repository.

stack pushed a change to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from 9cf57a7  HBASE-23737 [Flakey Tests] TestFavoredNodeTableImport fails 30% of the time
 new a91346a  Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM"
 new 502cb33  Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM"
 new 83a2f6b  Revert " HBASE-23069 periodic dependency bump for Sep 2019 (#1082); ADDENDUM"
 new 3d9e536  Revert "HBASE-23069 periodic dependency bump for Sep 2019 (#1082)"

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 pom.xml | 52 +++-
 1 file changed, 15 insertions(+), 37 deletions(-)