(hbase) branch branch-2.6 updated: HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)

2024-01-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.6
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.6 by this push:
 new fa115672136 HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)
fa115672136 is described below

commit fa115672136791ed6181a4c5f0d59dc76625e6a5
Author: Xin Sun 
AuthorDate: Fri Jan 26 14:51:14 2024 +0800

HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)

Signed-off-by: Duo Zhang 
---
 .../test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
index 253a9c144c7..9e36f36a7a2 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
@@ -41,7 +41,7 @@ public class TestUnknownServers {
 
   private static HBaseTestingUtility UTIL;
   private static Admin ADMIN;
-  private final static int SLAVES = 2;
+  private final static int SLAVES = 1;
   private static boolean IS_UNKNOWN_SERVER = true;
 
   @BeforeClass
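
The fix itself is the one-line SLAVES change above: the mini cluster now starts a single RegionServer instead of two, presumably so the set of servers the test can report as unknown is fixed. For illustration only, a minimal sketch of how such a single-RegionServer mini cluster is typically brought up with the branch-2 test utilities; the class name below is made up, and only the SLAVES constant mirrors the patch.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class SingleRegionServerClusterSketch {
  private static HBaseTestingUtility UTIL;
  private static Admin ADMIN;
  // One RegionServer, as in the patch: a single server can be stopped and then
  // appear in the unknown-servers listing, with no second server adding noise.
  private final static int SLAVES = 1;

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    UTIL = new HBaseTestingUtility();
    UTIL.startMiniCluster(SLAVES);
    ADMIN = UTIL.getAdmin();
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    UTIL.shutdownMiniCluster();
  }
}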



(hbase) branch branch-2.5 updated: HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)

2024-01-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.5
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.5 by this push:
 new b39e695f824 HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)
b39e695f824 is described below

commit b39e695f82460e056caa190427e6e1cda7dd90ea
Author: Xin Sun 
AuthorDate: Fri Jan 26 14:51:14 2024 +0800

HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)

Signed-off-by: Duo Zhang 
---
 .../test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
index 253a9c144c7..9e36f36a7a2 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
@@ -41,7 +41,7 @@ public class TestUnknownServers {
 
   private static HBaseTestingUtility UTIL;
   private static Admin ADMIN;
-  private final static int SLAVES = 2;
+  private final static int SLAVES = 1;
   private static boolean IS_UNKNOWN_SERVER = true;
 
   @BeforeClass



(hbase) branch branch-2 updated: HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)

2024-01-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new a2fbb4d88a7 HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)
a2fbb4d88a7 is described below

commit a2fbb4d88a72056310cb37c10c4d8480407f0860
Author: Xin Sun 
AuthorDate: Fri Jan 26 14:51:14 2024 +0800

HBASE-28330 TestUnknownServers.testListUnknownServers is flaky in branch-2 (#5650)

Signed-off-by: Duo Zhang 
---
 .../test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
index 253a9c144c7..9e36f36a7a2 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestUnknownServers.java
@@ -41,7 +41,7 @@ public class TestUnknownServers {
 
   private static HBaseTestingUtility UTIL;
   private static Admin ADMIN;
-  private final static int SLAVES = 2;
+  private final static int SLAVES = 1;
   private static boolean IS_UNKNOWN_SERVER = true;
 
   @BeforeClass



(hbase) branch branch-2.4 updated: HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

2024-01-21 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 7ced6d3e51c HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)
7ced6d3e51c is described below

commit 7ced6d3e51cb4c1c47c76805c98411a512348494
Author: Xin Sun 
AuthorDate: Mon Jan 22 14:02:11 2024 +0800

HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

Signed-off-by: Duo Zhang 
---
 .../master/normalizer/TestRegionNormalizerWorkQueue.java| 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
index c6d14c19114..088df7e7376 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
@@ -22,7 +22,6 @@ import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.greaterThan;
 import static org.hamcrest.Matchers.lessThanOrEqualTo;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.util.ArrayList;
@@ -41,6 +40,8 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.Waiter;
 import org.apache.hadoop.hbase.testclassification.MasterTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.junit.ClassRule;
@@ -186,6 +187,7 @@ public class TestRegionNormalizerWorkQueue {
 final RegionNormalizerWorkQueue queue = new 
RegionNormalizerWorkQueue<>();
 final ConcurrentLinkedQueue takeTimes = new 
ConcurrentLinkedQueue<>();
 final AtomicBoolean finished = new AtomicBoolean(false);
+final int count = 5;
 final Runnable consumer = () -> {
   try {
 while (!finished.get()) {
@@ -199,11 +201,12 @@ public class TestRegionNormalizerWorkQueue {
 
 CompletableFuture worker = CompletableFuture.runAsync(consumer);
 final long testStart = System.nanoTime();
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   Thread.sleep(10);
   queue.put(i);
 }
-
+// should have timing information for 5 calls to take.
+Waiter.waitFor(HBaseConfiguration.create(), 1000, () -> count == 
takeTimes.size());
 // set finished = true and pipe one more value in case the thread needs an 
extra pass through
 // the loop.
 finished.set(true);
@@ -211,9 +214,7 @@ public class TestRegionNormalizerWorkQueue {
 worker.get(1, TimeUnit.SECONDS);
 
 final Iterator times = takeTimes.iterator();
-assertTrue("should have timing information for at least 2 calls to take.",
-  takeTimes.size() >= 5);
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   assertThat(
 "Observations collected in takeTimes should increase by roughly 10ms 
every interval",
 times.next(), greaterThan(testStart + TimeUnit.MILLISECONDS.toNanos(i 
* 10)));
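
The de-flaking change in this patch swaps an immediate size assertion for a bounded poll: Waiter.waitFor blocks for up to one second until the consumer thread has recorded one take() timestamp per queued item, so the per-interval assertions that follow no longer race the worker. Below is a self-contained sketch of the same pattern; only Waiter.waitFor and HBaseConfiguration.create are taken from HBase, and a plain LinkedBlockingQueue stands in for RegionNormalizerWorkQueue.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.Waiter;

public class WaitForTakesSketch {
  public static void main(String[] args) throws Exception {
    final LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
    final ConcurrentLinkedQueue<Long> takeTimes = new ConcurrentLinkedQueue<>();
    final AtomicBoolean finished = new AtomicBoolean(false);
    final int count = 5;

    // Consumer: record a nanosecond timestamp for every element taken.
    CompletableFuture<Void> worker = CompletableFuture.runAsync(() -> {
      try {
        while (!finished.get()) {
          queue.take();
          takeTimes.add(System.nanoTime());
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });

    // Producer: one element roughly every 10 ms.
    for (int i = 0; i < count; i++) {
      Thread.sleep(10);
      queue.put(i);
    }

    // Poll for up to 1000 ms until the consumer has logged `count` takes,
    // instead of asserting on takeTimes.size() immediately and racing the worker.
    Waiter.waitFor(HBaseConfiguration.create(), 1000, () -> takeTimes.size() == count);

    // Let the worker exit: flip the flag and feed one extra element so take() returns.
    finished.set(true);
    queue.put(-1);
    worker.join();
    System.out.println("observed takes: " + takeTimes.size());
  }
}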



(hbase) branch branch-2.6 updated: HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

2024-01-21 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.6
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.6 by this push:
 new 541eee0b35c HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)
541eee0b35c is described below

commit 541eee0b35c0156903509e3ddfc95ab953d66906
Author: Xin Sun 
AuthorDate: Mon Jan 22 14:02:11 2024 +0800

HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

Signed-off-by: Duo Zhang 
---
 .../master/normalizer/TestRegionNormalizerWorkQueue.java| 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
index c6d14c19114..088df7e7376 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
@@ -22,7 +22,6 @@ import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.greaterThan;
 import static org.hamcrest.Matchers.lessThanOrEqualTo;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.util.ArrayList;
@@ -41,6 +40,8 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.Waiter;
 import org.apache.hadoop.hbase.testclassification.MasterTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.junit.ClassRule;
@@ -186,6 +187,7 @@ public class TestRegionNormalizerWorkQueue {
 final RegionNormalizerWorkQueue queue = new 
RegionNormalizerWorkQueue<>();
 final ConcurrentLinkedQueue takeTimes = new 
ConcurrentLinkedQueue<>();
 final AtomicBoolean finished = new AtomicBoolean(false);
+final int count = 5;
 final Runnable consumer = () -> {
   try {
 while (!finished.get()) {
@@ -199,11 +201,12 @@ public class TestRegionNormalizerWorkQueue {
 
 CompletableFuture worker = CompletableFuture.runAsync(consumer);
 final long testStart = System.nanoTime();
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   Thread.sleep(10);
   queue.put(i);
 }
-
+// should have timing information for 5 calls to take.
+Waiter.waitFor(HBaseConfiguration.create(), 1000, () -> count == 
takeTimes.size());
 // set finished = true and pipe one more value in case the thread needs an 
extra pass through
 // the loop.
 finished.set(true);
@@ -211,9 +214,7 @@ public class TestRegionNormalizerWorkQueue {
 worker.get(1, TimeUnit.SECONDS);
 
 final Iterator times = takeTimes.iterator();
-assertTrue("should have timing information for at least 2 calls to take.",
-  takeTimes.size() >= 5);
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   assertThat(
 "Observations collected in takeTimes should increase by roughly 10ms 
every interval",
 times.next(), greaterThan(testStart + TimeUnit.MILLISECONDS.toNanos(i 
* 10)));



(hbase) branch branch-2.5 updated: HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

2024-01-21 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.5
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.5 by this push:
 new e497d12a6aa HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)
e497d12a6aa is described below

commit e497d12a6aa48c1196ff44641f85bb93d92cfbf6
Author: Xin Sun 
AuthorDate: Mon Jan 22 14:02:11 2024 +0800

HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

Signed-off-by: Duo Zhang 
---
 .../master/normalizer/TestRegionNormalizerWorkQueue.java| 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
index c6d14c19114..088df7e7376 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
@@ -22,7 +22,6 @@ import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.greaterThan;
 import static org.hamcrest.Matchers.lessThanOrEqualTo;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.util.ArrayList;
@@ -41,6 +40,8 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.Waiter;
 import org.apache.hadoop.hbase.testclassification.MasterTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.junit.ClassRule;
@@ -186,6 +187,7 @@ public class TestRegionNormalizerWorkQueue {
 final RegionNormalizerWorkQueue queue = new 
RegionNormalizerWorkQueue<>();
 final ConcurrentLinkedQueue takeTimes = new 
ConcurrentLinkedQueue<>();
 final AtomicBoolean finished = new AtomicBoolean(false);
+final int count = 5;
 final Runnable consumer = () -> {
   try {
 while (!finished.get()) {
@@ -199,11 +201,12 @@ public class TestRegionNormalizerWorkQueue {
 
 CompletableFuture worker = CompletableFuture.runAsync(consumer);
 final long testStart = System.nanoTime();
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   Thread.sleep(10);
   queue.put(i);
 }
-
+// should have timing information for 5 calls to take.
+Waiter.waitFor(HBaseConfiguration.create(), 1000, () -> count == 
takeTimes.size());
 // set finished = true and pipe one more value in case the thread needs an 
extra pass through
 // the loop.
 finished.set(true);
@@ -211,9 +214,7 @@ public class TestRegionNormalizerWorkQueue {
 worker.get(1, TimeUnit.SECONDS);
 
 final Iterator times = takeTimes.iterator();
-assertTrue("should have timing information for at least 2 calls to take.",
-  takeTimes.size() >= 5);
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   assertThat(
 "Observations collected in takeTimes should increase by roughly 10ms 
every interval",
 times.next(), greaterThan(testStart + TimeUnit.MILLISECONDS.toNanos(i 
* 10)));



(hbase) branch branch-3 updated: HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

2024-01-21 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new 468cc2e1309 HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)
468cc2e1309 is described below

commit 468cc2e130991388de3e14bd2a2837d53d78972d
Author: Xin Sun 
AuthorDate: Mon Jan 22 14:02:11 2024 +0800

HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

Signed-off-by: Duo Zhang 
---
 .../master/normalizer/TestRegionNormalizerWorkQueue.java| 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
index c6d14c19114..088df7e7376 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
@@ -22,7 +22,6 @@ import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.greaterThan;
 import static org.hamcrest.Matchers.lessThanOrEqualTo;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.util.ArrayList;
@@ -41,6 +40,8 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.Waiter;
 import org.apache.hadoop.hbase.testclassification.MasterTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.junit.ClassRule;
@@ -186,6 +187,7 @@ public class TestRegionNormalizerWorkQueue {
 final RegionNormalizerWorkQueue queue = new 
RegionNormalizerWorkQueue<>();
 final ConcurrentLinkedQueue takeTimes = new 
ConcurrentLinkedQueue<>();
 final AtomicBoolean finished = new AtomicBoolean(false);
+final int count = 5;
 final Runnable consumer = () -> {
   try {
 while (!finished.get()) {
@@ -199,11 +201,12 @@ public class TestRegionNormalizerWorkQueue {
 
 CompletableFuture worker = CompletableFuture.runAsync(consumer);
 final long testStart = System.nanoTime();
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   Thread.sleep(10);
   queue.put(i);
 }
-
+// should have timing information for 5 calls to take.
+Waiter.waitFor(HBaseConfiguration.create(), 1000, () -> count == 
takeTimes.size());
 // set finished = true and pipe one more value in case the thread needs an 
extra pass through
 // the loop.
 finished.set(true);
@@ -211,9 +214,7 @@ public class TestRegionNormalizerWorkQueue {
 worker.get(1, TimeUnit.SECONDS);
 
 final Iterator times = takeTimes.iterator();
-assertTrue("should have timing information for at least 2 calls to take.",
-  takeTimes.size() >= 5);
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   assertThat(
 "Observations collected in takeTimes should increase by roughly 10ms 
every interval",
 times.next(), greaterThan(testStart + TimeUnit.MILLISECONDS.toNanos(i 
* 10)));



(hbase) branch branch-2 updated: HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

2024-01-21 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new c9d85442c11 HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)
c9d85442c11 is described below

commit c9d85442c11454e6aeefc5a48e63568ce9eca636
Author: Xin Sun 
AuthorDate: Mon Jan 22 14:02:11 2024 +0800

HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

Signed-off-by: Duo Zhang 
---
 .../master/normalizer/TestRegionNormalizerWorkQueue.java| 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
index c6d14c19114..088df7e7376 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
@@ -22,7 +22,6 @@ import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.greaterThan;
 import static org.hamcrest.Matchers.lessThanOrEqualTo;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.util.ArrayList;
@@ -41,6 +40,8 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.Waiter;
 import org.apache.hadoop.hbase.testclassification.MasterTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.junit.ClassRule;
@@ -186,6 +187,7 @@ public class TestRegionNormalizerWorkQueue {
 final RegionNormalizerWorkQueue queue = new 
RegionNormalizerWorkQueue<>();
 final ConcurrentLinkedQueue takeTimes = new 
ConcurrentLinkedQueue<>();
 final AtomicBoolean finished = new AtomicBoolean(false);
+final int count = 5;
 final Runnable consumer = () -> {
   try {
 while (!finished.get()) {
@@ -199,11 +201,12 @@ public class TestRegionNormalizerWorkQueue {
 
 CompletableFuture worker = CompletableFuture.runAsync(consumer);
 final long testStart = System.nanoTime();
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   Thread.sleep(10);
   queue.put(i);
 }
-
+// should have timing information for 5 calls to take.
+Waiter.waitFor(HBaseConfiguration.create(), 1000, () -> count == 
takeTimes.size());
 // set finished = true and pipe one more value in case the thread needs an 
extra pass through
 // the loop.
 finished.set(true);
@@ -211,9 +214,7 @@ public class TestRegionNormalizerWorkQueue {
 worker.get(1, TimeUnit.SECONDS);
 
 final Iterator times = takeTimes.iterator();
-assertTrue("should have timing information for at least 2 calls to take.",
-  takeTimes.size() >= 5);
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   assertThat(
 "Observations collected in takeTimes should increase by roughly 10ms 
every interval",
 times.next(), greaterThan(testStart + TimeUnit.MILLISECONDS.toNanos(i 
* 10)));



(hbase) branch master updated: HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

2024-01-21 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 2f6b6ad HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)
2f6b6ad is described below

commit 2f6b6ad6f036a5604378cfa1b84d4a245fca
Author: Xin Sun 
AuthorDate: Mon Jan 22 14:02:11 2024 +0800

HBASE-28324 TestRegionNormalizerWorkQueue#testTake is flaky (#5643)

Signed-off-by: Duo Zhang 
---
 .../master/normalizer/TestRegionNormalizerWorkQueue.java| 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
index c6d14c19114..088df7e7376 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/normalizer/TestRegionNormalizerWorkQueue.java
@@ -22,7 +22,6 @@ import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.greaterThan;
 import static org.hamcrest.Matchers.lessThanOrEqualTo;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.util.ArrayList;
@@ -41,6 +40,8 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.Waiter;
 import org.apache.hadoop.hbase.testclassification.MasterTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.junit.ClassRule;
@@ -186,6 +187,7 @@ public class TestRegionNormalizerWorkQueue {
 final RegionNormalizerWorkQueue queue = new 
RegionNormalizerWorkQueue<>();
 final ConcurrentLinkedQueue takeTimes = new 
ConcurrentLinkedQueue<>();
 final AtomicBoolean finished = new AtomicBoolean(false);
+final int count = 5;
 final Runnable consumer = () -> {
   try {
 while (!finished.get()) {
@@ -199,11 +201,12 @@ public class TestRegionNormalizerWorkQueue {
 
 CompletableFuture worker = CompletableFuture.runAsync(consumer);
 final long testStart = System.nanoTime();
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   Thread.sleep(10);
   queue.put(i);
 }
-
+// should have timing information for 5 calls to take.
+Waiter.waitFor(HBaseConfiguration.create(), 1000, () -> count == 
takeTimes.size());
 // set finished = true and pipe one more value in case the thread needs an 
extra pass through
 // the loop.
 finished.set(true);
@@ -211,9 +214,7 @@ public class TestRegionNormalizerWorkQueue {
 worker.get(1, TimeUnit.SECONDS);
 
 final Iterator times = takeTimes.iterator();
-assertTrue("should have timing information for at least 2 calls to take.",
-  takeTimes.size() >= 5);
-for (int i = 0; i < 5; i++) {
+for (int i = 0; i < count; i++) {
   assertThat(
 "Observations collected in takeTimes should increase by roughly 10ms 
every interval",
 times.next(), greaterThan(testStart + TimeUnit.MILLISECONDS.toNanos(i 
* 10)));



[hbase] branch branch-2.4 updated: HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)

2022-11-14 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new f7b2efc7b7f HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)
f7b2efc7b7f is described below

commit f7b2efc7b7f8eb6a85bff1c5838161d38c89c0a4
Author: Xin Sun 
AuthorDate: Tue Nov 15 11:10:37 2022 +0800

HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)

Signed-off-by: Duo Zhang 
---
 .../access/SnapshotScannerHDFSAclController.java   |  4 ++--
 .../TestSnapshotScannerHDFSAclController.java  | 22 +-
 2 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
index f4fcfc41df0..d940bded435 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
@@ -556,7 +556,7 @@ public class SnapshotScannerHDFSAclController implements 
MasterCoprocessor, Mast
   if (aclTableInitialized) {
 return true;
   } else {
-LOG.warn("Skip set HDFS acls because acl table is not initialized when 
" + operation);
+LOG.warn("Skip set HDFS acls because acl table is not initialized when 
{}", operation);
   }
 }
 return false;
@@ -611,7 +611,7 @@ public class SnapshotScannerHDFSAclController implements 
MasterCoprocessor, Mast
   PermissionStorage.isGlobalEntry(entry)
 || (PermissionStorage.isNamespaceEntry(entry)
   && Bytes.equals(PermissionStorage.fromNamespaceEntry(entry), 
namespace))
-|| (!Bytes.equals(tableName.getName(), entry)
+|| (PermissionStorage.isTableEntry(entry) && 
!Bytes.equals(tableName.getName(), entry)
   && Bytes.equals(TableName.valueOf(entry).getNamespace(), 
namespace))
 ) {
   remove = false;
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
index ff6c6cd695f..af066f87f22 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
@@ -654,7 +654,7 @@ public class TestSnapshotScannerHDFSAclController {
 // delete table
 admin.disableTable(table);
 admin.deleteTable(table);
-// grantUser2 and grantUser3 should have data/ns acl
+// grantUser2 should have data/ns acl
 TestHDFSAclHelper.canUserScanSnapshot(TEST_UTIL, grantUser1, snapshot1, 
-1);
 TestHDFSAclHelper.canUserScanSnapshot(TEST_UTIL, grantUser2, snapshot1, 6);
 assertTrue(hasUserNamespaceHdfsAcl(aclTable, grantUserName2, namespace));
@@ -673,6 +673,26 @@ public class TestSnapshotScannerHDFSAclController {
 deleteTable(table);
   }
 
+  @Test
+  public void testDeleteTable2() throws Exception {
+String namespace1 = name.getMethodName() + "1";
+String namespace2 = name.getMethodName() + "2";
+String grantUser = name.getMethodName();
+TableName table = TableName.valueOf(namespace1, name.getMethodName());
+
+TestHDFSAclHelper.createTableAndPut(TEST_UTIL, table);
+// grant user table permission
+TestHDFSAclHelper.grantOnTable(TEST_UTIL, grantUser, table, READ);
+// grant user other namespace permission
+SecureTestUtil.grantOnNamespace(TEST_UTIL, grantUser, namespace2, READ);
+// delete table
+admin.disableTable(table);
+admin.deleteTable(table);
+// grantUser should have namespace2's acl
+assertFalse(hasUserTableHdfsAcl(aclTable, grantUser, table));
+assertTrue(hasUserNamespaceHdfsAcl(aclTable, grantUser, namespace2));
+  }
+
   @Test
   public void testDeleteNamespace() throws Exception {
 String grantUserName = name.getMethodName();
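
The production-side change in this patch adds a PermissionStorage.isTableEntry(entry) guard so that TableName.valueOf(entry) is only reached for entries that actually name a table; before the guard, a namespace permission entry surviving the earlier checks could hit that parse and raise the IllegalArgumentException in the subject, which is exactly the scenario the new testDeleteTable2 exercises. An isolated, illustrative form of the guarded condition follows; the wrapper class and method are hypothetical, while PermissionStorage, TableName and Bytes are the real HBase classes.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.security.access.PermissionStorage;
import org.apache.hadoop.hbase.util.Bytes;

final class AclEntryFilterSketch {
  /**
   * True when {@code entry} refers to a table other than {@code tableName}
   * inside the same {@code namespace}. Parsing only happens after the
   * isTableEntry() check, so namespace and global entries can no longer throw.
   */
  static boolean isOtherTableInSameNamespace(byte[] entry, TableName tableName, byte[] namespace) {
    return PermissionStorage.isTableEntry(entry)
      && !Bytes.equals(tableName.getName(), entry)
      && Bytes.equals(TableName.valueOf(entry).getNamespace(), namespace);
  }
}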



[hbase] branch branch-2 updated: HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)

2022-11-14 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 7bba9ae24e5 HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)
7bba9ae24e5 is described below

commit 7bba9ae24e57902136790bbcddd060f3038da5da
Author: Xin Sun 
AuthorDate: Tue Nov 15 11:10:37 2022 +0800

HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)

Signed-off-by: Duo Zhang 
---
 .../access/SnapshotScannerHDFSAclController.java   |  4 ++--
 .../TestSnapshotScannerHDFSAclController.java  | 22 +-
 2 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
index f4fcfc41df0..d940bded435 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
@@ -556,7 +556,7 @@ public class SnapshotScannerHDFSAclController implements 
MasterCoprocessor, Mast
   if (aclTableInitialized) {
 return true;
   } else {
-LOG.warn("Skip set HDFS acls because acl table is not initialized when 
" + operation);
+LOG.warn("Skip set HDFS acls because acl table is not initialized when 
{}", operation);
   }
 }
 return false;
@@ -611,7 +611,7 @@ public class SnapshotScannerHDFSAclController implements 
MasterCoprocessor, Mast
   PermissionStorage.isGlobalEntry(entry)
 || (PermissionStorage.isNamespaceEntry(entry)
   && Bytes.equals(PermissionStorage.fromNamespaceEntry(entry), 
namespace))
-|| (!Bytes.equals(tableName.getName(), entry)
+|| (PermissionStorage.isTableEntry(entry) && 
!Bytes.equals(tableName.getName(), entry)
   && Bytes.equals(TableName.valueOf(entry).getNamespace(), 
namespace))
 ) {
   remove = false;
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
index d286685e325..8369f840ea5 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
@@ -654,7 +654,7 @@ public class TestSnapshotScannerHDFSAclController {
 // delete table
 admin.disableTable(table);
 admin.deleteTable(table);
-// grantUser2 and grantUser3 should have data/ns acl
+// grantUser2 should have data/ns acl
 TestHDFSAclHelper.canUserScanSnapshot(TEST_UTIL, grantUser1, snapshot1, 
-1);
 TestHDFSAclHelper.canUserScanSnapshot(TEST_UTIL, grantUser2, snapshot1, 6);
 assertTrue(hasUserNamespaceHdfsAcl(aclTable, grantUserName2, namespace));
@@ -673,6 +673,26 @@ public class TestSnapshotScannerHDFSAclController {
 deleteTable(table);
   }
 
+  @Test
+  public void testDeleteTable2() throws Exception {
+String namespace1 = name.getMethodName() + "1";
+String namespace2 = name.getMethodName() + "2";
+String grantUser = name.getMethodName();
+TableName table = TableName.valueOf(namespace1, name.getMethodName());
+
+TestHDFSAclHelper.createTableAndPut(TEST_UTIL, table);
+// grant user table permission
+TestHDFSAclHelper.grantOnTable(TEST_UTIL, grantUser, table, READ);
+// grant user other namespace permission
+SecureTestUtil.grantOnNamespace(TEST_UTIL, grantUser, namespace2, READ);
+// delete table
+admin.disableTable(table);
+admin.deleteTable(table);
+// grantUser should have namespace2's acl
+assertFalse(hasUserTableHdfsAcl(aclTable, grantUser, table));
+assertTrue(hasUserNamespaceHdfsAcl(aclTable, grantUser, namespace2));
+  }
+
   @Test
   public void testDeleteNamespace() throws Exception {
 String grantUserName = name.getMethodName();



[hbase] branch branch-2.5 updated: HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)

2022-11-14 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.5
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.5 by this push:
 new 3880884e8e6 HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)
3880884e8e6 is described below

commit 3880884e8e6a962a318775b21fa9fdd9e181631a
Author: Xin Sun 
AuthorDate: Tue Nov 15 11:10:37 2022 +0800

HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)

Signed-off-by: Duo Zhang 
---
 .../access/SnapshotScannerHDFSAclController.java   |  4 ++--
 .../TestSnapshotScannerHDFSAclController.java  | 22 +-
 2 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
index f4fcfc41df0..d940bded435 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
@@ -556,7 +556,7 @@ public class SnapshotScannerHDFSAclController implements 
MasterCoprocessor, Mast
   if (aclTableInitialized) {
 return true;
   } else {
-LOG.warn("Skip set HDFS acls because acl table is not initialized when 
" + operation);
+LOG.warn("Skip set HDFS acls because acl table is not initialized when 
{}", operation);
   }
 }
 return false;
@@ -611,7 +611,7 @@ public class SnapshotScannerHDFSAclController implements 
MasterCoprocessor, Mast
   PermissionStorage.isGlobalEntry(entry)
 || (PermissionStorage.isNamespaceEntry(entry)
   && Bytes.equals(PermissionStorage.fromNamespaceEntry(entry), 
namespace))
-|| (!Bytes.equals(tableName.getName(), entry)
+|| (PermissionStorage.isTableEntry(entry) && 
!Bytes.equals(tableName.getName(), entry)
   && Bytes.equals(TableName.valueOf(entry).getNamespace(), 
namespace))
 ) {
   remove = false;
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
index d286685e325..8369f840ea5 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
@@ -654,7 +654,7 @@ public class TestSnapshotScannerHDFSAclController {
 // delete table
 admin.disableTable(table);
 admin.deleteTable(table);
-// grantUser2 and grantUser3 should have data/ns acl
+// grantUser2 should have data/ns acl
 TestHDFSAclHelper.canUserScanSnapshot(TEST_UTIL, grantUser1, snapshot1, 
-1);
 TestHDFSAclHelper.canUserScanSnapshot(TEST_UTIL, grantUser2, snapshot1, 6);
 assertTrue(hasUserNamespaceHdfsAcl(aclTable, grantUserName2, namespace));
@@ -673,6 +673,26 @@ public class TestSnapshotScannerHDFSAclController {
 deleteTable(table);
   }
 
+  @Test
+  public void testDeleteTable2() throws Exception {
+String namespace1 = name.getMethodName() + "1";
+String namespace2 = name.getMethodName() + "2";
+String grantUser = name.getMethodName();
+TableName table = TableName.valueOf(namespace1, name.getMethodName());
+
+TestHDFSAclHelper.createTableAndPut(TEST_UTIL, table);
+// grant user table permission
+TestHDFSAclHelper.grantOnTable(TEST_UTIL, grantUser, table, READ);
+// grant user other namespace permission
+SecureTestUtil.grantOnNamespace(TEST_UTIL, grantUser, namespace2, READ);
+// delete table
+admin.disableTable(table);
+admin.deleteTable(table);
+// grantUser should have namespace2's acl
+assertFalse(hasUserTableHdfsAcl(aclTable, grantUser, table));
+assertTrue(hasUserNamespaceHdfsAcl(aclTable, grantUser, namespace2));
+  }
+
   @Test
   public void testDeleteNamespace() throws Exception {
 String grantUserName = name.getMethodName();



[hbase] branch master updated (047f4e22e02 -> e5463e8f4ec)

2022-11-14 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


from 047f4e22e02 HBASE-27347 Port FileWatcher from ZK to autodetect keystore/truststore changes in TLS connections (#4869)
 add e5463e8f4ec HBASE-27469 IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table (#4865)

No new revisions were added by this update.

Summary of changes:
 .../access/SnapshotScannerHDFSAclController.java   |  4 ++--
 .../TestSnapshotScannerHDFSAclController.java  | 22 +-
 2 files changed, 23 insertions(+), 3 deletions(-)



[hbase] branch branch-2.5 updated: HBASE-26956 ExportSnapshot tool supports removing TTL (#4538)

2022-06-20 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.5
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.5 by this push:
 new 135348192c2 HBASE-26956 ExportSnapshot tool supports removing TTL (#4538)
135348192c2 is described below

commit 135348192c25e98597ce84fc380b058ee5a23342
Author: XinSun 
AuthorDate: Tue Jun 21 09:19:08 2022 +0800

HBASE-26956 ExportSnapshot tool supports removing TTL (#4538)

Signed-off-by: Duo Zhang 
---
 .../hadoop/hbase/snapshot/ExportSnapshot.java  | 23 +++--
 .../hadoop/hbase/snapshot/TestExportSnapshot.java  | 57 --
 .../hbase/snapshot/TestExportSnapshotAdjunct.java  |  4 +-
 .../snapshot/TestExportSnapshotV1NoCluster.java|  2 +-
 4 files changed, 74 insertions(+), 12 deletions(-)

diff --git a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
index d37202b0a50..e3a7b805121 100644
--- a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
+++ b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
@@ -152,6 +152,8 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   "Number of mappers to use during the copy (mapreduce.job.maps).");
 static final Option BANDWIDTH =
   new Option(null, "bandwidth", true, "Limit bandwidth to this value in 
MB/second.");
+static final Option RESET_TTL =
+  new Option(null, "reset-ttl", false, "Do not copy TTL for the snapshot");
   }
 
   // Export Map-Reduce Counters, to keep track of the progress
@@ -917,6 +919,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   private int bandwidthMB = Integer.MAX_VALUE;
   private int filesMode = 0;
   private int mappers = 0;
+  private boolean resetTtl = false;
 
   @Override
   protected void processOptions(CommandLine cmd) {
@@ -938,6 +941,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
 verifyChecksum = !cmd.hasOption(Options.NO_CHECKSUM_VERIFY.getLongOpt());
 verifyTarget = !cmd.hasOption(Options.NO_TARGET_VERIFY.getLongOpt());
 verifySource = !cmd.hasOption(Options.NO_SOURCE_VERIFY.getLongOpt());
+resetTtl = cmd.hasOption(Options.RESET_TTL.getLongOpt());
   }
 
   /**
@@ -1075,11 +1079,19 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   }
 }
 
-// Write a new .snapshotinfo if the target name is different from the 
source name
-if (!targetName.equals(snapshotName)) {
-  SnapshotDescription snapshotDesc = SnapshotDescriptionUtils
-.readSnapshotInfo(inputFs, 
snapshotDir).toBuilder().setName(targetName).build();
-  SnapshotDescriptionUtils.writeSnapshotInfo(snapshotDesc, 
initialOutputSnapshotDir, outputFs);
+// Write a new .snapshotinfo if the target name is different from the 
source name or we want to
+// reset TTL for target snapshot.
+if (!targetName.equals(snapshotName) || resetTtl) {
+  SnapshotDescription.Builder snapshotDescBuilder =
+SnapshotDescriptionUtils.readSnapshotInfo(inputFs, 
snapshotDir).toBuilder();
+  if (!targetName.equals(snapshotName)) {
+snapshotDescBuilder.setName(targetName);
+  }
+  if (resetTtl) {
+snapshotDescBuilder.setTtl(HConstants.DEFAULT_SNAPSHOT_TTL);
+  }
+  SnapshotDescriptionUtils.writeSnapshotInfo(snapshotDescBuilder.build(),
+initialOutputSnapshotDir, outputFs);
   if (filesUser != null || filesGroup != null) {
 outputFs.setOwner(
   new Path(initialOutputSnapshotDir, 
SnapshotDescriptionUtils.SNAPSHOTINFO_FILE), filesUser,
@@ -1155,6 +1167,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
 addOption(Options.CHMOD);
 addOption(Options.MAPPERS);
 addOption(Options.BANDWIDTH);
+addOption(Options.RESET_TTL);
   }
 
   public static void main(String[] args) {
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
index 4a1135b1b10..a9bd94f713b 100644
--- a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
@@ -24,9 +24,12 @@ import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Objects;
+import java.util.Optional;
 import java.util.Set;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
@@
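
The new option above lets an export drop the source snapshot's TTL: with --reset-ttl, ExportSnapshot rewrites the target .snapshotinfo with the default snapshot TTL even when the target keeps the source name. As an illustration, one way to drive the tool with the flag from Java via Hadoop's ToolRunner; the snapshot name and destination URI are placeholders, and the same arguments can be passed on the command line to hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
import org.apache.hadoop.util.ToolRunner;

public class ExportSnapshotResetTtlSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // --reset-ttl: do not copy the source TTL; the exported snapshot gets the default TTL.
    int exitCode = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
      "--snapshot", "my_snapshot",                // placeholder snapshot name
      "--copy-to", "hdfs://backup-cluster/hbase", // placeholder destination
      "--reset-ttl"
    });
    System.exit(exitCode);
  }
}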

[hbase] branch branch-2 updated: HBASE-26956 ExportSnapshot tool supports removing TTL (#4538)

2022-06-20 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new bdaa4486f36 HBASE-26956 ExportSnapshot tool supports removing TTL (#4538)
bdaa4486f36 is described below

commit bdaa4486f368503ab8df1388a68bee2b3fc096a9
Author: XinSun 
AuthorDate: Tue Jun 21 09:19:08 2022 +0800

HBASE-26956 ExportSnapshot tool supports removing TTL (#4538)

Signed-off-by: Duo Zhang 
---
 .../hadoop/hbase/snapshot/ExportSnapshot.java  | 23 +++--
 .../hadoop/hbase/snapshot/TestExportSnapshot.java  | 57 --
 .../hbase/snapshot/TestExportSnapshotAdjunct.java  |  4 +-
 .../snapshot/TestExportSnapshotV1NoCluster.java|  2 +-
 4 files changed, 74 insertions(+), 12 deletions(-)

diff --git a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
index d51eecab147..2e50762357e 100644
--- a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
+++ b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
@@ -153,6 +153,8 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   "Number of mappers to use during the copy (mapreduce.job.maps).");
 static final Option BANDWIDTH =
   new Option(null, "bandwidth", true, "Limit bandwidth to this value in 
MB/second.");
+static final Option RESET_TTL =
+  new Option(null, "reset-ttl", false, "Do not copy TTL for the snapshot");
   }
 
   // Export Map-Reduce Counters, to keep track of the progress
@@ -928,6 +930,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   private int bandwidthMB = Integer.MAX_VALUE;
   private int filesMode = 0;
   private int mappers = 0;
+  private boolean resetTtl = false;
 
   @Override
   protected void processOptions(CommandLine cmd) {
@@ -949,6 +952,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
 verifyChecksum = !cmd.hasOption(Options.NO_CHECKSUM_VERIFY.getLongOpt());
 verifyTarget = !cmd.hasOption(Options.NO_TARGET_VERIFY.getLongOpt());
 verifySource = !cmd.hasOption(Options.NO_SOURCE_VERIFY.getLongOpt());
+resetTtl = cmd.hasOption(Options.RESET_TTL.getLongOpt());
   }
 
   /**
@@ -1086,11 +1090,19 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   }
 }
 
-// Write a new .snapshotinfo if the target name is different from the 
source name
-if (!targetName.equals(snapshotName)) {
-  SnapshotDescription snapshotDesc = SnapshotDescriptionUtils
-.readSnapshotInfo(inputFs, 
snapshotDir).toBuilder().setName(targetName).build();
-  SnapshotDescriptionUtils.writeSnapshotInfo(snapshotDesc, 
initialOutputSnapshotDir, outputFs);
+// Write a new .snapshotinfo if the target name is different from the 
source name or we want to
+// reset TTL for target snapshot.
+if (!targetName.equals(snapshotName) || resetTtl) {
+  SnapshotDescription.Builder snapshotDescBuilder =
+SnapshotDescriptionUtils.readSnapshotInfo(inputFs, 
snapshotDir).toBuilder();
+  if (!targetName.equals(snapshotName)) {
+snapshotDescBuilder.setName(targetName);
+  }
+  if (resetTtl) {
+snapshotDescBuilder.setTtl(HConstants.DEFAULT_SNAPSHOT_TTL);
+  }
+  SnapshotDescriptionUtils.writeSnapshotInfo(snapshotDescBuilder.build(),
+initialOutputSnapshotDir, outputFs);
   if (filesUser != null || filesGroup != null) {
 outputFs.setOwner(
   new Path(initialOutputSnapshotDir, 
SnapshotDescriptionUtils.SNAPSHOTINFO_FILE), filesUser,
@@ -1166,6 +1178,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
 addOption(Options.CHMOD);
 addOption(Options.MAPPERS);
 addOption(Options.BANDWIDTH);
+addOption(Options.RESET_TTL);
   }
 
   public static void main(String[] args) {
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
index 965071e1954..5ce670c0cb0 100644
--- a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
@@ -24,9 +24,12 @@ import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Objects;
+import java.util.Optional;
 import java.util.Set;
 import java.util.stream.Collectors;
 import org.apache.hadoop.conf.Configuration;
@@

[hbase] branch master updated: HBASE-26956 ExportSnapshot tool supports removing TTL (#4351)

2022-06-15 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new b365748485 HBASE-26956 ExportSnapshot tool supports removing TTL (#4351)
b365748485 is described below

commit b3657484850f9fa9679f2186bf53e7df768f21c7
Author: XinSun 
AuthorDate: Wed Jun 15 15:04:17 2022 +0800

HBASE-26956 ExportSnapshot tool supports removing TTL (#4351)

Signed-off-by: Duo Zhang 
---
 .../hadoop/hbase/snapshot/ExportSnapshot.java  | 23 +++--
 .../hadoop/hbase/snapshot/TestExportSnapshot.java  | 56 --
 .../hbase/snapshot/TestExportSnapshotAdjunct.java  |  4 +-
 .../snapshot/TestExportSnapshotV1NoCluster.java|  2 +-
 4 files changed, 73 insertions(+), 12 deletions(-)

diff --git a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
index 80c5242a10..f2a8e00fea 100644
--- a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
+++ b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
@@ -154,6 +154,8 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   "Number of mappers to use during the copy (mapreduce.job.maps).");
 static final Option BANDWIDTH =
   new Option(null, "bandwidth", true, "Limit bandwidth to this value in 
MB/second.");
+static final Option RESET_TTL =
+  new Option(null, "reset-ttl", false, "Do not copy TTL for the snapshot");
   }
 
   // Export Map-Reduce Counters, to keep track of the progress
@@ -931,6 +933,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   private int bandwidthMB = Integer.MAX_VALUE;
   private int filesMode = 0;
   private int mappers = 0;
+  private boolean resetTtl = false;
 
   @Override
   protected void processOptions(CommandLine cmd) {
@@ -952,6 +955,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
 verifyChecksum = !cmd.hasOption(Options.NO_CHECKSUM_VERIFY.getLongOpt());
 verifyTarget = !cmd.hasOption(Options.NO_TARGET_VERIFY.getLongOpt());
 verifySource = !cmd.hasOption(Options.NO_SOURCE_VERIFY.getLongOpt());
+resetTtl = cmd.hasOption(Options.RESET_TTL.getLongOpt());
   }
 
   /**
@@ -1089,11 +1093,19 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
   }
 }
 
-// Write a new .snapshotinfo if the target name is different from the 
source name
-if (!targetName.equals(snapshotName)) {
-  SnapshotDescription snapshotDesc = SnapshotDescriptionUtils
-.readSnapshotInfo(inputFs, 
snapshotDir).toBuilder().setName(targetName).build();
-  SnapshotDescriptionUtils.writeSnapshotInfo(snapshotDesc, 
initialOutputSnapshotDir, outputFs);
+// Write a new .snapshotinfo if the target name is different from the 
source name or we want to
+// reset TTL for target snapshot.
+if (!targetName.equals(snapshotName) || resetTtl) {
+  SnapshotDescription.Builder snapshotDescBuilder =
+SnapshotDescriptionUtils.readSnapshotInfo(inputFs, 
snapshotDir).toBuilder();
+  if (!targetName.equals(snapshotName)) {
+snapshotDescBuilder.setName(targetName);
+  }
+  if (resetTtl) {
+snapshotDescBuilder.setTtl(HConstants.DEFAULT_SNAPSHOT_TTL);
+  }
+  SnapshotDescriptionUtils.writeSnapshotInfo(snapshotDescBuilder.build(),
+initialOutputSnapshotDir, outputFs);
   if (filesUser != null || filesGroup != null) {
 outputFs.setOwner(
   new Path(initialOutputSnapshotDir, 
SnapshotDescriptionUtils.SNAPSHOTINFO_FILE), filesUser,
@@ -1169,6 +1181,7 @@ public class ExportSnapshot extends AbstractHBaseTool 
implements Tool {
 addOption(Options.CHMOD);
 addOption(Options.MAPPERS);
 addOption(Options.BANDWIDTH);
+addOption(Options.RESET_TTL);
   }
 
   public static void main(String[] args) {
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
index c49bf21874..4dcadc755d 100644
--- a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
@@ -24,9 +24,12 @@ import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Objects;
+import java.util.Optional;
 import java.util.Set;
 import java.util.stream.Collectors;
 import org.apache.hadoop.conf.Configuration;
@@

[hbase] branch branch-2.4 updated: HBASE-26406 Can not add peer replicating to non-HBase (#3806)

2021-11-02 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new cc701e2  HBASE-26406 Can not add peer replicating to non-HBase (#3806)
cc701e2 is described below

commit cc701e2c2859f224a8329d0e4e6f95d256cc3ef0
Author: XinSun 
AuthorDate: Tue Nov 2 14:26:25 2021 +0800

HBASE-26406 Can not add peer replicating to non-HBase (#3806)

Signed-off-by: Rushabh Shah 
Signed-off-by: Duo Zhang 

(cherry picked from commit b9b7fec57f9de5407c63467780f454345963c2a0)
---
 .../master/replication/ReplicationPeerManager.java |  16 +-
 .../TestNonHBaseReplicationEndpoint.java   | 205 +
 2 files changed, 213 insertions(+), 8 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
index 9d8c9e1..f826d5d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
@@ -36,6 +36,7 @@ import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;
 import org.apache.hadoop.hbase.replication.BaseReplicationEndpoint;
+import org.apache.hadoop.hbase.replication.HBaseReplicationEndpoint;
 import org.apache.hadoop.hbase.replication.ReplicationEndpoint;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
@@ -46,7 +47,6 @@ import 
org.apache.hadoop.hbase.replication.ReplicationQueueInfo;
 import org.apache.hadoop.hbase.replication.ReplicationQueueStorage;
 import org.apache.hadoop.hbase.replication.ReplicationStorageFactory;
 import org.apache.hadoop.hbase.replication.ReplicationUtils;
-import 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint;
 import org.apache.hadoop.hbase.zookeeper.ZKClusterId;
 import org.apache.hadoop.hbase.zookeeper.ZKConfig;
 import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
@@ -268,13 +268,13 @@ public class ReplicationPeerManager {
   e);
   }
 }
-    // Default is HBaseInterClusterReplicationEndpoint and only it need to check cluster key
-    if (endpoint == null || endpoint instanceof HBaseInterClusterReplicationEndpoint) {
+    // Endpoints implementing HBaseReplicationEndpoint need to check cluster key
+    if (endpoint == null || endpoint instanceof HBaseReplicationEndpoint) {
       checkClusterKey(peerConfig.getClusterKey());
-    }
-    // Default is HBaseInterClusterReplicationEndpoint which cannot replicate to same cluster
-    if (endpoint == null || !endpoint.canReplicateToSameCluster()) {
-      checkClusterId(peerConfig.getClusterKey());
+      // Check if endpoint can replicate to the same cluster
+      if (endpoint == null || !endpoint.canReplicateToSameCluster()) {
+        checkSameClusterKey(peerConfig.getClusterKey());
+      }
 }
 
 if (peerConfig.replicateAllUserTables()) {
@@ -368,7 +368,7 @@ public class ReplicationPeerManager {
 }
   }
 
-  private void checkClusterId(String clusterKey) throws DoNotRetryIOException {
+  private void checkSameClusterKey(String clusterKey) throws DoNotRetryIOException {
 String peerClusterId = "";
 try {
   // Create the peer cluster config for get peer cluster id
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
new file mode 100644
index 000..b1a8bf5
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoo
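
For readers skimming the diff: after this change, checkPeerConfig requires a cluster key only when the peer's endpoint replicates to another HBase cluster (that is, it is an HBaseReplicationEndpoint), and the same-cluster check moves inside that branch. A minimal, standalone sketch of that control flow is below; the class and helper names are illustrative stand-ins, not the actual HBase API.

public class PeerConfigValidationSketch {

  /** Stand-in for ReplicationEndpoint. */
  interface Endpoint {
    boolean canReplicateToSameCluster();
  }

  /** Stand-in for HBaseReplicationEndpoint: endpoints that ship edits to another HBase cluster. */
  interface HBaseEndpoint extends Endpoint {
  }

  /**
   * A null endpoint means "use the default inter-cluster endpoint", so it is treated like an
   * HBase endpoint. A custom non-HBase endpoint (for example one writing to a message queue)
   * skips both checks, which is what this fix enables.
   */
  static void validate(Endpoint endpoint, String clusterKey) {
    if (endpoint == null || endpoint instanceof HBaseEndpoint) {
      checkClusterKey(clusterKey);
      if (endpoint == null || !endpoint.canReplicateToSameCluster()) {
        checkSameClusterKey(clusterKey);
      }
    }
  }

  private static void checkClusterKey(String clusterKey) {
    if (clusterKey == null || clusterKey.isEmpty()) {
      throw new IllegalArgumentException("Invalid cluster key: " + clusterKey);
    }
  }

  private static void checkSameClusterKey(String clusterKey) {
    // In HBase this compares the peer's cluster id against the local cluster id and rejects
    // self-replication; the sketch keeps only the call site.
  }
}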

[hbase] branch branch-2 updated: HBASE-26406 Can not add peer replicating to non-HBase (#3806)

2021-11-02 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 063e0e2  HBASE-26406 Can not add peer replicating to non-HBase (#3806)
063e0e2 is described below

commit 063e0e2e92025430fece0c9504e16124fbbba43b
Author: XinSun 
AuthorDate: Tue Nov 2 14:26:25 2021 +0800

HBASE-26406 Can not add peer replicating to non-HBase (#3806)

Signed-off-by: Rushabh Shah 
Signed-off-by: Duo Zhang 

(cherry picked from commit b9b7fec57f9de5407c63467780f454345963c2a0)
---
 .../master/replication/ReplicationPeerManager.java |  16 +-
 .../TestNonHBaseReplicationEndpoint.java   | 205 +
 2 files changed, 213 insertions(+), 8 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
index 9d8c9e1..f826d5d 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
@@ -36,6 +36,7 @@ import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;
 import org.apache.hadoop.hbase.replication.BaseReplicationEndpoint;
+import org.apache.hadoop.hbase.replication.HBaseReplicationEndpoint;
 import org.apache.hadoop.hbase.replication.ReplicationEndpoint;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
@@ -46,7 +47,6 @@ import 
org.apache.hadoop.hbase.replication.ReplicationQueueInfo;
 import org.apache.hadoop.hbase.replication.ReplicationQueueStorage;
 import org.apache.hadoop.hbase.replication.ReplicationStorageFactory;
 import org.apache.hadoop.hbase.replication.ReplicationUtils;
-import 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint;
 import org.apache.hadoop.hbase.zookeeper.ZKClusterId;
 import org.apache.hadoop.hbase.zookeeper.ZKConfig;
 import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
@@ -268,13 +268,13 @@ public class ReplicationPeerManager {
   e);
   }
 }
-    // Default is HBaseInterClusterReplicationEndpoint and only it need to check cluster key
-    if (endpoint == null || endpoint instanceof HBaseInterClusterReplicationEndpoint) {
+    // Endpoints implementing HBaseReplicationEndpoint need to check cluster key
+    if (endpoint == null || endpoint instanceof HBaseReplicationEndpoint) {
       checkClusterKey(peerConfig.getClusterKey());
-    }
-    // Default is HBaseInterClusterReplicationEndpoint which cannot replicate to same cluster
-    if (endpoint == null || !endpoint.canReplicateToSameCluster()) {
-      checkClusterId(peerConfig.getClusterKey());
+      // Check if endpoint can replicate to the same cluster
+      if (endpoint == null || !endpoint.canReplicateToSameCluster()) {
+        checkSameClusterKey(peerConfig.getClusterKey());
+      }
 }
 
 if (peerConfig.replicateAllUserTables()) {
@@ -368,7 +368,7 @@ public class ReplicationPeerManager {
 }
   }
 
-  private void checkClusterId(String clusterKey) throws DoNotRetryIOException {
+  private void checkSameClusterKey(String clusterKey) throws DoNotRetryIOException {
 String peerClusterId = "";
 try {
   // Create the peer cluster config for get peer cluster id
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
new file mode 100644
index 000..b1a8bf5
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoo

[hbase] branch master updated: HBASE-26406 Can not add peer replicating to non-HBase (#3806)

2021-11-02 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new b9b7fec  HBASE-26406 Can not add peer replicating to non-HBase (#3806)
b9b7fec is described below

commit b9b7fec57f9de5407c63467780f454345963c2a0
Author: XinSun 
AuthorDate: Tue Nov 2 14:26:25 2021 +0800

HBASE-26406 Can not add peer replicating to non-HBase (#3806)

Signed-off-by: Rushabh Shah 
Signed-off-by: Duo Zhang 
---
 .../master/replication/ReplicationPeerManager.java |  16 +-
 .../TestNonHBaseReplicationEndpoint.java   | 205 +
 2 files changed, 213 insertions(+), 8 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
index add5121..d9829a5 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ReplicationPeerManager.java
@@ -40,6 +40,7 @@ import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.replication.ReplicationPeerConfigUtil;
 import org.apache.hadoop.hbase.replication.BaseReplicationEndpoint;
+import org.apache.hadoop.hbase.replication.HBaseReplicationEndpoint;
 import org.apache.hadoop.hbase.replication.ReplicationEndpoint;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
@@ -51,7 +52,6 @@ import 
org.apache.hadoop.hbase.replication.ReplicationQueueStorage;
 import org.apache.hadoop.hbase.replication.ReplicationStorageFactory;
 import org.apache.hadoop.hbase.replication.ReplicationUtils;
 import org.apache.hadoop.hbase.replication.SyncReplicationState;
-import 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint;
 import org.apache.hadoop.hbase.zookeeper.ZKClusterId;
 import org.apache.hadoop.hbase.zookeeper.ZKConfig;
 import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
@@ -358,13 +358,13 @@ public class ReplicationPeerManager {
   e);
   }
 }
-    // Default is HBaseInterClusterReplicationEndpoint and only it need to check cluster key
-    if (endpoint == null || endpoint instanceof HBaseInterClusterReplicationEndpoint) {
+    // Endpoints implementing HBaseReplicationEndpoint need to check cluster key
+    if (endpoint == null || endpoint instanceof HBaseReplicationEndpoint) {
       checkClusterKey(peerConfig.getClusterKey());
-    }
-    // Default is HBaseInterClusterReplicationEndpoint which cannot replicate to same cluster
-    if (endpoint == null || !endpoint.canReplicateToSameCluster()) {
-      checkClusterId(peerConfig.getClusterKey());
+      // Check if endpoint can replicate to the same cluster
+      if (endpoint == null || !endpoint.canReplicateToSameCluster()) {
+        checkSameClusterKey(peerConfig.getClusterKey());
+      }
 }
 
 if (peerConfig.replicateAllUserTables()) {
@@ -510,7 +510,7 @@ public class ReplicationPeerManager {
 }
   }
 
-  private void checkClusterId(String clusterKey) throws DoNotRetryIOException {
+  private void checkSameClusterKey(String clusterKey) throws DoNotRetryIOException {
 String peerClusterId = "";
 try {
   // Create the peer cluster config for get peer cluster id
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
new file mode 100644
index 000..7b395ad
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestNonHBaseReplicationEndpoint.java
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication;
+
+import java.io.IOException;
+import java.util.ArrayList;
+imp
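
In practical terms, the fix means a peer backed by a custom, non-HBase endpoint can now be added without supplying a cluster key. A short usage sketch against the public Admin client API; the peer id and endpoint class name are hypothetical placeholders, and the builder calls reflect the 2.x client API rather than anything specific to this commit.

import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class AddNonHBasePeerSketch {
  static void addCustomPeer(Admin admin) throws Exception {
    ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
      // No setClusterKey(...) here: after this change the key is only validated for
      // endpoints that replicate to another HBase cluster.
      .setReplicationEndpointImpl("com.example.replication.MyKafkaReplicationEndpoint")
      .build();
    admin.addReplicationPeer("my_kafka_peer", peerConfig);
  }
}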

[hbase] branch branch-2.3 updated: HBASE-25773 TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky (#3651)

2021-09-02 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new 534070f  HBASE-25773 
TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky (#3651)
534070f is described below

commit 534070f5cdde974f1f2fbc1d7f837de89d943c6f
Author: XinSun 
AuthorDate: Wed Sep 1 18:43:25 2021 +0800

HBASE-25773 TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky 
(#3651)

Signed-off-by: Duo Zhang 

(cherry picked from commit 345d7256c812dd5fbdfe9f378b2884dd945c5da2)
---
 .../security/access/TestSnapshotScannerHDFSAclController.java | 8 
 .../security/access/TestSnapshotScannerHDFSAclController2.java| 8 
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
index 562e0ca..633d2c9 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
@@ -98,7 +98,11 @@ public class TestSnapshotScannerHDFSAclController {
   + SnapshotScannerHDFSAclController.class.getName());
 
 TEST_UTIL.startMiniCluster();
+    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
+      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
+    TEST_UTIL.waitFor(3, () -> coprocessor.checkInitialized("check initialized"));
 TEST_UTIL.waitTableAvailable(PermissionStorage.ACL_TABLE_NAME);
+
 admin = TEST_UTIL.getAdmin();
 rootDir = TEST_UTIL.getDefaultRootDirPath();
 FS = rootDir.getFileSystem(conf);
@@ -128,10 +132,6 @@ public class TestSnapshotScannerHDFSAclController {
   FS.setPermission(path, commonDirectoryPermission);
   path = path.getParent();
 }
-
-    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
-      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
-    TEST_UTIL.waitFor(120, () -> coprocessor.checkInitialized("check initialized"));
     aclTable = admin.getConnection().getTable(PermissionStorage.ACL_TABLE_NAME);
   }
 
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
index da6ac7e..a6e6c95 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
@@ -85,7 +85,11 @@ public class TestSnapshotScannerHDFSAclController2 {
   + SnapshotScannerHDFSAclController.class.getName());
 
 TEST_UTIL.startMiniCluster();
+    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
+      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
+    TEST_UTIL.waitFor(3, () -> coprocessor.checkInitialized("check initialized"));
 TEST_UTIL.waitTableAvailable(PermissionStorage.ACL_TABLE_NAME);
+
 admin = TEST_UTIL.getAdmin();
 Path rootDir = TEST_UTIL.getDefaultRootDirPath();
 FS = rootDir.getFileSystem(conf);
@@ -115,10 +119,6 @@ public class TestSnapshotScannerHDFSAclController2 {
   FS.setPermission(path, commonDirectoryPermission);
   path = path.getParent();
 }
-
-    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
-      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
-    TEST_UTIL.waitFor(120, () -> coprocessor.checkInitialized("check initialized"));
     aclTable = admin.getConnection().getTable(PermissionStorage.ACL_TABLE_NAME);
   }
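
The essence of the fix above is ordering: wait for the coprocessor to finish its setup immediately after the mini cluster starts, before waiting on the ACL table, instead of much later in setup. The wait itself is the standard HBaseTestingUtility.waitFor pattern; a generic sketch follows, in which the timeout value and the Initializable component are illustrative, not the ones used in the patch.

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class WaitForConditionSketch {
  /** Placeholder for whatever component the test needs to be ready. */
  interface Initializable {
    boolean isInitialized();
  }

  static void waitUntilReady(HBaseTestingUtility util, Initializable component) throws Exception {
    // The predicate is re-evaluated periodically until it returns true or the timeout
    // (in milliseconds) expires, in which case the wait fails the test.
    util.waitFor(30_000, () -> component.isInitialized());
  }
}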
 


[hbase] branch branch-2.4 updated: HBASE-25773 TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky (#3651)

2021-09-02 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 067f609  HBASE-25773 
TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky (#3651)
067f609 is described below

commit 067f609ec60a867fa1c19cab9747d0c9ce612f00
Author: XinSun 
AuthorDate: Wed Sep 1 18:43:25 2021 +0800

HBASE-25773 TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky 
(#3651)

Signed-off-by: Duo Zhang 

(cherry picked from commit 345d7256c812dd5fbdfe9f378b2884dd945c5da2)
---
 .../security/access/TestSnapshotScannerHDFSAclController.java  | 10 +-
 .../security/access/TestSnapshotScannerHDFSAclController2.java |  8 
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
index 3cb96aa..633d2c9 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
@@ -98,7 +98,11 @@ public class TestSnapshotScannerHDFSAclController {
   + SnapshotScannerHDFSAclController.class.getName());
 
 TEST_UTIL.startMiniCluster();
+    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
+      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
+    TEST_UTIL.waitFor(3, () -> coprocessor.checkInitialized("check initialized"));
 TEST_UTIL.waitTableAvailable(PermissionStorage.ACL_TABLE_NAME);
+
 admin = TEST_UTIL.getAdmin();
 rootDir = TEST_UTIL.getDefaultRootDirPath();
 FS = rootDir.getFileSystem(conf);
@@ -128,10 +132,6 @@ public class TestSnapshotScannerHDFSAclController {
   FS.setPermission(path, commonDirectoryPermission);
   path = path.getParent();
 }
-
-    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
-      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
-    TEST_UTIL.waitFor(120, () -> coprocessor.checkInitialized("check initialized"));
     aclTable = admin.getConnection().getTable(PermissionStorage.ACL_TABLE_NAME);
   }
 
@@ -143,7 +143,7 @@ public class TestSnapshotScannerHDFSAclController {
   private void snapshotAndWait(final String snapShotName, final TableName tableName)
     throws Exception{
 admin.snapshot(snapShotName, tableName);
-LOG.info("Sleep for one second, waiting for HDFS Acl setup");
+LOG.info("Sleep for three seconds, waiting for HDFS Acl setup");
 Threads.sleep(3000);
   }
 
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
index da6ac7e..a6e6c95 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
@@ -85,7 +85,11 @@ public class TestSnapshotScannerHDFSAclController2 {
   + SnapshotScannerHDFSAclController.class.getName());
 
 TEST_UTIL.startMiniCluster();
+    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
+      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
+    TEST_UTIL.waitFor(3, () -> coprocessor.checkInitialized("check initialized"));
 TEST_UTIL.waitTableAvailable(PermissionStorage.ACL_TABLE_NAME);
+
 admin = TEST_UTIL.getAdmin();
 Path rootDir = TEST_UTIL.getDefaultRootDirPath();
 FS = rootDir.getFileSystem(conf);
@@ -115,10 +119,6 @@ public class TestSnapshotScannerHDFSAclController2 {
   FS.setPermission(path, commonDirectoryPermission);
   path = path.getParent();
 }
-
-    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
-      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
-    TEST_UTIL.waitFor(120, () -> coprocessor.checkInitialized("check initialized"));
     aclTable = admin.getConnection().getTable(PermissionStorage.ACL_TABLE_NAME);
   }
 


[hbase] branch branch-2 updated: HBASE-25773 TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky (#3651)

2021-09-02 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new ccd9026  HBASE-25773 
TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky (#3651)
ccd9026 is described below

commit ccd90269d2c0d82ae32c75d7853ec9c9ca3da66e
Author: XinSun 
AuthorDate: Wed Sep 1 18:43:25 2021 +0800

HBASE-25773 TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky 
(#3651)

Signed-off-by: Duo Zhang 
(cherry picked from commit 345d7256c812dd5fbdfe9f378b2884dd945c5da2)
---
 .../security/access/TestSnapshotScannerHDFSAclController.java | 11 ---
 .../access/TestSnapshotScannerHDFSAclController2.java | 11 ---
 2 files changed, 8 insertions(+), 14 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
index 6fa80b3..e78cd36 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController.java
@@ -44,7 +44,6 @@ import org.apache.hadoop.hbase.client.TableDescriptor;
 import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
 import org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
-import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.SecurityTests;
@@ -99,9 +98,11 @@ public class TestSnapshotScannerHDFSAclController {
   + SnapshotScannerHDFSAclController.class.getName());
 
 TEST_UTIL.startMiniCluster();
-    ProcedureTestingUtility.waitAllProcedures(
-      TEST_UTIL.getMiniHBaseCluster().getMaster().getMasterProcedureExecutor());
+    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
+      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
+    TEST_UTIL.waitFor(3, () -> coprocessor.checkInitialized("check initialized"));
 TEST_UTIL.waitTableAvailable(PermissionStorage.ACL_TABLE_NAME);
+
 admin = TEST_UTIL.getAdmin();
 rootDir = TEST_UTIL.getDefaultRootDirPath();
 FS = rootDir.getFileSystem(conf);
@@ -131,10 +132,6 @@ public class TestSnapshotScannerHDFSAclController {
   FS.setPermission(path, commonDirectoryPermission);
   path = path.getParent();
 }
-
-    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
-      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
-    TEST_UTIL.waitFor(120, () -> coprocessor.checkInitialized("check initialized"));
     aclTable = admin.getConnection().getTable(PermissionStorage.ACL_TABLE_NAME);
   }
 
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
index 7ef8e3e..a6e6c95 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestSnapshotScannerHDFSAclController2.java
@@ -32,7 +32,6 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-import org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.SecurityTests;
@@ -86,9 +85,11 @@ public class TestSnapshotScannerHDFSAclController2 {
   + SnapshotScannerHDFSAclController.class.getName());
 
 TEST_UTIL.startMiniCluster();
-    ProcedureTestingUtility.waitAllProcedures(
-      TEST_UTIL.getMiniHBaseCluster().getMaster().getMasterProcedureExecutor());
+    SnapshotScannerHDFSAclController coprocessor = TEST_UTIL.getHBaseCluster().getMaster()
+      .getMasterCoprocessorHost().findCoprocessor(SnapshotScannerHDFSAclController.class);
+    TEST_UTIL.waitFor(3, () -> coprocessor.checkInitialized("check initialized"));
 TEST_UTIL.waitTableAvailable(PermissionStorage.ACL_TABLE_NAME);
+
 admin = TEST_UTIL.getAdmin();
 Path rootDir = TEST_UTIL.getDefaultRootDirPath();
 FS = rootDir.getFileSystem(c

[hbase] branch master updated (36884c3 -> 345d725)

2021-09-01 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from 36884c3  HBASE-26210 HBase Write should be doomed to hang when cell 
size exceeds InmemoryFlushSize for CompactingMemStore (#3604)
 add 345d725  HBASE-25773 
TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky (#3651)

No new revisions were added by this update.

Summary of changes:
 .../security/access/TestSnapshotScannerHDFSAclController.java | 11 ---
 .../access/TestSnapshotScannerHDFSAclController2.java | 11 ---
 2 files changed, 8 insertions(+), 14 deletions(-)


[hbase] branch HBASE-24666 updated: HBASE-26194 Introduce a ReplicationServerSourceManager to simplify HReplicationServer (#3584)

2021-08-17 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/HBASE-24666 by this push:
 new afd8b9b  HBASE-26194 Introduce a ReplicationServerSourceManager to 
simplify HReplicationServer (#3584)
afd8b9b is described below

commit afd8b9bbfa14cddc797978f94a996bd59b23e279
Author: XinSun 
AuthorDate: Tue Aug 17 16:28:18 2021 +0800

HBASE-26194 Introduce a ReplicationServerSourceManager to simplify 
HReplicationServer (#3584)

Signed-off-by: stack 
---
 .../apache/hadoop/hbase/security/SecurityInfo.java |   7 +
 .../hadoop/hbase/security/SecurityConstants.java   |   4 +
 .../hbase/replication/ReplicationQueueInfo.java|  15 ++
 .../hbase/replication/HReplicationServer.java  | 122 ++-
 .../replication/regionserver/MetricsSource.java|  13 ++
 .../ReplicationServerSourceManager.java| 234 +
 .../TestReplicationServerSourceManager.java| 139 
 7 files changed, 427 insertions(+), 107 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecurityInfo.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecurityInfo.java
index f5f6922..4740d9d 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecurityInfo.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecurityInfo.java
@@ -27,6 +27,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationServerProtos;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationServerStatusProtos;
 
 /**
  * Maps RPC protocol interfaces to required configuration
@@ -51,6 +53,11 @@ public class SecurityInfo {
 new SecurityInfo(SecurityConstants.MASTER_KRB_PRINCIPAL, 
Kind.HBASE_AUTH_TOKEN));
 infos.put(MasterProtos.ClientMetaService.getDescriptor().getName(),
 new SecurityInfo(SecurityConstants.MASTER_KRB_PRINCIPAL, 
Kind.HBASE_AUTH_TOKEN));
+
infos.put(ReplicationServerStatusProtos.ReplicationServerStatusService.getDescriptor()
+.getName(),
+  new SecurityInfo(SecurityConstants.MASTER_KRB_PRINCIPAL, 
Kind.HBASE_AUTH_TOKEN));
+
infos.put(ReplicationServerProtos.ReplicationServerService.getDescriptor().getName(),
+  new SecurityInfo(SecurityConstants.REPLICATION_SERVER_KRB_PRINCIPAL, 
Kind.HBASE_AUTH_TOKEN));
 // NOTE: IF ADDING A NEW SERVICE, BE SURE TO UPDATE HBasePolicyProvider 
ALSO ELSE
 // new Service will not be found when all is Kerberized
   }
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/security/SecurityConstants.java
 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/security/SecurityConstants.java
index 3e387e8..d0e13a3 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/security/SecurityConstants.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/security/SecurityConstants.java
@@ -34,6 +34,10 @@ public final class SecurityConstants {
   public static final String MASTER_KRB_KEYTAB_FILE = 
"hbase.master.keytab.file";
   public static final String REGIONSERVER_KRB_PRINCIPAL = 
"hbase.regionserver.kerberos.principal";
   public static final String REGIONSERVER_KRB_KEYTAB_FILE = 
"hbase.regionserver.keytab.file";
+  public static final String REPLICATION_SERVER_KRB_PRINCIPAL =
+"hbase.replication.server.kerberos.principal";
+  public static final String REPLICATION_SERVER_KRB_KEYTAB_FILE =
+"hbase.replication.server.keytab.file";
 
   /**
* This config is for experts: don't set its value unless you really know 
what you are doing.
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java
index 49a2153..9c1c03a 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.replication;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
+import java.util.Objects;
 
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.yetus.audience.InterfaceAudience;
@@ -147,4 +148,18 @@ public class ReplicationQueueInfo {
   public boolean isQueueRecovered() {
 return queueRecovered;
   }
+
+  @Override
+  public boolean equals(Object o) {
+if (o instanceof R
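
The archive clips the message just as the new equals() begins. A plausible shape for an equals/hashCode pair over the two fields visible in this diff (owner and queueId) is sketched below; it illustrates the pattern only and is not the committed code.

import java.util.Objects;

public final class QueueInfoSketch {
  private final String owner;   // the commit uses ServerName; a String keeps the sketch self-contained
  private final String queueId;

  QueueInfoSketch(String owner, String queueId) {
    this.owner = owner;
    this.queueId = queueId;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof QueueInfoSketch)) {
      return false;
    }
    QueueInfoSketch other = (QueueInfoSketch) o;
    return Objects.equals(owner, other.owner) && Objects.equals(queueId, other.queueId);
  }

  @Override
  public int hashCode() {
    // Keep hashCode consistent with equals so instances can serve as map/set keys.
    return Objects.hash(owner, queueId);
  }
}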

[hbase] branch HBASE-24666 updated: HBASE-26084 Add owner of replication queue for ReplicationQueueInfo (#3477)

2021-08-12 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/HBASE-24666 by this push:
 new fd2f3d1  HBASE-26084 Add owner of replication queue for 
ReplicationQueueInfo (#3477)
fd2f3d1 is described below

commit fd2f3d1abdcb79b0fc037ff92e65459660c5b502
Author: XinSun 
AuthorDate: Thu Aug 12 17:14:35 2021 +0800

HBASE-26084 Add owner of replication queue for ReplicationQueueInfo (#3477)

Signed-off-by: stack 
---
 .../hbase/replication/ReplicationQueueInfo.java| 24 +++-
 .../hadoop/hbase/replication/ReplicationUtils.java |  3 +-
 .../replication/ZKReplicationQueueStorage.java |  2 +-
 .../master/replication/ReplicationPeerManager.java |  3 +-
 .../hbase/replication/HReplicationServer.java  | 22 +++
 .../regionserver/DumpReplicationQueues.java|  2 +-
 .../regionserver/RecoveredReplicationSource.java   |  7 +++--
 .../regionserver/ReplicationSource.java| 32 --
 .../regionserver/ReplicationSourceFactory.java |  3 +-
 .../regionserver/ReplicationSourceInterface.java   | 17 +---
 .../regionserver/ReplicationSourceManager.java | 11 
 .../hadoop/hbase/util/hbck/ReplicationChecker.java | 15 --
 .../hbase/replication/ReplicationSourceDummy.java  | 20 ++
 .../regionserver/TestReplicationSource.java| 22 +--
 .../regionserver/TestReplicationSourceManager.java | 13 +
 .../TestReplicationSourceManagerZkImpl.java|  3 +-
 16 files changed, 113 insertions(+), 86 deletions(-)

diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java
index d39a37e..49a2153 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java
@@ -36,6 +36,7 @@ import org.slf4j.LoggerFactory;
 public class ReplicationQueueInfo {
   private static final Logger LOG = 
LoggerFactory.getLogger(ReplicationQueueInfo.class);
 
+  private final ServerName owner;
   private final String peerId;
   private final String queueId;
   private boolean queueRecovered;
@@ -46,7 +47,8 @@ public class ReplicationQueueInfo {
* The passed queueId will be either the id of the peer or the handling 
story of that queue
* in the form of id-servername-*
*/
-  public ReplicationQueueInfo(String queueId) {
+  public ReplicationQueueInfo(ServerName owner, String queueId) {
+this.owner = owner;
 this.queueId = queueId;
 String[] parts = queueId.split("-", 2);
 this.queueRecovered = parts.length != 1;
@@ -58,6 +60,22 @@ public class ReplicationQueueInfo {
   }
 
   /**
+   * A util method to parse the peerId from queueId.
+   */
+  public static String parsePeerId(String queueId) {
+String[] parts = queueId.split("-", 2);
+return parts.length != 1 ? parts[0] : queueId;
+  }
+
+  /**
+   * A util method to check whether a queue is recovered.
+   */
+  public static boolean isQueueRecovered(String queueId) {
+String[] parts = queueId.split("-", 2);
+return parts.length != 1;
+  }
+
+  /**
* Parse dead server names from queue id. servername can contain "-" such as
* "ip-10-46-221-101.ec2.internal", so we need skip some "-" during parsing 
for the following
* cases: 2-ip-10-46-221-101.ec2.internal,52170,1364333181125-server 
name>-...
@@ -114,6 +132,10 @@ public class ReplicationQueueInfo {
 return Collections.unmodifiableList(this.deadRegionServers);
   }
 
+  public ServerName getOwner() {
+return this.owner;
+  }
+
   public String getPeerId() {
 return this.peerId;
   }
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
index 7bafbc2..7dbfe41a 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationUtils.java
@@ -86,8 +86,7 @@ public final class ReplicationUtils {
 for (ServerName replicator : queueStorage.getListOfReplicators()) {
   List queueIds = queueStorage.getAllQueues(replicator);
   for (String queueId : queueIds) {
-ReplicationQueueInfo queueInfo = new ReplicationQueueInfo(queueId);
-if (queueInfo.getPeerId().equals(peerId)) {
+if (ReplicationQueueInfo.parsePeerId(queueId).equals(peerId)) {
   queueStorage.removeQueue(replicator, queueId);
 }
   }
diff --git 
a/hbas
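
The two static helpers added to ReplicationQueueInfo let call sites such as the ReplicationUtils hunk above inspect a queue id without constructing an owner-aware instance. A small usage sketch, using the queue-id forms described in the class javadoc; it only compiles against a branch carrying this change.

import org.apache.hadoop.hbase.replication.ReplicationQueueInfo;

public class QueueIdParsingSketch {
  public static void main(String[] args) {
    String normalQueue = "2";
    String recoveredQueue = "2-ip-10-46-221-101.ec2.internal,52170,1364333181125";

    // parsePeerId keeps everything before the first '-', so both ids resolve to peer "2".
    System.out.println(ReplicationQueueInfo.parsePeerId(normalQueue));      // 2
    System.out.println(ReplicationQueueInfo.parsePeerId(recoveredQueue));   // 2

    // isQueueRecovered is true only when a dead-server suffix is present.
    System.out.println(ReplicationQueueInfo.isQueueRecovered(normalQueue));     // false
    System.out.println(ReplicationQueueInfo.isQueueRecovered(recoveredQueue));  // true
  }
}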

[hbase] 11/12: HBASE-24737 Find a way to resolve WALFileLengthProvider#getLogFileSizeIfBeingWritten problem (#3045)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit ff16870505d082fe5075cb31e60d6ec045cf2ab6
Author: XinSun 
AuthorDate: Tue Apr 27 11:13:15 2021 +0800

HBASE-24737 Find a way to resolve 
WALFileLengthProvider#getLogFileSizeIfBeingWritten problem (#3045)

Signed-off-by: Duo Zhang 
---
 .../src/main/protobuf/server/region/Admin.proto|  12 ++
 .../hbase/client/AsyncRegionServerAdmin.java   |   8 ++
 .../hadoop/hbase/regionserver/HRegionServer.java   |   2 +-
 .../hadoop/hbase/regionserver/RSRpcServices.java   |  24 
 .../hbase/replication/HReplicationServer.java  |  11 +-
 .../regionserver/WALFileLengthProvider.java|   3 +-
 .../RemoteWALFileLengthProvider.java   |  73 
 .../org/apache/hadoop/hbase/wal/WALProvider.java   |  15 ++-
 .../hadoop/hbase/master/MockRegionServer.java  |   7 ++
 .../TestRemoteWALFileLengthProvider.java   | 130 +
 10 files changed, 280 insertions(+), 5 deletions(-)

diff --git a/hbase-protocol-shaded/src/main/protobuf/server/region/Admin.proto 
b/hbase-protocol-shaded/src/main/protobuf/server/region/Admin.proto
index 0667292..693a809 100644
--- a/hbase-protocol-shaded/src/main/protobuf/server/region/Admin.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/server/region/Admin.proto
@@ -328,6 +328,15 @@ message ClearSlowLogResponses {
   required bool is_cleaned = 1;
 }
 
+message GetLogFileSizeIfBeingWrittenRequest {
+  required string wal_path = 1;
+}
+
+message GetLogFileSizeIfBeingWrittenResponse {
+  required bool is_being_written = 1;
+  optional uint64 length = 2;
+}
+
 service AdminService {
   rpc GetRegionInfo(GetRegionInfoRequest)
 returns(GetRegionInfoResponse);
@@ -399,4 +408,7 @@ service AdminService {
   rpc GetLogEntries(LogRequest)
 returns(LogEntry);
 
+  rpc GetLogFileSizeIfBeingWritten(GetLogFileSizeIfBeingWrittenRequest)
+returns(GetLogFileSizeIfBeingWrittenResponse);
+
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
index 8ff869f..f18d894 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
@@ -42,6 +42,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.ExecuteProc
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.ExecuteProceduresResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.FlushRegionRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.FlushRegionResponse;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetLogFileSizeIfBeingWrittenRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetLogFileSizeIfBeingWrittenResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetOnlineRegionRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetOnlineRegionResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetRegionInfoRequest;
@@ -216,4 +218,10 @@ public class AsyncRegionServerAdmin {
   ExecuteProceduresRequest request) {
 return call((stub, controller, done) -> stub.executeProcedures(controller, 
request, done));
   }
+
+  public CompletableFuture 
getLogFileSizeIfBeingWritten(
+GetLogFileSizeIfBeingWrittenRequest request) {
+return call((stub, controller, done) ->
+  stub.getLogFileSizeIfBeingWritten(controller, request, done));
+  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index c00a8b7..a5eb4e7 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -2323,7 +2323,7 @@ public class HRegionServer extends Thread implements
 return walRoller;
   }
 
-  WALFactory getWalFactory() {
+  public WALFactory getWalFactory() {
 return walFactory;
   }
 
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index 91bf9cb..edc33d7 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -37,6 +37,7 @@ import java.util.Map;
 import java.util.Map.Entry;
 import java.util.NavigableMap;
 import java.util.Optional;
+import java.util.OptionalLong;
 import java.util.Set;
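
The new RPC's response carries an is_being_written flag plus an optional length, which maps naturally onto the OptionalLong shape that WALFileLengthProvider returns. A sketch of that client-side mapping, built only from the proto messages and the AsyncRegionServerAdmin method shown above: the generated accessor names follow standard protoc conventions and the response's generic type is assumed, so treat this as illustrative rather than the RemoteWALFileLengthProvider the commit adds.

import java.util.OptionalLong;
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin;
import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetLogFileSizeIfBeingWrittenRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetLogFileSizeIfBeingWrittenResponse;

public class RemoteWalLengthSketch {
  static CompletableFuture<OptionalLong> lengthIfBeingWritten(AsyncRegionServerAdmin admin,
      String walPath) {
    GetLogFileSizeIfBeingWrittenRequest request =
      GetLogFileSizeIfBeingWrittenRequest.newBuilder().setWalPath(walPath).build();
    return admin.getLogFileSizeIfBeingWritten(request)
      .thenApply((GetLogFileSizeIfBeingWrittenResponse response) ->
        // Only expose a length while the owning region server still has the WAL open for write.
        response.getIsBeingWritten() ? OptionalLong.of(response.getLength()) : OptionalLong.empty());
  }
}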
 

[hbase] 08/12: HBASE-24999 Master manages ReplicationServers (#2579)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 6d7bd0a6b40a685c079f1432d11258f191bc8b2b
Author: XinSun 
AuthorDate: Wed Oct 28 18:59:57 2020 +0800

HBASE-24999 Master manages ReplicationServers (#2579)

Signed-off-by: Guanghao Zhang 
---
 .../server/master/ReplicationServerStatus.proto|  34 
 .../org/apache/hadoop/hbase/master/HMaster.java|  10 +
 .../hadoop/hbase/master/MasterRpcServices.java |  37 +++-
 .../apache/hadoop/hbase/master/MasterServices.java |   5 +
 .../hbase/master/ReplicationServerManager.java | 204 
 .../replication/HBaseReplicationEndpoint.java  | 148 ++
 .../hbase/replication/HReplicationServer.java  | 214 -
 .../HBaseInterClusterReplicationEndpoint.java  |   1 -
 .../regionserver/ReplicationSyncUp.java|   4 +-
 .../hbase/master/MockNoopMasterServices.java   |   5 +
 .../hbase/replication/TestReplicationBase.java |   2 +
 .../hbase/replication/TestReplicationServer.java   |  57 +-
 12 files changed, 619 insertions(+), 102 deletions(-)

diff --git 
a/hbase-protocol-shaded/src/main/protobuf/server/master/ReplicationServerStatus.proto
 
b/hbase-protocol-shaded/src/main/protobuf/server/master/ReplicationServerStatus.proto
new file mode 100644
index 000..d39a043
--- /dev/null
+++ 
b/hbase-protocol-shaded/src/main/protobuf/server/master/ReplicationServerStatus.proto
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+syntax = "proto2";
+
+package hbase.pb;
+
+option java_package = "org.apache.hadoop.hbase.shaded.protobuf.generated";
+option java_outer_classname = "ReplicationServerStatusProtos";
+option java_generic_services = true;
+option java_generate_equals_and_hash = true;
+option optimize_for = SPEED;
+
+import "server/master/RegionServerStatus.proto";
+
+service ReplicationServerStatusService {
+
+  rpc ReplicationServerReport(RegionServerReportRequest)
+  returns(RegionServerReportResponse);
+}
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index 903f392..8977ad5 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -307,6 +307,8 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   // manager of assignment nodes in zookeeper
   private AssignmentManager assignmentManager;
 
+  // server manager to deal with replication server info
+  private ReplicationServerManager replicationServerManager;
 
   /**
* Cache for the meta region replica's locations. Also tracks their changes 
to avoid stale
@@ -873,6 +875,8 @@ public class HMaster extends HRegionServer implements 
MasterServices {
 .collect(Collectors.toList());
 this.assignmentManager.setupRIT(ritList);
 
+this.replicationServerManager = new ReplicationServerManager(this);
+
 // Start RegionServerTracker with listing of servers found with exiting 
SCPs -- these should
 // be registered in the deadServers set -- and with the list of 
servernames out on the
 // filesystem that COULD BE 'alive' (we'll schedule SCPs for each and let 
SCP figure it out).
@@ -1037,6 +1041,7 @@ public class HMaster extends HRegionServer implements 
MasterServices {
 this.hbckChore = new HbckChore(this);
 getChoreService().scheduleChore(hbckChore);
 this.serverManager.startChore();
+this.replicationServerManager.startChore();
 
 // Only for rolling upgrade, where we need to migrate the data in 
namespace table to meta table.
 if (!waitForNamespaceOnline()) {
@@ -1361,6 +1366,11 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   }
 
   @Override
+  public ReplicationServerManager getReplicationServerManager() {
+return this.replicationServerManager;
+  }
+
+  @Override
   public MasterFileSystem getMasterFileSystem() {
 return this.file
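
The new ReplicationServerManager follows the same pattern the master already applies to region servers: replication servers report in periodically, the manager records the report, and a chore expires servers that go silent. A plain-Java sketch of that bookkeeping; the names, timeout, and data structure are illustrative, not the committed class.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReplicationServerTrackerSketch {
  private final Map<String, Long> lastReportTime = new ConcurrentHashMap<>();
  private final long expirationMs;

  public ReplicationServerTrackerSketch(long expirationMs) {
    this.expirationMs = expirationMs;
  }

  /** Called whenever a replication server report RPC arrives. */
  public void serverReport(String serverName) {
    lastReportTime.put(serverName, System.currentTimeMillis());
  }

  /** Mirrors startChore(): schedule a periodic scan that drops servers that stopped reporting. */
  public ScheduledExecutorService startChore(long periodMs) {
    ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
    pool.scheduleAtFixedRate(() -> {
      long now = System.currentTimeMillis();
      lastReportTime.entrySet().removeIf(e -> now - e.getValue() > expirationMs);
    }, periodMs, periodMs, TimeUnit.MILLISECONDS);
    return pool;
  }
}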

[hbase] 05/12: HBASE-24982 Disassemble the method replicateWALEntry from AdminService to a new interface ReplicationServerService (#2360)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit d2588a1dc196463d5cfd11146f14d018ab1c6efd
Author: XinSun 
AuthorDate: Wed Sep 9 15:00:37 2020 +0800

HBASE-24982 Disassemble the method replicateWALEntry from AdminService to a 
new interface ReplicationServerService (#2360)

Signed-off-by: Wellington Chevreuil 
---
 .../hadoop/hbase/client/AsyncConnectionImpl.java   |  16 ++
 .../server/replication/ReplicationServer.proto |  32 
 .../hadoop/hbase/replication/ReplicationUtils.java |  19 ++
 .../hbase/client/AsyncClusterConnection.java   |   5 +
 .../hbase/client/AsyncClusterConnectionImpl.java   |   5 +
 .../hbase/client/AsyncReplicationServerAdmin.java  |  80 +
 .../hbase/protobuf/ReplicationProtobufUtil.java|  18 ++
 .../hadoop/hbase/regionserver/RSRpcServices.java   |   4 +-
 .../replication/HBaseReplicationEndpoint.java  |  57 +-
 .../hbase/replication/HReplicationServer.java  |   2 -
 .../replication/ReplicationServerRpcServices.java  | 200 +
 .../HBaseInterClusterReplicationEndpoint.java  |   7 +-
 .../hbase/client/DummyAsyncClusterConnection.java  |   5 +
 .../replication/TestHBaseReplicationEndpoint.java  |  17 +-
 .../hbase/replication/TestReplicationServer.java   |  43 -
 15 files changed, 284 insertions(+), 226 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
index 25a98ed..840da27 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
@@ -68,6 +68,7 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminServic
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationServerProtos.ReplicationServerService;
 
 /**
  * The implementation of AsyncConnection.
@@ -105,6 +106,8 @@ class AsyncConnectionImpl implements AsyncConnection {
   private final ConcurrentMap rsStubs = new 
ConcurrentHashMap<>();
   private final ConcurrentMap adminStubs =
   new ConcurrentHashMap<>();
+  private final ConcurrentMap 
replStubs =
+  new ConcurrentHashMap<>();
 
   private final AtomicReference masterStub = new 
AtomicReference<>();
 
@@ -283,12 +286,25 @@ class AsyncConnectionImpl implements AsyncConnection {
 return AdminService.newStub(rpcClient.createRpcChannel(serverName, user, 
rpcTimeout));
   }
 
+  private ReplicationServerService.Interface 
createReplicationServerStub(ServerName serverName)
+  throws IOException {
+return ReplicationServerService.newStub(
+rpcClient.createRpcChannel(serverName, user, rpcTimeout));
+  }
+
   AdminService.Interface getAdminStub(ServerName serverName) throws 
IOException {
 return ConcurrentMapUtils.computeIfAbsentEx(adminStubs,
   getStubKey(AdminService.getDescriptor().getName(), serverName),
   () -> createAdminServerStub(serverName));
   }
 
+  ReplicationServerService.Interface getReplicationServerStub(ServerName 
serverName)
+  throws IOException {
+return ConcurrentMapUtils.computeIfAbsentEx(replStubs,
+getStubKey(ReplicationServerService.getDescriptor().getName(), 
serverName),
+  () -> createReplicationServerStub(serverName));
+  }
+
   CompletableFuture getMasterStub() {
 return ConnectionUtils.getOrFetch(masterStub, masterStubMakeFuture, false, 
() -> {
   CompletableFuture future = new 
CompletableFuture<>();
diff --git 
a/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
 
b/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
new file mode 100644
index 000..ed334c4
--- /dev/null
+++ 
b/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * W
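
The AsyncConnectionImpl change above is the usual stub-caching pattern: one RPC stub per (service, server) key, created lazily and reused across calls. Reduced to plain java.util.concurrent it looks like the sketch below; HBase itself keys on the service descriptor name and uses ConcurrentMapUtils.computeIfAbsentEx with the generated ReplicationServerService stubs.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class StubCacheSketch<S> {
  private final ConcurrentMap<String, S> stubs = new ConcurrentHashMap<>();
  private final Function<String, S> factory;   // e.g. server -> newStub(rpcChannel(server))

  public StubCacheSketch(Function<String, S> factory) {
    this.factory = factory;
  }

  /** One stub per service/server pair, mirroring getStubKey(serviceName, serverName). */
  public S get(String serviceName, String serverName) {
    return stubs.computeIfAbsent(serviceName + "@" + serverName,
      key -> factory.apply(serverName));
  }
}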

[hbase] 04/12: HBASE-24683 Add a basic ReplicationServer which only implement ReplicationSink Service (#2111)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 2fcdb7af28f4682aef1e21db4ea22c2181ee04e6
Author: XinSun 
AuthorDate: Fri Sep 4 18:53:46 2020 +0800

HBASE-24683 Add a basic ReplicationServer which only implement 
ReplicationSink Service (#2111)

Signed-off-by: Guanghao Zhang 
---
 .../java/org/apache/hadoop/hbase/util/DNS.java |   3 +-
 .../hbase/replication/HReplicationServer.java  | 391 
 .../replication/ReplicationServerRpcServices.java  | 516 +
 .../hbase/replication/TestReplicationServer.java   | 151 ++
 4 files changed, 1060 insertions(+), 1 deletion(-)

diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java
index 098884c..a933f6c 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java
@@ -63,7 +63,8 @@ public final class DNS {
 
   public enum ServerType {
 MASTER("master"),
-REGIONSERVER("regionserver");
+REGIONSERVER("regionserver"),
+REPLICATIONSERVER("replicationserver");
 
 private String name;
 ServerType(String name) {
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HReplicationServer.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HReplicationServer.java
new file mode 100644
index 000..31dec0c
--- /dev/null
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HReplicationServer.java
@@ -0,0 +1,391 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ChoreService;
+import org.apache.hadoop.hbase.CoordinatedStateManager;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.AsyncClusterConnection;
+import org.apache.hadoop.hbase.client.ClusterConnectionFactory;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.log.HBaseMarkers;
+import org.apache.hadoop.hbase.regionserver.ReplicationService;
+import org.apache.hadoop.hbase.regionserver.ReplicationSinkService;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.security.UserProvider;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.hadoop.hbase.util.Sleeper;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * HReplicationServer which is responsible to all replication stuff. It checks 
in with
+ * the HMaster. There are many HReplicationServers in a single HBase 
deployment.
+ */
+@InterfaceAudience.Private
+@SuppressWarnings({ "deprecation"})
+public class HReplicationServer extends Thread implements Server {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(HReplicationServer.class);
+
+  /** replication server process name */
+  public static final String REPLICATION_SERVER = "replicationserver";
+
+  /**
+   * This servers start code.
+   */
+  protected final long startCode;
+
+  private volatile boolean stopped = false;
+
+  // Go down hard. Used if file system becomes unavailable and also in
+  // debugging and unit tests.
+  private AtomicBoolean abortRequested;
+
+  // flag set after we're done setting up server threads
+  final AtomicBoolean online = new AtomicBoolean(false);
+
+  /**
+   * The server name the Master sees us as.  Its made from the hostname the
+   * master passes us, port, and server start code. Gets set after registration
+   * against Master.
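
The one-line DNS change registers the new role alongside master and regionserver; the lowercase name matters because per-role configuration keys are derived from it. A sketch of that convention follows; the "hbase.<role>.dns.interface" key pattern is the one used for the existing roles and is assumed here for the new replicationserver role.

public class ServerTypeSketch {
  enum ServerType {
    MASTER("master"),
    REGIONSERVER("regionserver"),
    REPLICATIONSERVER("replicationserver");

    private final String name;

    ServerType(String name) {
      this.name = name;
    }

    String getName() {
      return name;
    }
  }

  static String dnsInterfaceKey(ServerType type) {
    return "hbase." + type.getName() + ".dns.interface";
  }

  public static void main(String[] args) {
    // Prints: hbase.replicationserver.dns.interface
    System.out.println(dnsInterfaceKey(ServerType.REPLICATIONSERVER));
  }
}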

[hbase] 03/12: HBASE-24735: Refactor ReplicationSourceManager: move logPositionAndCleanOldLogs/cleanUpHFileRefs to ReplicationSource inside (#2064)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit eded3096db2365033be6148ca4cd8e94d5759512
Author: Guanghao Zhang 
AuthorDate: Tue Aug 11 20:07:09 2020 +0800

HBASE-24735: Refactor ReplicationSourceManager: move 
logPositionAndCleanOldLogs/cleanUpHFileRefs to ReplicationSource inside (#2064)

Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/CatalogReplicationSource.java |  13 +-
 .../regionserver/RecoveredReplicationSource.java   |  18 ++-
 .../regionserver/ReplicationSource.java| 166 ++---
 .../regionserver/ReplicationSourceInterface.java   |  39 +++--
 .../regionserver/ReplicationSourceManager.java | 142 +-
 .../regionserver/ReplicationSourceShipper.java |  21 +--
 .../regionserver/ReplicationSourceWALReader.java   |  16 +-
 .../replication/regionserver/WALEntryBatch.java|   2 +-
 .../hbase/replication/ReplicationSourceDummy.java  |  24 +--
 .../regionserver/TestBasicWALEntryStream.java  |  16 +-
 .../regionserver/TestReplicationSource.java|  16 +-
 .../regionserver/TestReplicationSourceManager.java |  49 +++---
 12 files changed, 269 insertions(+), 253 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/CatalogReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/CatalogReplicationSource.java
index 8cb7860..15370e0 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/CatalogReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/CatalogReplicationSource.java
@@ -35,7 +35,18 @@ class CatalogReplicationSource extends ReplicationSource {
   }
 
   @Override
-  public void logPositionAndCleanOldLogs(WALEntryBatch entryBatch) {
+  public void setWALPosition(WALEntryBatch entryBatch) {
+// Noop. This CatalogReplicationSource implementation does not persist 
state to backing storage
+// nor does it keep its WALs in a general map up in 
ReplicationSourceManager --
+// CatalogReplicationSource is used by the Catalog Read Replica feature 
which resets everytime
+// the WAL source process crashes. Skip calling through to the default 
implementation.
+// See "4.1 Skip maintaining zookeeper replication queue (offsets/WALs)" 
in the
+// design doc attached to HBASE-18070 'Enable memstore replication for 
meta replica for detail'
+// for background on why no need to keep WAL state.
+  }
+
+  @Override
+  public void cleanOldWALs(String log, boolean inclusive) {
 // Noop. This CatalogReplicationSource implementation does not persist 
state to backing storage
 // nor does it keep its WALs in a general map up in 
ReplicationSourceManager --
 // CatalogReplicationSource is used by the Catalog Read Replica feature 
which resets everytime
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSource.java
index 526c3e3..abbc046 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSource.java
@@ -21,6 +21,7 @@ import java.io.IOException;
 import java.util.List;
 import java.util.UUID;
 import java.util.concurrent.PriorityBlockingQueue;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -44,15 +45,18 @@ public class RecoveredReplicationSource extends 
ReplicationSource {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(RecoveredReplicationSource.class);
 
+  private Path walDir;
+
   private String actualPeerId;
 
   @Override
-  public void init(Configuration conf, FileSystem fs, ReplicationSourceManager 
manager,
-  ReplicationQueueStorage queueStorage, ReplicationPeer replicationPeer, 
Server server,
-  String peerClusterZnode, UUID clusterId, WALFileLengthProvider 
walFileLengthProvider,
-  MetricsSource metrics) throws IOException {
-super.init(conf, fs, manager, queueStorage, replicationPeer, server, 
peerClusterZnode,
+  public void init(Configuration conf, FileSystem fs, Path walDir, 
ReplicationSourceManager manager,
+ReplicationQueueStorage queueStorage, ReplicationPeer replicationPeer, 
Server server,
+String peerClusterZnode, UUID clusterId, WALFileLengthProvider 
walFileLengthProvider,
+MetricsSource metrics) throws IOException {
+super.init(conf, fs, walDir, manager, queueStorage, replicationPeer, 
server, peerClusterZnode,
   clusterId, walFileLengthProvider, metrics);
+this.walDi

[hbase] 10/12: HBASE-25113 [testing] HBaseCluster support ReplicationServer for UTs (#2662)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 6ae60979a8a61c79c737f7a65d9b5318230b2537
Author: XinSun 
AuthorDate: Mon Nov 23 11:01:55 2020 +0800

HBASE-25113 [testing] HBaseCluster support ReplicationServer for UTs (#2662)

Signed-off-by: Guanghao Zhang 
---
 .../org/apache/hadoop/hbase/LocalHBaseCluster.java | 63 ++-
 .../hbase/replication/HReplicationServer.java  | 13 
 .../apache/hadoop/hbase/util/JVMClusterUtil.java   | 57 +-
 .../apache/hadoop/hbase/HBaseTestingUtility.java   |  8 +--
 .../org/apache/hadoop/hbase/MiniHBaseCluster.java  | 70 ++
 .../hadoop/hbase/StartMiniClusterOption.java   | 24 ++--
 .../replication/TestReplicationServerSink.java | 45 +++---
 hbase-server/src/test/resources/hbase-site.xml |  7 +++
 8 files changed, 242 insertions(+), 45 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
index f4847b9..24b658f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
@@ -32,9 +32,11 @@ import org.apache.hadoop.hbase.client.TableDescriptor;
 import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.replication.HReplicationServer;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.JVMClusterUtil;
 import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;
+import org.apache.hadoop.hbase.util.JVMClusterUtil.ReplicationServerThread;
 import org.apache.hadoop.hbase.util.Threads;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
@@ -60,7 +62,10 @@ import org.slf4j.LoggerFactory;
 public class LocalHBaseCluster {
   private static final Logger LOG = 
LoggerFactory.getLogger(LocalHBaseCluster.class);
   private final List<JVMClusterUtil.MasterThread> masterThreads = new CopyOnWriteArrayList<>();
-  private final List<JVMClusterUtil.RegionServerThread> regionThreads = new CopyOnWriteArrayList<>();
+  private final List<JVMClusterUtil.RegionServerThread> regionThreads =
+  new CopyOnWriteArrayList<>();
+  private final List<JVMClusterUtil.ReplicationServerThread> replicationThreads =
+  new CopyOnWriteArrayList<>();
   private final static int DEFAULT_NO = 1;
   /** local mode */
   public static final String LOCAL = "local";
@@ -259,6 +264,26 @@ public class LocalHBaseCluster {
 });
   }
 
+  @SuppressWarnings("unchecked")
+  public JVMClusterUtil.ReplicationServerThread addReplicationServer(
+  Configuration config, final int index) throws IOException {
+// Create each replication server with its own Configuration instance so 
each has
+// its Connection instance rather than share (see HBASE_INSTANCES down in
+// the guts of ConnectionManager).
+JVMClusterUtil.ReplicationServerThread rst =
+JVMClusterUtil.createReplicationServerThread(config, index);
+this.replicationThreads.add(rst);
+return rst;
+  }
+
+  public JVMClusterUtil.ReplicationServerThread addReplicationServer(
+  final Configuration config, final int index, User user)
+  throws IOException, InterruptedException {
+return user.runAs(
+(PrivilegedExceptionAction<JVMClusterUtil.ReplicationServerThread>) () ->
+addReplicationServer(config, index));
+  }
+
   /**
* @param serverNumber
* @return region server
@@ -290,6 +315,40 @@ public class LocalHBaseCluster {
   }
 
   /**
+   * @param serverNumber replication server number
+   * @return replication server
+   */
+  public HReplicationServer getReplicationServer(int serverNumber) {
+return replicationThreads.get(serverNumber).getReplicationServer();
+  }
+
+  /**
+   * @return Read-only list of replication server threads.
+   */
+  public List<JVMClusterUtil.ReplicationServerThread> getReplicationServers() {
+return Collections.unmodifiableList(this.replicationThreads);
+  }
+
+  /**
+   * @return List of running servers (Some servers may have been killed or
+   *   aborted during lifetime of cluster; these servers are not included in 
this
+   *   list).
+   */
+  public List<JVMClusterUtil.ReplicationServerThread> getLiveReplicationServers() {
+List<JVMClusterUtil.ReplicationServerThread> liveServers = new ArrayList<>();
+List<JVMClusterUtil.ReplicationServerThread> list = getReplicationServers();
+for (JVMClusterUtil.ReplicationServerThread rst: list) {
+  if (rst.isAlive()) {
+liveServers.add(rst);
+  }
+  else {
+LOG.info("Not alive {}", rst.getName());
+  }
+}
+return liveServers;
+  }
+
+  /**
* @return the Configuration used by this LocalHBaseCluster
*/
   public Configuration getConfiguration() {
@@ -430,7 +489,7 @@ public class LocalHBaseCluster {
* Start the cluster.
*/
   public 

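For illustration, a minimal sketch of how a test might drive the LocalHBaseCluster additions above. Only addReplicationServer, getReplicationServer and getLiveReplicationServers come from this diff; the constructor, startup() and shutdown() are pre-existing LocalHBaseCluster API, and the assumption that startup() also launches replication server threads registered beforehand follows from the modified start path touched in this commit.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.LocalHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class ReplicationServerClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // One master, one region server -- the usual local-cluster shape.
    LocalHBaseCluster cluster = new LocalHBaseCluster(conf, 1, 1);
    // Give the replication server its own Configuration copy, mirroring the
    // comment in addReplicationServer about per-server Connection instances.
    JVMClusterUtil.ReplicationServerThread rst =
        cluster.addReplicationServer(HBaseConfiguration.create(conf), 0);
    cluster.startup();
    try {
      System.out.println("registered " + rst.getName());
      System.out.println("live replication servers: " + cluster.getLiveReplicationServers().size());
    } finally {
      cluster.shutdown();
    }
  }
}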
[hbase] 12/12: HBASE-25807 Move method reportProcedureDone from RegionServerStatus.proto to Master.proto (#3205)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit add13ab6072484fcdda248ed082a4f9cdace6d1a
Author: XinSun 
AuthorDate: Mon May 24 11:54:00 2021 +0800

HBASE-25807 Move method reportProcedureDone from RegionServerStatus.proto 
to Master.proto (#3205)

Signed-off-by: Duo Zhang 
---
 .../src/main/protobuf/server/master/Master.proto| 20 
 .../protobuf/server/master/RegionServerStatus.proto | 21 +
 .../hadoop/hbase/master/MasterRpcServices.java  |  6 +++---
 .../master/MasterRpcServicesVersionWrapper.java |  5 +++--
 .../hadoop/hbase/regionserver/HRegionServer.java|  2 +-
 .../regionserver/RemoteProcedureResultReporter.java |  7 +++
 6 files changed, 35 insertions(+), 26 deletions(-)

diff --git a/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto 
b/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
index b9ed476..13b3a35 100644
--- a/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
@@ -735,6 +735,23 @@ message ListReplicationSinkServersResponse {
   repeated ServerName server_name = 1;
 }
 
+message RemoteProcedureResult {
+  required uint64 proc_id = 1;
+  enum Status {
+SUCCESS = 1;
+ERROR = 2;
+  }
+  required Status status = 2;
+  optional ForeignExceptionMessage error = 3;
+}
+
+message ReportProcedureDoneRequest {
+  repeated RemoteProcedureResult result = 1;
+}
+
+message ReportProcedureDoneResponse {
+}
+
 service MasterService {
   /** Used by the client to get the number of regions that have received the 
updated schema */
   rpc GetSchemaAlterStatus(GetSchemaAlterStatusRequest)
@@ -1171,6 +1188,9 @@ service MasterService {
 
   rpc ListReplicationSinkServers(ListReplicationSinkServersRequest)
 returns (ListReplicationSinkServersResponse);
+
+  rpc ReportProcedureDone(ReportProcedureDoneRequest)
+  returns(ReportProcedureDoneResponse);
 }
 
 // HBCK Service definitions.
diff --git 
a/hbase-protocol-shaded/src/main/protobuf/server/master/RegionServerStatus.proto
 
b/hbase-protocol-shaded/src/main/protobuf/server/master/RegionServerStatus.proto
index c894a77..f3547da 100644
--- 
a/hbase-protocol-shaded/src/main/protobuf/server/master/RegionServerStatus.proto
+++ 
b/hbase-protocol-shaded/src/main/protobuf/server/master/RegionServerStatus.proto
@@ -29,6 +29,7 @@ option optimize_for = SPEED;
 import "HBase.proto";
 import "server/ClusterStatus.proto";
 import "server/ErrorHandling.proto";
+import "server/master/Master.proto";
 
 message RegionServerStartupRequest {
   /** Port number this regionserver is up on */
@@ -147,22 +148,6 @@ message RegionSpaceUseReportRequest {
 message RegionSpaceUseReportResponse {
 }
 
-message RemoteProcedureResult {
-  required uint64 proc_id = 1;
-  enum Status {
-SUCCESS = 1;
-ERROR = 2;
-  }
-  required Status status = 2;
-  optional ForeignExceptionMessage error = 3;
-}
-message ReportProcedureDoneRequest {
-  repeated RemoteProcedureResult result = 1;
-}
-
-message ReportProcedureDoneResponse {
-}
-
 message FileArchiveNotificationRequest {
   message FileWithSize {
 optional TableName table_name = 1;
@@ -211,6 +196,10 @@ service RegionServerStatusService {
   rpc ReportRegionSpaceUse(RegionSpaceUseReportRequest)
 returns(RegionSpaceUseReportResponse);
 
+  /**
+   * In HBASE-25807 this method was moved to Master.proto as the replication server also needs it.
+   * To avoid problems during upgrading, still keep this method here.
+   */
   rpc ReportProcedureDone(ReportProcedureDoneRequest)
 returns(ReportProcedureDoneResponse);
 
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index c17d699..b7c0bff 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
@@ -291,6 +291,9 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.OfflineReg
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RecommissionRegionServerRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RecommissionRegionServerResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RegionSpecifierAndState;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RemoteProcedureResult;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ReportProcedureDoneRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ReportProcedureDoneResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.Mast

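To show what the relocated messages look like from the Java side, a small hedged sketch of building a ReportProcedureDone request against the generated MasterProtos classes; the builder method names follow standard protobuf-java conventions for the fields defined above and are not taken from this commit.

import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RemoteProcedureResult;
import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ReportProcedureDoneRequest;

public class ReportProcedureDoneSketch {
  // Builds a request reporting one successfully completed remote procedure.
  static ReportProcedureDoneRequest successReport(long procId) {
    RemoteProcedureResult result = RemoteProcedureResult.newBuilder()
        .setProcId(procId)                               // required uint64 proc_id = 1
        .setStatus(RemoteProcedureResult.Status.SUCCESS) // required Status status = 2
        .build();
    return ReportProcedureDoneRequest.newBuilder().addResult(result).build();
  }
}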
[hbase] 09/12: HBASE-25071 ReplicationServer support start ReplicationSource internal (#2452)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit c8c85f4f4205ebcd6c7fbecf55072386c3bb842b
Author: Guanghao Zhang 
AuthorDate: Mon Nov 9 11:46:02 2020 +0800

HBASE-25071 ReplicationServer support start ReplicationSource internal 
(#2452)

Signed-off-by: XinSun 
---
 .../server/replication/ReplicationServer.proto |  14 +-
 .../replication/ZKReplicationQueueStorage.java |   4 +-
 .../replication/ZKReplicationStorageBase.java  |   4 +
 .../hadoop/hbase/master/MasterRpcServices.java |   2 +-
 .../hadoop/hbase/regionserver/RSRpcServices.java   |   2 +-
 .../replication/HBaseReplicationEndpoint.java  |  14 +-
 .../hbase/replication/HReplicationServer.java  | 175 ++---
 .../replication/ReplicationServerRpcServices.java  |  15 ++
 .../regionserver/RecoveredReplicationSource.java   |   9 +-
 .../regionserver/ReplicationSource.java|  54 ++-
 .../regionserver/ReplicationSourceFactory.java |   2 +-
 .../regionserver/ReplicationSourceInterface.java   |   6 +-
 .../regionserver/ReplicationSourceManager.java |   9 +-
 .../hbase/replication/ReplicationSourceDummy.java  |   5 +-
 .../replication/TestReplicationFetchServers.java   |  43 +++--
 ...nServer.java => TestReplicationServerSink.java} |  25 +--
 .../replication/TestReplicationServerSource.java   |  69 
 .../regionserver/TestReplicationSource.java|  20 +--
 .../regionserver/TestReplicationSourceManager.java |  18 ++-
 19 files changed, 400 insertions(+), 90 deletions(-)

diff --git 
a/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
 
b/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
index ed334c4..925aed4 100644
--- 
a/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
+++ 
b/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
@@ -24,9 +24,21 @@ option java_generic_services = true;
 option java_generate_equals_and_hash = true;
 option optimize_for = SPEED;
 
+import "HBase.proto";
 import "server/region/Admin.proto";
 
+message StartReplicationSourceRequest {
+  required ServerName server_name = 1;
+  required string queue_id = 2;
+}
+
+message StartReplicationSourceResponse {
+}
+
 service ReplicationServerService {
   rpc ReplicateWALEntry(ReplicateWALEntryRequest)
 returns(ReplicateWALEntryResponse);
-}
\ No newline at end of file
+
+  rpc StartReplicationSource(StartReplicationSourceRequest)
+returns(StartReplicationSourceResponse);
+}
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
index 5c480ba..08ac142 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
@@ -79,7 +79,7 @@ import 
org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUti
  * 
  */
 @InterfaceAudience.Private
-class ZKReplicationQueueStorage extends ZKReplicationStorageBase
+public class ZKReplicationQueueStorage extends ZKReplicationStorageBase
 implements ReplicationQueueStorage {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(ZKReplicationQueueStorage.class);
@@ -121,7 +121,7 @@ class ZKReplicationQueueStorage extends 
ZKReplicationStorageBase
 return ZNodePaths.joinZNode(queuesZNode, serverName.getServerName());
   }
 
-  private String getQueueNode(ServerName serverName, String queueId) {
+  public String getQueueNode(ServerName serverName, String queueId) {
 return ZNodePaths.joinZNode(getRsNode(serverName), queueId);
   }
 
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationStorageBase.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationStorageBase.java
index 596167f..a239bf8 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationStorageBase.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationStorageBase.java
@@ -74,4 +74,8 @@ public abstract class ZKReplicationStorageBase {
   throw new RuntimeException(e);
 }
   }
+
+  public ZKWatcher getZookeeper() {
+return this.zookeeper;
+  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index c677458..c17d699 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.

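As a concrete reading of the new StartReplicationSource RPC, a hedged sketch of building its request message. The generated class lives in ReplicationServerProtos (imported elsewhere in this commit), ProtobufUtil.toServerName is the usual ServerName converter, and the builder names follow standard protobuf-java conventions; none of this code is part of the commit itself.

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationServerProtos.StartReplicationSourceRequest;

public class StartReplicationSourceSketch {
  // Asks a replication server to start shipping the given replication queue
  // on behalf of the region server that owns it.
  static StartReplicationSourceRequest buildRequest(ServerName queueOwner, String queueId) {
    return StartReplicationSourceRequest.newBuilder()
        .setServerName(ProtobufUtil.toServerName(queueOwner)) // required ServerName server_name = 1
        .setQueueId(queueId)                                  // required string queue_id = 2
        .build();
  }
}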
[hbase] 02/12: HBASE-24681 Remove the cache walsById/walsByIdRecoveredQueues from ReplicationSourceManager (#2019)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 8481d547fd14493570e2b32f6b95f0a5c0e15536
Author: Guanghao Zhang 
AuthorDate: Mon Jul 13 17:35:32 2020 +0800

HBASE-24681 Remove the cache walsById/walsByIdRecoveredQueues from 
ReplicationSourceManager (#2019)

Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSourceManager.java | 204 +++--
 1 file changed, 62 insertions(+), 142 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index ad7c033..db12c00 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -93,30 +93,6 @@ import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFacto
  * No need synchronized on {@link #sources}. {@link #sources} is a 
ConcurrentHashMap and there
  * is a Lock for peer id in {@link PeerProcedureHandlerImpl}. So there is no 
race for peer
  * operations.
- * Need synchronized on {@link #walsById}. There are four methods which 
modify it,
- * {@link #addPeer(String)}, {@link #removePeer(String)},
- * {@link #cleanOldLogs(String, boolean, ReplicationSourceInterface)} and 
{@link #preLogRoll(Path)}.
- * {@link #walsById} is a ConcurrentHashMap and there is a Lock for peer id in
- * {@link PeerProcedureHandlerImpl}. So there is no race between {@link 
#addPeer(String)} and
- * {@link #removePeer(String)}. {@link #cleanOldLogs(String, boolean, 
ReplicationSourceInterface)}
- * is called by {@link ReplicationSourceInterface}. So no race with {@link 
#addPeer(String)}.
- * {@link #removePeer(String)} will terminate the {@link 
ReplicationSourceInterface} firstly, then
- * remove the wals from {@link #walsById}. So no race with {@link 
#removePeer(String)}. The only
- * case need synchronized is {@link #cleanOldLogs(String, boolean, 
ReplicationSourceInterface)} and
- * {@link #preLogRoll(Path)}.
- * No need synchronized on {@link #walsByIdRecoveredQueues}. There are 
three methods which
- * modify it, {@link #removePeer(String)} ,
- * {@link #cleanOldLogs(String, boolean, ReplicationSourceInterface)} and
- * {@link ReplicationSourceManager#claimQueue(ServerName, String)}.
- * {@link #cleanOldLogs(String, boolean, ReplicationSourceInterface)} is 
called by
- * {@link ReplicationSourceInterface}. {@link #removePeer(String)} will 
terminate the
- * {@link ReplicationSourceInterface} firstly, then remove the wals from
- * {@link #walsByIdRecoveredQueues}. And
- * {@link ReplicationSourceManager#claimQueue(ServerName, String)} will add 
the wals to
- * {@link #walsByIdRecoveredQueues} firstly, then start up a {@link 
ReplicationSourceInterface}. So
- * there is no race here. For {@link 
ReplicationSourceManager#claimQueue(ServerName, String)} and
- * {@link #removePeer(String)}, there is already synchronized on {@link 
#oldsources}. So no need
- * synchronized on {@link #walsByIdRecoveredQueues}.
  * Need synchronized on {@link #latestPaths} to avoid the new open source 
miss new log.
  * Need synchronized on {@link #oldsources} to avoid adding recovered 
source for the
  * to-be-removed peer.
@@ -144,15 +120,6 @@ public class ReplicationSourceManager {
   // All about stopping
   private final Server server;
 
-  // All logs we are currently tracking
-  // Index structure of the map is: queue_id->logPrefix/logGroup->logs
-  // For normal replication source, the peer id is same with the queue id
-  private final ConcurrentMap>> 
walsById;
-  // Logs for recovered sources we are currently tracking
-  // the map is: queue_id->logPrefix/logGroup->logs
-  // For recovered source, the queue id's format is peer_id-servername-*
-  private final ConcurrentMap>> 
walsByIdRecoveredQueues;
-
   private final SyncReplicationPeerMappingManager 
syncReplicationPeerMappingManager;
 
   private final Configuration conf;
@@ -212,8 +179,6 @@ public class ReplicationSourceManager {
 this.queueStorage = queueStorage;
 this.replicationPeers = replicationPeers;
 this.server = server;
-this.walsById = new ConcurrentHashMap<>();
-this.walsByIdRecoveredQueues = new ConcurrentHashMap<>();
 this.oldsources = new ArrayList<>();
 this.conf = conf;
 this.fs = fs;
@@ -322,7 +287,6 @@ public class ReplicationSourceManager {
   // Delete queue from storage and memory and queue id is same with peer 
id for normal
   // source
   deleteQueue(peerId);
-  this.walsById.remove(peerId);
 }
 ReplicationPeerConfig peerConfig = peer.getPeerConfig();
 if (pe

[hbase] 07/12: HBASE-24684 Fetch ReplicationSink servers list from HMaster instead o… (#2077)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 03a30573d7eadd4094cacd33144e53992c899d57
Author: XinSun 
AuthorDate: Sun Sep 20 10:54:43 2020 +0800

HBASE-24684 Fetch ReplicationSink servers list from HMaster instead o… 
(#2077)

Signed-off-by: Wellington Chevreuil 
---
 .../src/main/protobuf/server/master/Master.proto   |  12 +-
 .../hadoop/hbase/coprocessor/MasterObserver.java   |  16 +++
 .../org/apache/hadoop/hbase/master/HMaster.java|   5 +
 .../hadoop/hbase/master/MasterCoprocessorHost.java |  18 +++
 .../hadoop/hbase/master/MasterRpcServices.java |  21 
 .../apache/hadoop/hbase/master/MasterServices.java |   6 +
 .../replication/HBaseReplicationEndpoint.java  | 140 +++--
 .../hbase/master/MockNoopMasterServices.java   |   5 +
 .../replication/TestHBaseReplicationEndpoint.java  |   5 +
 .../replication/TestReplicationFetchServers.java   | 106 
 .../TestGlobalReplicationThrottler.java|   4 +
 ...stRegionReplicaReplicationEndpointNoMaster.java |   2 +
 12 files changed, 327 insertions(+), 13 deletions(-)

diff --git a/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto 
b/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
index 3d265dd..b9ed476 100644
--- a/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
@@ -728,6 +728,13 @@ message BalancerRejectionsResponse {
   repeated BalancerRejection balancer_rejection = 1;
 }
 
+message ListReplicationSinkServersRequest {
+}
+
+message ListReplicationSinkServersResponse {
+  repeated ServerName server_name = 1;
+}
+
 service MasterService {
   /** Used by the client to get the number of regions that have received the 
updated schema */
   rpc GetSchemaAlterStatus(GetSchemaAlterStatusRequest)
@@ -1157,10 +1164,13 @@ service MasterService {
 returns (RenameRSGroupResponse);
 
   rpc UpdateRSGroupConfig(UpdateRSGroupConfigRequest)
-  returns (UpdateRSGroupConfigResponse);
+returns (UpdateRSGroupConfigResponse);
 
   rpc GetLogEntries(LogRequest)
 returns(LogEntry);
+
+  rpc ListReplicationSinkServers(ListReplicationSinkServersRequest)
+returns (ListReplicationSinkServersResponse);
 }
 
 // HBCK Service definitions.
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
index ac35caa..ec009cc 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
@@ -1782,4 +1782,20 @@ public interface MasterObserver {
   default void 
postHasUserPermissions(ObserverContext ctx,
   String userName, List permissions) throws IOException {
   }
+
+  /**
+   * Called before getting servers for replication sink.
+   * @param ctx the coprocessor instance's environment
+   */
+  default void preListReplicationSinkServers(ObserverContext<MasterCoprocessorEnvironment> ctx)
+throws IOException {
+  }
+
+  /**
+   * Called after getting servers for replication sink.
+   * @param ctx the coprocessor instance's environment
+   */
+  default void postListReplicationSinkServers(ObserverContext<MasterCoprocessorEnvironment> ctx)
+throws IOException {
+  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index ba38a19..903f392 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -3864,4 +3864,9 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   public MetaLocationSyncer getMetaLocationSyncer() {
 return metaLocationSyncer;
   }
+
+  @Override
+  public List<ServerName> listReplicationSinkServers() throws IOException {
+return this.serverManager.getOnlineServersList();
+  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
index 01d1a62..f775eba 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
@@ -2038,4 +2038,22 @@ public class MasterCoprocessorHost
   }
 });
   }
+
+  public void preListReplicationSinkServers() throws IOException {
+execOperation(coprocEnvironments.isEmpty() ? null : new 
MasterObserverOperation() {
+  @Override
+  public void call(MasterObserver observer) throws IOException {
+observer.preListReplicationSinkServers(this);
+  }
+});
+  }
+
+  public void

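For coprocessor authors, a hedged sketch of implementing the two observer hooks added above. The ObserverContext type parameter is assumed to be MasterCoprocessorEnvironment, matching the other MasterObserver methods; the class itself is illustrative and not part of this commit.

import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.MasterObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Logs every time a client asks the master for the replication sink server list.
public class SinkServerListLoggingObserver implements MasterCoprocessor, MasterObserver {
  private static final Logger LOG = LoggerFactory.getLogger(SinkServerListLoggingObserver.class);

  @Override
  public Optional<MasterObserver> getMasterObserver() {
    return Optional.of(this);
  }

  @Override
  public void preListReplicationSinkServers(ObserverContext<MasterCoprocessorEnvironment> ctx)
      throws IOException {
    LOG.info("listReplicationSinkServers requested");
  }

  @Override
  public void postListReplicationSinkServers(ObserverContext<MasterCoprocessorEnvironment> ctx)
      throws IOException {
    LOG.info("listReplicationSinkServers served");
  }
}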
[hbase] 01/12: HBASE-24682 Refactor ReplicationSource#addHFileRefs method: move it to ReplicationSourceManager (#2020)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 0c061bc6164fbb49afc7146095a1cf8b80a997bc
Author: Guanghao Zhang 
AuthorDate: Wed Jul 8 14:29:08 2020 +0800

HBASE-24682 Refactor ReplicationSource#addHFileRefs method: move it to 
ReplicationSourceManager (#2020)

Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSource.java| 19 ++--
 .../regionserver/ReplicationSourceInterface.java   | 14 
 .../regionserver/ReplicationSourceManager.java | 26 +-
 .../hbase/replication/ReplicationSourceDummy.java  |  9 +---
 4 files changed, 28 insertions(+), 40 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index d1268fa..a385ead 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -35,6 +35,7 @@ import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.function.Predicate;
+
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
@@ -44,27 +45,24 @@ import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.Server;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.TableDescriptors;
-import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
 import org.apache.hadoop.hbase.regionserver.RSRpcServices;
 import org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost;
 import org.apache.hadoop.hbase.replication.ChainWALEntryFilter;
 import org.apache.hadoop.hbase.replication.ClusterMarkingEntryFilter;
 import org.apache.hadoop.hbase.replication.ReplicationEndpoint;
-import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeer;
 import org.apache.hadoop.hbase.replication.ReplicationQueueInfo;
 import org.apache.hadoop.hbase.replication.ReplicationQueueStorage;
 import org.apache.hadoop.hbase.replication.SystemTableWALEntryFilter;
 import org.apache.hadoop.hbase.replication.WALEntryFilter;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.Threads;
 import org.apache.hadoop.hbase.wal.AbstractFSWALProvider;
 import org.apache.hadoop.hbase.wal.WAL.Entry;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
 import org.apache.hbase.thirdparty.com.google.common.collect.Lists;
 
 /**
@@ -264,19 +262,6 @@ public class ReplicationSource implements 
ReplicationSourceInterface {
 return logQueue.getQueues();
   }
 
-  @Override
-  public void addHFileRefs(TableName tableName, byte[] family, List<Pair<Path, Path>> pairs)
-  throws ReplicationException {
-String peerId = replicationPeer.getId();
-if (replicationPeer.getPeerConfig().needToReplicate(tableName, family)) {
-  this.queueStorage.addHFileRefs(peerId, pairs);
-  metrics.incrSizeOfHFileRefsQueue(pairs.size());
-} else {
-  LOG.debug("HFiles will not be replicated belonging to the table {} 
family {} to peer id {}",
-tableName, Bytes.toString(family), peerId);
-}
-  }
-
   private ReplicationEndpoint createReplicationEndpoint()
   throws InstantiationException, IllegalAccessException, 
ClassNotFoundException, IOException {
 RegionServerCoprocessorHost rsServerHost = null;
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceInterface.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceInterface.java
index 27e4b79..352cdd3 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceInterface.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceInterface.java
@@ -28,12 +28,9 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.Server;
 import org.apache.hadoop.hbase.ServerName;
-import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.replication.ReplicationEndpoint;
-import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeer;
 import org.apache.hadoop.hbase.replication.ReplicationQueueStorage;
-import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.

[hbase] 06/12: HBASE-24998 Introduce a ReplicationSourceOverallController interface and decouple ReplicationSourceManager and ReplicationSource (#2364)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 4718d24699e8129c8b1fa4d0acf236137248d492
Author: Guanghao Zhang 
AuthorDate: Sun Sep 20 09:02:53 2020 +0800

HBASE-24998 Introduce a ReplicationSourceOverallController interface and 
decouple ReplicationSourceManager and ReplicationSource (#2364)

Signed-off-by: meiyi 
---
 .../java/org/apache/hadoop/hbase/HConstants.java   |  2 +
 .../hadoop/hbase/regionserver/RSRpcServices.java   |  4 +-
 .../replication/ReplicationSourceController.java   | 32 +-
 .../regionserver/RecoveredReplicationSource.java   | 18 
 .../regionserver/ReplicationSource.java| 35 ++-
 .../regionserver/ReplicationSourceInterface.java   | 25 +++
 .../regionserver/ReplicationSourceManager.java | 51 +-
 .../regionserver/ReplicationSourceShipper.java |  4 +-
 .../regionserver/ReplicationSourceWALReader.java   | 13 +++---
 .../hbase/replication/ReplicationSourceDummy.java  | 21 +
 .../regionserver/TestBasicWALEntryStream.java  | 15 ---
 .../regionserver/TestReplicationSource.java|  2 +-
 .../regionserver/TestReplicationSourceManager.java |  3 +-
 13 files changed, 125 insertions(+), 100 deletions(-)

diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
index 10a38f6..6cde48d 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -994,6 +994,8 @@ public final class HConstants {
   /*
* cluster replication constants.
*/
+  public static final String REPLICATION_OFFLOAD_ENABLE_KEY = 
"hbase.replication.offload.enabled";
+  public static final boolean REPLICATION_OFFLOAD_ENABLE_DEFAULT = false;
   public static final String
   REPLICATION_SOURCE_SERVICE_CLASSNAME = 
"hbase.replication.source.service";
   public static final String REPLICATION_SERVICE_CLASSNAME_DEFAULT =
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index c1f447c..72fea23 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -258,6 +258,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetSpaceQuo
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetSpaceQuotaSnapshotsResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetSpaceQuotaSnapshotsResponse.TableQuotaSnapshot;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationServerProtos;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationServerProtos.ReplicationServerService;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.TooSlowLog.SlowLogPayload;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor;
@@ -271,7 +273,7 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDe
 @SuppressWarnings("deprecation")
 public class RSRpcServices implements HBaseRPCErrorHandler,
 AdminService.BlockingInterface, ClientService.BlockingInterface, 
PriorityFunction,
-ConfigurationObserver {
+ConfigurationObserver, ReplicationServerService.BlockingInterface {
   private static final Logger LOG = 
LoggerFactory.getLogger(RSRpcServices.class);
 
   /** RPC scheduler to use for the region server. */
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationListener.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ReplicationSourceController.java
similarity index 50%
rename from 
hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationListener.java
rename to 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ReplicationSourceController.java
index 5c21e1e..5bb9dd6 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationListener.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ReplicationSourceController.java
@@ -1,5 +1,4 @@
-/*
- *
+/**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -18,21 +17,32 @@
  */
 package org.apache.hadoop.hbase.replication;
 
-import org.apache.ha

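The HConstants flag added above is the switch for the replication offload work on this branch; a minimal sketch of flipping it in a Configuration follows, with the interpretation of what it enables treated as an assumption rather than something stated in this diff.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

public class ReplicationOffloadConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // The flag defaults to false (REPLICATION_OFFLOAD_ENABLE_DEFAULT); setting it to
    // true is assumed to opt a cluster into the HBASE-24666 replication offload path.
    conf.setBoolean(HConstants.REPLICATION_OFFLOAD_ENABLE_KEY, true);
    System.out.println(HConstants.REPLICATION_OFFLOAD_ENABLE_KEY + " = "
        + conf.getBoolean(HConstants.REPLICATION_OFFLOAD_ENABLE_KEY,
            HConstants.REPLICATION_OFFLOAD_ENABLE_DEFAULT));
  }
}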
[hbase] branch HBASE-24666 updated (2580c97 -> add13ab)

2021-07-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a change to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git.


omit 2580c97  HBASE-25807 Move method reportProcedureDone from 
RegionServerStatus.proto to Master.proto (#3205)
omit 786c7d7  HBASE-24737 Find a way to resolve 
WALFileLengthProvider#getLogFileSizeIfBeingWritten problem (#3045)
omit 1553b39  HBASE-25113 [testing] HBaseCluster support ReplicationServer 
for UTs (#2662)
omit c8d8782  HBASE-25071 ReplicationServer support start ReplicationSource 
internal (#2452)
omit bd13d14  HBASE-24999 Master manages ReplicationServers (#2579)
omit 29adcce  HBASE-24684 Fetch ReplicationSink servers list from HMaster 
instead o… (#2077)
omit b86d97c  HBASE-24998 Introduce a ReplicationSourceOverallController 
interface and decouple ReplicationSourceManager and ReplicationSource (#2364)
omit 1f11ee4  HBASE-24982 Disassemble the method replicateWALEntry from 
AdminService to a new interface ReplicationServerService (#2360)
omit c9a01b2  HBASE-24683 Add a basic ReplicationServer which only 
implement ReplicationSink Service (#2111)
omit d4bcf8d  HBASE-24735: Refactor ReplicationSourceManager: move 
logPositionAndCleanOldLogs/cleanUpHFileRefs to ReplicationSource inside (#2064)
omit b60ec36  HBASE-24681 Remove the cache walsById/walsByIdRecoveredQueues 
from ReplicationSourceManager (#2019)
omit a62a4b1  HBASE-24682 Refactor ReplicationSource#addHFileRefs method: 
move it to ReplicationSourceManager (#2020)
 add 8f03c44  HBASE-25556 Frequent replication "Encountered a malformed 
edit" warnings (#2965)
 add 51a3d45  HBASE-25598 TestFromClientSide5.testScanMetrics is flaky 
(#2977)
 add ed2693f  HBASE-25602 Fix broken TestReplicationShell on master (#2981)
 add a7d0445  HBASE-25601 Use ASF-official mailing list archives
 add 3f1c486  HBASE-25596: Fix NPE and avoid permanent unreplicated data 
due to EOF (#2987)
 add 8d0de96  HBASE-25590 Bulkload replication HFileRefs cannot be cleared 
in some cases where set exclude-namespace/exclude-table-cfs (#2969)
 add a984358  HBASE-25586 Fix HBASE-22492 on branch-2 (SASL GapToken) 
(#2961)
 add 30cb419  HBASE-25615 Upgrade java version in pre commit docker file 
(#2997)
 add 34bd1bd  HBASE-25620 Increase timeout value for pre commit (#3000)
 add d5df999  HBASE-25604 Upgrade spotbugs to 4.x (#2986)
 add b24bd40  HBASE-25611 ExportSnapshot chmod flag uses value as decimal 
(#3003)
 add b522d2a  Revert "HBASE-25604 Upgrade spotbugs to 4.x (#2986)"
 add a97a40c  HBASE-25580 Release scripts should include in the vote email 
the git hash that the RC tag points to (#2956)
 add 157200e  HBASE-25402 Sorting order by start key or end key is not 
considering empty start key/end key (#2955)
 add e099ef3  HBASE-25626 Possible Resource Leak in 
HeterogeneousRegionCountCostFunction
 add a4eb1aa  HBASE-25421 There is no limit on the column length when 
creating a table (#2796)
 add 5d9a6ed  HBASE-25367 Sort broken after Change 'State time' in UI 
(#2964)
 add e80b901  HBASE-25603 Add switch for compaction after bulkload (#2982)
 add f93c9c6  HBASE-25385 TestCurrentHourProvider fails if the latest 
timezone changes are not present (#3012)
 add 830d289  HBASE-25460 : Expose drainingServers as cluster metric (#2995)
 add dd4a11e  HBASE-25637 Rename method completeCompaction to 
refreshStoreSizeAndTotalBytes (#3023)
 add 9b0485f  HBASE-23578 [UI] Master UI shows long stack traces when table 
is broken (#3014)
 add 190c253  HBASE-25609 There is a problem with the SPLITS_FILE in the 
HBase shell statement(#2992)
 add 53128fe  HBASE-25644 Scan#setSmall blindly sets ReadType as PREAD
 add c1dacfd  HBASE-25547 (addendum): Roll ExecutorType into ExecutorConfig 
(#2996)
 add 109bd24  HBASE-25630 Set switch compaction after bulkload default as 
false (#3022)
 add 573daed  HBASE-25646: Possible Resource Leak in CatalogJanitor #3036
 add d818eff  HBASE-25582 Support setting scan ReadType to be STREAM at 
cluster level (#3035)
 add 92fe609  HBASE-25604 Upgrade spotbugs to 4.x (#3029)
 add 95342a2  HBASE-25654 [Documentation] Fix format error in security.adoc
 add 373dc77  HBASE-25548 Optionally allow snapshots to preserve cluster's 
max file… (#2923)
 add d79019b  HBASE-25629 Reimplement TestCurrentHourProvider to not depend 
on unstable TZs (#3013)
 add 0e6c2c4  HBASE-25636 Expose HBCK report as metrics (#3031)
 add 0cc1ae4  HBASE-25587 [hbck2] Schedule SCP for all unknown servers 
(#2978)
 add cc61714  HBASE-25566 RoundRobinTableInputFormat (#2947)
 add 1a69a52  HBASE-25570 On largish cluster, "CleanerChore: Could not 
delete dir..." makes master log unreadable (#2949)
 add 7386fb6  HBASE-25622 Result#compareResults should compare tags. (#3026)
 add 876fec1  HBA

[hbase] branch HBASE-24666 updated: HBASE-25807 Move method reportProcedureDone from RegionServerStatus.proto to Master.proto (#3205)

2021-05-23 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/HBASE-24666 by this push:
 new 2580c97  HBASE-25807 Move method reportProcedureDone from 
RegionServerStatus.proto to Master.proto (#3205)
2580c97 is described below

commit 2580c970fdbbae82533cb9efb6fe31958bf42190
Author: XinSun 
AuthorDate: Mon May 24 11:54:00 2021 +0800

HBASE-25807 Move method reportProcedureDone from RegionServerStatus.proto 
to Master.proto (#3205)

Signed-off-by: Duo Zhang 
---
 .../src/main/protobuf/server/master/Master.proto| 20 
 .../protobuf/server/master/RegionServerStatus.proto | 21 +
 .../hadoop/hbase/master/MasterRpcServices.java  |  6 +++---
 .../master/MasterRpcServicesVersionWrapper.java |  5 +++--
 .../hadoop/hbase/regionserver/HRegionServer.java|  2 +-
 .../regionserver/RemoteProcedureResultReporter.java |  7 +++
 6 files changed, 35 insertions(+), 26 deletions(-)

diff --git a/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto 
b/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
index 7dec566..165a205 100644
--- a/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
@@ -724,6 +724,23 @@ message ListReplicationSinkServersResponse {
   repeated ServerName server_name = 1;
 }
 
+message RemoteProcedureResult {
+  required uint64 proc_id = 1;
+  enum Status {
+SUCCESS = 1;
+ERROR = 2;
+  }
+  required Status status = 2;
+  optional ForeignExceptionMessage error = 3;
+}
+
+message ReportProcedureDoneRequest {
+  repeated RemoteProcedureResult result = 1;
+}
+
+message ReportProcedureDoneResponse {
+}
+
 service MasterService {
   /** Used by the client to get the number of regions that have received the 
updated schema */
   rpc GetSchemaAlterStatus(GetSchemaAlterStatusRequest)
@@ -1160,6 +1177,9 @@ service MasterService {
 
   rpc ListReplicationSinkServers(ListReplicationSinkServersRequest)
 returns (ListReplicationSinkServersResponse);
+
+  rpc ReportProcedureDone(ReportProcedureDoneRequest)
+  returns(ReportProcedureDoneResponse);
 }
 
 // HBCK Service definitions.
diff --git 
a/hbase-protocol-shaded/src/main/protobuf/server/master/RegionServerStatus.proto
 
b/hbase-protocol-shaded/src/main/protobuf/server/master/RegionServerStatus.proto
index c894a77..f3547da 100644
--- 
a/hbase-protocol-shaded/src/main/protobuf/server/master/RegionServerStatus.proto
+++ 
b/hbase-protocol-shaded/src/main/protobuf/server/master/RegionServerStatus.proto
@@ -29,6 +29,7 @@ option optimize_for = SPEED;
 import "HBase.proto";
 import "server/ClusterStatus.proto";
 import "server/ErrorHandling.proto";
+import "server/master/Master.proto";
 
 message RegionServerStartupRequest {
   /** Port number this regionserver is up on */
@@ -147,22 +148,6 @@ message RegionSpaceUseReportRequest {
 message RegionSpaceUseReportResponse {
 }
 
-message RemoteProcedureResult {
-  required uint64 proc_id = 1;
-  enum Status {
-SUCCESS = 1;
-ERROR = 2;
-  }
-  required Status status = 2;
-  optional ForeignExceptionMessage error = 3;
-}
-message ReportProcedureDoneRequest {
-  repeated RemoteProcedureResult result = 1;
-}
-
-message ReportProcedureDoneResponse {
-}
-
 message FileArchiveNotificationRequest {
   message FileWithSize {
 optional TableName table_name = 1;
@@ -211,6 +196,10 @@ service RegionServerStatusService {
   rpc ReportRegionSpaceUse(RegionSpaceUseReportRequest)
 returns(RegionSpaceUseReportResponse);
 
+  /**
+   * In HBASE-25807 this method was moved to Master.proto as the replication server also needs it.
+   * To avoid problems during upgrading, still keep this method here.
+   */
   rpc ReportProcedureDone(ReportProcedureDoneRequest)
 returns(ReportProcedureDoneResponse);
 
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index 9b98190..e487310 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
@@ -290,6 +290,9 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.OfflineReg
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RecommissionRegionServerRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RecommissionRegionServerResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.RegionSpecifierAndState;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.Mas

[hbase] branch HBASE-24666 updated: HBASE-24737 Find a way to resolve WALFileLengthProvider#getLogFileSizeIfBeingWritten problem (#3045)

2021-04-26 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/HBASE-24666 by this push:
 new 786c7d7  HBASE-24737 Find a way to resolve 
WALFileLengthProvider#getLogFileSizeIfBeingWritten problem (#3045)
786c7d7 is described below

commit 786c7d7dfc720600cd6e5ae5dd712db8e0b8fd4d
Author: XinSun 
AuthorDate: Tue Apr 27 11:13:15 2021 +0800

HBASE-24737 Find a way to resolve 
WALFileLengthProvider#getLogFileSizeIfBeingWritten problem (#3045)

Signed-off-by: Duo Zhang 
---
 .../src/main/protobuf/server/region/Admin.proto|  12 ++
 .../hbase/client/AsyncRegionServerAdmin.java   |   8 ++
 .../hadoop/hbase/regionserver/HRegionServer.java   |   2 +-
 .../hadoop/hbase/regionserver/RSRpcServices.java   |  24 
 .../hbase/replication/HReplicationServer.java  |  11 +-
 .../regionserver/WALFileLengthProvider.java|   3 +-
 .../RemoteWALFileLengthProvider.java   |  73 
 .../org/apache/hadoop/hbase/wal/WALProvider.java   |  15 ++-
 .../hadoop/hbase/master/MockRegionServer.java  |   7 ++
 .../TestRemoteWALFileLengthProvider.java   | 130 +
 10 files changed, 280 insertions(+), 5 deletions(-)

diff --git a/hbase-protocol-shaded/src/main/protobuf/server/region/Admin.proto 
b/hbase-protocol-shaded/src/main/protobuf/server/region/Admin.proto
index 0667292..693a809 100644
--- a/hbase-protocol-shaded/src/main/protobuf/server/region/Admin.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/server/region/Admin.proto
@@ -328,6 +328,15 @@ message ClearSlowLogResponses {
   required bool is_cleaned = 1;
 }
 
+message GetLogFileSizeIfBeingWrittenRequest {
+  required string wal_path = 1;
+}
+
+message GetLogFileSizeIfBeingWrittenResponse {
+  required bool is_being_written = 1;
+  optional uint64 length = 2;
+}
+
 service AdminService {
   rpc GetRegionInfo(GetRegionInfoRequest)
 returns(GetRegionInfoResponse);
@@ -399,4 +408,7 @@ service AdminService {
   rpc GetLogEntries(LogRequest)
 returns(LogEntry);
 
+  rpc GetLogFileSizeIfBeingWritten(GetLogFileSizeIfBeingWrittenRequest)
+returns(GetLogFileSizeIfBeingWrittenResponse);
+
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
index cb06137..edef98a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
@@ -42,6 +42,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.ExecuteProc
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.ExecuteProceduresResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.FlushRegionRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.FlushRegionResponse;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetLogFileSizeIfBeingWrittenRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetLogFileSizeIfBeingWrittenResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetOnlineRegionRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetOnlineRegionResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetRegionInfoRequest;
@@ -216,4 +218,10 @@ public class AsyncRegionServerAdmin {
   ExecuteProceduresRequest request) {
 return call((stub, controller, done) -> stub.executeProcedures(controller, 
request, done));
   }
+
+  public CompletableFuture<GetLogFileSizeIfBeingWrittenResponse> getLogFileSizeIfBeingWritten(
+GetLogFileSizeIfBeingWrittenRequest request) {
+return call((stub, controller, done) ->
+  stub.getLogFileSizeIfBeingWritten(controller, request, done));
+  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index f2379dd..af335a5 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -2307,7 +2307,7 @@ public class HRegionServer extends Thread implements
 return walRoller;
   }
 
-  WALFactory getWalFactory() {
+  public WALFactory getWalFactory() {
 return walFactory;
   }
 
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index ff8c7fc..b571ce0 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop

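To illustrate the new admin RPC end to end, a hedged sketch of querying a remote WAL's writable length through AsyncRegionServerAdmin.getLogFileSizeIfBeingWritten as added above. The getter names follow standard protobuf-java conventions for the fields in Admin.proto, the -1 sentinel is this sketch's own convention, and how the AsyncRegionServerAdmin instance is obtained is left to the caller.

import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin;
import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetLogFileSizeIfBeingWrittenRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetLogFileSizeIfBeingWrittenResponse;

public class RemoteWalLengthSketch {
  // Asks the region server that owns the WAL whether it is still being written,
  // and if so, what length is currently safe to read up to; -1 means not being written.
  static CompletableFuture<Long> safeReadLength(AsyncRegionServerAdmin admin, String walPath) {
    GetLogFileSizeIfBeingWrittenRequest request = GetLogFileSizeIfBeingWrittenRequest.newBuilder()
        .setWalPath(walPath) // required string wal_path = 1
        .build();
    return admin.getLogFileSizeIfBeingWritten(request)
        .thenApply(resp -> resp.getIsBeingWritten() ? resp.getLength() : -1L);
  }
}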
[hbase] branch branch-2.3 updated: HBASE-25562 ReplicationSourceWALReader log and handle exception immediately without retrying (#2966)

2021-03-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new 754caba  HBASE-25562 ReplicationSourceWALReader log and handle 
exception immediately without retrying (#2966)
754caba is described below

commit 754caba4796d24fbc5a0eae1d8205f601d78a961
Author: XinSun 
AuthorDate: Thu Mar 25 14:16:21 2021 +0800

HBASE-25562 ReplicationSourceWALReader log and handle exception immediately 
without retrying (#2966)

Signed-off-by: sandeepvinayak
---
 .../regionserver/ReplicationSourceWALReader.java   | 30 +-
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
index 4a32962..d3c44a5 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
@@ -147,14 +147,15 @@ class ReplicationSourceWALReader extends Thread {
   }
 }
   } catch (IOException e) { // stream related
-if (sleepMultiplier < maxRetriesMultiplier) {
-  LOG.debug("Failed to read stream of replication entries: " + e);
-  sleepMultiplier++;
+if (handleEofException(e)) {
+  sleepMultiplier = 1;
 } else {
-  LOG.error("Failed to read stream of replication entries", e);
-  handleEofException(e);
+  LOG.warn("Failed to read stream of replication entries", e);
+  if (sleepMultiplier < maxRetriesMultiplier) {
+sleepMultiplier ++;
+  }
+  Threads.sleep(sleepForRetries * sleepMultiplier);
 }
-Threads.sleep(sleepForRetries * sleepMultiplier);
   } catch (InterruptedException e) {
 LOG.trace("Interrupted while sleeping between WAL reads");
 Thread.currentThread().interrupt();
@@ -241,24 +242,29 @@ class ReplicationSourceWALReader extends Thread {
 }
   }
 
-  // if we get an EOF due to a zero-length log, and there are other logs in 
queue
-  // (highly likely we've closed the current log), we've hit the max retries, 
and autorecovery is
-  // enabled, then dump the log
-  private void handleEofException(IOException e) {
+  /**
+   * if we get an EOF due to a zero-length log, and there are other logs in 
queue
+   * (highly likely we've closed the current log), and autorecovery is
+   * enabled, then dump the log
+   * @return true only if the IOE can be handled
+   */
+  private boolean handleEofException(IOException e) {
 // Dump the log even if logQueue size is 1 if the source is from recovered 
Source
 // since we don't add current log to recovered source queue so it is safe 
to remove.
 if ((e instanceof EOFException || e.getCause() instanceof EOFException) &&
   (source.isRecovered() || logQueue.size() > 1) && this.eofAutoRecovery) {
   try {
 if (fs.getFileStatus(logQueue.peek()).getLen() == 0) {
-  LOG.warn("Forcing removal of 0 length log in queue: " + 
logQueue.peek());
+  LOG.warn("Forcing removal of 0 length log in queue: {}", 
logQueue.peek());
   logQueue.remove();
   currentPosition = 0;
+  return true;
 }
   } catch (IOException ioe) {
-LOG.warn("Couldn't get file length information about log " + 
logQueue.peek());
+LOG.warn("Couldn't get file length information about log {}", 
logQueue.peek());
   }
 }
+return false;
   }
 
   public Path getCurrentPath() {
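
A note on the retry arithmetic this change leaves in place for non-EOF stream errors: the reader waits sleepForRetries * sleepMultiplier between attempts, the multiplier grows by one per failure up to maxRetriesMultiplier, and a successfully handled EOF now resets it to 1. The standalone sketch below only illustrates that progression; the 1000 ms period and the cap of 10 are invented example values, not the defaults of any particular HBase release.

public final class WalReaderBackoffSketch {
  public static void main(String[] args) {
    long sleepForRetries = 1000L;   // invented example value, in milliseconds
    int maxRetriesMultiplier = 10;  // invented example cap
    int sleepMultiplier = 1;
    for (int failure = 1; failure <= 12; failure++) {
      System.out.println("failure " + failure + " -> wait " + (sleepForRetries * sleepMultiplier) + " ms");
      if (sleepMultiplier < maxRetriesMultiplier) {
        sleepMultiplier++;          // grows linearly until it hits the cap
      }
    }
    // A recoverable EOF (handleEofException returning true) resets sleepMultiplier
    // to 1, which is the behavioural change made by this commit.
  }
}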


[hbase] branch branch-2.3 updated: HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

2021-02-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new a60a065  HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)
a60a065 is described below

commit a60a065c9da0d785d06a9c3e262c094889a5afe4
Author: XinSun 
AuthorDate: Fri Feb 26 09:50:23 2021 +0800

HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

Signed-off-by: Wellington Chevreuil 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  29 +-
 .../replication/TestReplicationPeerConfig.java | 366 -
 .../NamespaceTableCfWALEntryFilter.java|  84 +
 .../regionserver/ReplicationSource.java|  28 +-
 .../regionserver/TestBulkLoadReplication.java  |   8 +-
 .../TestBulkLoadReplicationHFileRefs.java  | 310 +
 6 files changed, 557 insertions(+), 268 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 7c0f115..612a7fc 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
+
 /**
  * A configuration for the replication peer cluster.
  */
@@ -366,6 +368,19 @@ public class ReplicationPeerConfig {
* @return true if the table need replicate to the peer cluster
*/
   public boolean needToReplicate(TableName table) {
+return needToReplicate(table, null);
+  }
+
+  /**
+   * Decide whether the passed family of the table need replicate to the peer cluster according to
+   * this peer config.
+   * @param table name of the table
+   * @param family family name
+   * @return true if (the family of) the table need replicate to the peer cluster.
+   * If passed family is null, return true if any CFs of the table need replicate;
+   * If passed family is not null, return true if the passed family need replicate.
+   */
+  public boolean needToReplicate(TableName table, byte[] family) {
 String namespace = table.getNamespaceAsString();
 if (replicateAllUserTables) {
   // replicate all user tables, but filter by exclude namespaces and table-cfs config
@@ -377,9 +392,12 @@ public class ReplicationPeerConfig {
 return true;
   }
   Collection cfs = excludeTableCFsMap.get(table);
-  // if cfs is null or empty then we can make sure that we do not need to replicate this table,
+  // If cfs is null or empty then we can make sure that we do not need to replicate this table,
   // otherwise, we may still need to replicate the table but filter out some families.
-  return cfs != null && !cfs.isEmpty();
+  return cfs != null && !cfs.isEmpty()
+// If exclude-table-cfs contains passed family then we make sure that we do not need to
+// replicate this family.
+&& (family == null || !cfs.contains(Bytes.toString(family)));
 } else {
   // Not replicate all user tables, so filter by namespaces and table-cfs config
   if (namespaces == null && tableCFsMap == null) {
@@ -390,7 +408,12 @@ public class ReplicationPeerConfig {
   if (namespaces != null && namespaces.contains(namespace)) {
 return true;
   }
-  return tableCFsMap != null && tableCFsMap.containsKey(table);
+  // If table-cfs contains this table then we can make sure that we need replicate some CFs of
+  // this table. Further we need all CFs if tableCFsMap.get(table) is null or empty.
+  return tableCFsMap != null && tableCFsMap.containsKey(table)
+&& (family == null || CollectionUtils.isEmpty(tableCFsMap.get(table))
+// If table-cfs must contain passed family then we need to replicate this family.
+|| tableCFsMap.get(table).contains(Bytes.toString(family)));
 }
   }
 }
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index d67a3f8..ae2d426 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/
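
To make the new family-aware check concrete, here is a small standalone sketch (not part of the patch) that builds a peer configuration replicating all user tables except column family cf1 of table t1. The table and family names are invented; the expected results in the comments follow the javadoc and code added above.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
import org.apache.hadoop.hbase.util.Bytes;

public class NeedToReplicateSketch {
  public static void main(String[] args) {
    Map<TableName, List<String>> excludeTableCfs = new HashMap<>();
    excludeTableCfs.put(TableName.valueOf("t1"), Arrays.asList("cf1"));

    ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
      .setReplicateAllUserTables(true)
      .setExcludeTableCFsMap(excludeTableCfs)
      .build();

    TableName t1 = TableName.valueOf("t1");
    // Whole-table check: t1 is still replicated because only cf1 is excluded.
    System.out.println(peerConfig.needToReplicate(t1));                        // true
    // Family-aware checks introduced by this change:
    System.out.println(peerConfig.needToReplicate(t1, Bytes.toBytes("cf1")));  // false (excluded)
    System.out.println(peerConfig.needToReplicate(t1, Bytes.toBytes("cf2")));  // true
  }
}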

[hbase] branch branch-2.4 updated: HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

2021-02-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new db4f2bf  HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)
db4f2bf is described below

commit db4f2bfa25b0fc7e32939c07d0de08e50d966819
Author: XinSun 
AuthorDate: Fri Feb 26 09:50:23 2021 +0800

HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

Signed-off-by: Wellington Chevreuil 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  29 +-
 .../replication/TestReplicationPeerConfig.java | 366 -
 .../NamespaceTableCfWALEntryFilter.java|  84 +
 .../regionserver/ReplicationSource.java|  28 +-
 .../regionserver/TestBulkLoadReplication.java  |   8 +-
 .../TestBulkLoadReplicationHFileRefs.java  | 310 +
 6 files changed, 557 insertions(+), 268 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 030ae3d..534357a 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import 
org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
+
 /**
  * A configuration for the replication peer cluster.
  */
@@ -372,6 +374,19 @@ public class ReplicationPeerConfig {
* @return true if the table need replicate to the peer cluster
*/
   public boolean needToReplicate(TableName table) {
+return needToReplicate(table, null);
+  }
+
+  /**
+   * Decide whether the passed family of the table need replicate to the peer 
cluster according to
+   * this peer config.
+   * @param table name of the table
+   * @param family family name
+   * @return true if (the family of) the table need replicate to the peer 
cluster.
+   * If passed family is null, return true if any CFs of the table 
need replicate;
+   * If passed family is not null, return true if the passed family 
need replicate.
+   */
+  public boolean needToReplicate(TableName table, byte[] family) {
 String namespace = table.getNamespaceAsString();
 if (replicateAllUserTables) {
   // replicate all user tables, but filter by exclude namespaces and 
table-cfs config
@@ -383,9 +398,12 @@ public class ReplicationPeerConfig {
 return true;
   }
   Collection cfs = excludeTableCFsMap.get(table);
-  // if cfs is null or empty then we can make sure that we do not need to 
replicate this table,
+  // If cfs is null or empty then we can make sure that we do not need to 
replicate this table,
   // otherwise, we may still need to replicate the table but filter out 
some families.
-  return cfs != null && !cfs.isEmpty();
+  return cfs != null && !cfs.isEmpty()
+// If exclude-table-cfs contains passed family then we make sure that 
we do not need to
+// replicate this family.
+&& (family == null || !cfs.contains(Bytes.toString(family)));
 } else {
   // Not replicate all user tables, so filter by namespaces and table-cfs 
config
   if (namespaces == null && tableCFsMap == null) {
@@ -396,7 +414,12 @@ public class ReplicationPeerConfig {
   if (namespaces != null && namespaces.contains(namespace)) {
 return true;
   }
-  return tableCFsMap != null && tableCFsMap.containsKey(table);
+  // If table-cfs contains this table then we can make sure that we need 
replicate some CFs of
+  // this table. Further we need all CFs if tableCFsMap.get(table) is null 
or empty.
+  return tableCFsMap != null && tableCFsMap.containsKey(table)
+&& (family == null || CollectionUtils.isEmpty(tableCFsMap.get(table))
+// If table-cfs must contain passed family then we need to replicate 
this family.
+|| tableCFsMap.get(table).contains(Bytes.toString(family)));
 }
   }
 }
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index d67a3f8..ae2d426 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/

[hbase] branch branch-2 updated: HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

2021-02-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 328ff8c  HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)
328ff8c is described below

commit 328ff8c05a4ff8a0db3995c9a556acf0d3a10caa
Author: XinSun 
AuthorDate: Fri Feb 26 09:50:23 2021 +0800

HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

Signed-off-by: Wellington Chevreuil 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  29 +-
 .../replication/TestReplicationPeerConfig.java | 366 -
 .../NamespaceTableCfWALEntryFilter.java|  84 +
 .../regionserver/ReplicationSource.java|  28 +-
 .../regionserver/TestBulkLoadReplication.java  |   8 +-
 .../TestBulkLoadReplicationHFileRefs.java  | 310 +
 6 files changed, 557 insertions(+), 268 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 030ae3d..534357a 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import 
org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
+
 /**
  * A configuration for the replication peer cluster.
  */
@@ -372,6 +374,19 @@ public class ReplicationPeerConfig {
* @return true if the table need replicate to the peer cluster
*/
   public boolean needToReplicate(TableName table) {
+return needToReplicate(table, null);
+  }
+
+  /**
+   * Decide whether the passed family of the table need replicate to the peer 
cluster according to
+   * this peer config.
+   * @param table name of the table
+   * @param family family name
+   * @return true if (the family of) the table need replicate to the peer 
cluster.
+   * If passed family is null, return true if any CFs of the table 
need replicate;
+   * If passed family is not null, return true if the passed family 
need replicate.
+   */
+  public boolean needToReplicate(TableName table, byte[] family) {
 String namespace = table.getNamespaceAsString();
 if (replicateAllUserTables) {
   // replicate all user tables, but filter by exclude namespaces and 
table-cfs config
@@ -383,9 +398,12 @@ public class ReplicationPeerConfig {
 return true;
   }
   Collection cfs = excludeTableCFsMap.get(table);
-  // if cfs is null or empty then we can make sure that we do not need to 
replicate this table,
+  // If cfs is null or empty then we can make sure that we do not need to 
replicate this table,
   // otherwise, we may still need to replicate the table but filter out 
some families.
-  return cfs != null && !cfs.isEmpty();
+  return cfs != null && !cfs.isEmpty()
+// If exclude-table-cfs contains passed family then we make sure that 
we do not need to
+// replicate this family.
+&& (family == null || !cfs.contains(Bytes.toString(family)));
 } else {
   // Not replicate all user tables, so filter by namespaces and table-cfs 
config
   if (namespaces == null && tableCFsMap == null) {
@@ -396,7 +414,12 @@ public class ReplicationPeerConfig {
   if (namespaces != null && namespaces.contains(namespace)) {
 return true;
   }
-  return tableCFsMap != null && tableCFsMap.containsKey(table);
+  // If table-cfs contains this table then we can make sure that we need 
replicate some CFs of
+  // this table. Further we need all CFs if tableCFsMap.get(table) is null 
or empty.
+  return tableCFsMap != null && tableCFsMap.containsKey(table)
+&& (family == null || CollectionUtils.isEmpty(tableCFsMap.get(table))
+// If table-cfs must contain passed family then we need to replicate 
this family.
+|| tableCFsMap.get(table).contains(Bytes.toString(family)));
 }
   }
 }
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index d67a3f8..ae2d426 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/

[hbase] branch master updated: HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

2021-02-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 8d0de96  HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)
8d0de96 is described below

commit 8d0de969765c0b27991e85b09b52c240321ce881
Author: XinSun 
AuthorDate: Fri Feb 26 09:50:23 2021 +0800

HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

Signed-off-by: Wellington Chevreuil 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  29 +-
 .../replication/TestReplicationPeerConfig.java | 366 -
 .../NamespaceTableCfWALEntryFilter.java|  84 +
 .../regionserver/ReplicationSource.java|  28 +-
 .../regionserver/TestBulkLoadReplication.java  |   8 +-
 .../TestBulkLoadReplicationHFileRefs.java  | 310 +
 6 files changed, 557 insertions(+), 268 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 5ca5cef..3b03ae4 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import 
org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
+
 /**
  * A configuration for the replication peer cluster.
  */
@@ -301,6 +303,19 @@ public class ReplicationPeerConfig {
* @return true if the table need replicate to the peer cluster
*/
   public boolean needToReplicate(TableName table) {
+return needToReplicate(table, null);
+  }
+
+  /**
+   * Decide whether the passed family of the table need replicate to the peer 
cluster according to
+   * this peer config.
+   * @param table name of the table
+   * @param family family name
+   * @return true if (the family of) the table need replicate to the peer 
cluster.
+   * If passed family is null, return true if any CFs of the table 
need replicate;
+   * If passed family is not null, return true if the passed family 
need replicate.
+   */
+  public boolean needToReplicate(TableName table, byte[] family) {
 String namespace = table.getNamespaceAsString();
 if (replicateAllUserTables) {
   // replicate all user tables, but filter by exclude namespaces and 
table-cfs config
@@ -312,9 +327,12 @@ public class ReplicationPeerConfig {
 return true;
   }
   Collection cfs = excludeTableCFsMap.get(table);
-  // if cfs is null or empty then we can make sure that we do not need to 
replicate this table,
+  // If cfs is null or empty then we can make sure that we do not need to 
replicate this table,
   // otherwise, we may still need to replicate the table but filter out 
some families.
-  return cfs != null && !cfs.isEmpty();
+  return cfs != null && !cfs.isEmpty()
+// If exclude-table-cfs contains passed family then we make sure that 
we do not need to
+// replicate this family.
+&& (family == null || !cfs.contains(Bytes.toString(family)));
 } else {
   // Not replicate all user tables, so filter by namespaces and table-cfs 
config
   if (namespaces == null && tableCFsMap == null) {
@@ -325,7 +343,12 @@ public class ReplicationPeerConfig {
   if (namespaces != null && namespaces.contains(namespace)) {
 return true;
   }
-  return tableCFsMap != null && tableCFsMap.containsKey(table);
+  // If table-cfs contains this table then we can make sure that we need 
replicate some CFs of
+  // this table. Further we need all CFs if tableCFsMap.get(table) is null 
or empty.
+  return tableCFsMap != null && tableCFsMap.containsKey(table)
+&& (family == null || CollectionUtils.isEmpty(tableCFsMap.get(table))
+// If table-cfs must contain passed family then we need to replicate 
this family.
+|| tableCFsMap.get(table).contains(Bytes.toString(family)));
 }
   }
 }
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index d67a3f8..ae2d426 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/

[hbase] branch branch-2.2 updated: HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

2021-02-23 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new a43fd04  HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)
a43fd04 is described below

commit a43fd04b6afd474d5a0ca186a0501116d6074f37
Author: XinSun 
AuthorDate: Wed Feb 24 14:15:51 2021 +0800

HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

Signed-off-by: Duo Zhang 
---
 .../hadoop/hbase/client/TestFromClientSide5.java | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
index 6136f5d..7c9f3af 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
@@ -926,7 +926,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   numRecords++;
 }
 
-LOG.info("test data has " + numRecords + " records.");
+LOG.info("test data has {} records.", numRecords);
 
 // by default, scan metrics collection is turned off
 assertNull(scanner.getScanMetrics());
@@ -939,8 +939,6 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 for (Result result : scanner.next(numRecords - 1)) {
 }
-scanner.close();
-// closing the scanner will set the metrics.
 assertNotNull(scanner.getScanMetrics());
   }
 
@@ -955,7 +953,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
 }
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetrics.countOfRegions.get());
+  scanMetrics.countOfRegions.get());
   }
 
   // check byte counters
@@ -964,15 +962,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setCaching(1);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // check byte counters on a small scan
@@ -982,15 +979,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setSmall(true);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // now, test that the metrics are still collected even if you don't call close, but do
@@ -1020,8 +1016,10 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
 scannerWithClose.close();
 ScanMetrics scanMetricsWithClose = scannerWithClose.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetricsWithClose.countOfRegions.get());
+  scanMetricsWithClose.countOfRegions.get());
   }
+} finally {
+  TEST_UTIL.deleteTable(tableName);
 }
   }
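
For readers unfamiliar with the API the test exercises, client code can collect scan metrics roughly as in the sketch below. It is illustrative only: the Connection and table name are assumed to exist, and it simply mirrors the pattern the test now follows (enable metrics on the Scan, iterate, then read the counters from the scanner inside the try-with-resources block).

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.metrics.ScanMetrics;

public class ScanMetricsSketch {
  /** Prints region and byte counters for a full scan of the given (assumed existing) table. */
  public static void printScanMetrics(Connection connection, TableName tableName) throws IOException {
    Scan scan = new Scan();
    scan.setScanMetricsEnabled(true); // metrics collection is off by default, as the test asserts
    try (Table table = connection.getTable(tableName);
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {
        // consume every result so the per-region counters are complete
      }
      ScanMetrics metrics = scanner.getScanMetrics();
      System.out.println("regions scanned: " + metrics.countOfRegions.get());
      System.out.println("result bytes: " + metrics.countOfBytesInResults.get());
    }
  }
}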
 



[hbase] branch branch-2.3 updated: HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

2021-02-23 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new f6d0967  HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)
f6d0967 is described below

commit f6d09679dc2e724d810942f017d954c7aea3a4bc
Author: XinSun 
AuthorDate: Wed Feb 24 14:15:51 2021 +0800

HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

Signed-off-by: Duo Zhang 
---
 .../hadoop/hbase/client/TestFromClientSide5.java | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
index 0ae15ca..3ffb3da 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
@@ -939,7 +939,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   numRecords++;
 }
 
-LOG.info("test data has " + numRecords + " records.");
+LOG.info("test data has {} records.", numRecords);
 
 // by default, scan metrics collection is turned off
 assertNull(scanner.getScanMetrics());
@@ -952,8 +952,6 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 for (Result result : scanner.next(numRecords - 1)) {
 }
-scanner.close();
-// closing the scanner will set the metrics.
 assertNotNull(scanner.getScanMetrics());
   }
 
@@ -968,7 +966,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
 }
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetrics.countOfRegions.get());
+  scanMetrics.countOfRegions.get());
   }
 
   // check byte counters
@@ -977,15 +975,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setCaching(1);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // check byte counters on a small scan
@@ -995,15 +992,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setSmall(true);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // now, test that the metrics are still collected even if you don't call 
close, but do
@@ -1033,8 +1029,10 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
 scannerWithClose.close();
 ScanMetrics scanMetricsWithClose = scannerWithClose.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetricsWithClose.countOfRegions.get());
+  scanMetricsWithClose.countOfRegions.get());
   }
+} finally {
+  TEST_UTIL.deleteTable(tableName);
 }
   }
 



[hbase] branch branch-2.4 updated: HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

2021-02-23 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 9ebddee  HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)
9ebddee is described below

commit 9ebddeeda5a713a3aa6ee08c2a746153acadb8be
Author: XinSun 
AuthorDate: Wed Feb 24 14:15:51 2021 +0800

HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

Signed-off-by: Duo Zhang 
---
 .../hadoop/hbase/client/TestFromClientSide5.java | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
index a23234a..19256fd 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
@@ -968,7 +968,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   numRecords++;
 }
 
-LOG.info("test data has " + numRecords + " records.");
+LOG.info("test data has {} records.", numRecords);
 
 // by default, scan metrics collection is turned off
 assertNull(scanner.getScanMetrics());
@@ -981,8 +981,6 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 for (Result result : scanner.next(numRecords - 1)) {
 }
-scanner.close();
-// closing the scanner will set the metrics.
 assertNotNull(scanner.getScanMetrics());
   }
 
@@ -997,7 +995,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
 }
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetrics.countOfRegions.get());
+  scanMetrics.countOfRegions.get());
   }
 
   // check byte counters
@@ -1006,15 +1004,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setCaching(1);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // check byte counters on a small scan
@@ -1024,15 +1021,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setSmall(true);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // now, test that the metrics are still collected even if you don't call 
close, but do
@@ -1062,8 +1058,10 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
 scannerWithClose.close();
 ScanMetrics scanMetricsWithClose = scannerWithClose.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetricsWithClose.countOfRegions.get());
+  scanMetricsWithClose.countOfRegions.get());
   }
+} finally {
+  TEST_UTIL.deleteTable(tableName);
 }
   }
 



[hbase] branch branch-2 updated: HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

2021-02-23 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new e863df3  HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)
e863df3 is described below

commit e863df383b8228c87e1723b18f9bbe53b31b7f2a
Author: XinSun 
AuthorDate: Wed Feb 24 14:15:51 2021 +0800

HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

Signed-off-by: Duo Zhang 
---
 .../hadoop/hbase/client/TestFromClientSide5.java | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
index a23234a..19256fd 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
@@ -968,7 +968,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   numRecords++;
 }
 
-LOG.info("test data has " + numRecords + " records.");
+LOG.info("test data has {} records.", numRecords);
 
 // by default, scan metrics collection is turned off
 assertNull(scanner.getScanMetrics());
@@ -981,8 +981,6 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 for (Result result : scanner.next(numRecords - 1)) {
 }
-scanner.close();
-// closing the scanner will set the metrics.
 assertNotNull(scanner.getScanMetrics());
   }
 
@@ -997,7 +995,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
 }
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetrics.countOfRegions.get());
+  scanMetrics.countOfRegions.get());
   }
 
   // check byte counters
@@ -1006,15 +1004,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setCaching(1);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // check byte counters on a small scan
@@ -1024,15 +1021,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setSmall(true);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // now, test that the metrics are still collected even if you don't call 
close, but do
@@ -1062,8 +1058,10 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
 scannerWithClose.close();
 ScanMetrics scanMetricsWithClose = scannerWithClose.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetricsWithClose.countOfRegions.get());
+  scanMetricsWithClose.countOfRegions.get());
   }
+} finally {
+  TEST_UTIL.deleteTable(tableName);
 }
   }
 



[hbase] branch master updated: HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

2021-02-23 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 51a3d45  HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)
51a3d45 is described below

commit 51a3d45f9d7f9228b0c5b99014b397ac5562a1cb
Author: XinSun 
AuthorDate: Wed Feb 24 14:15:51 2021 +0800

HBASE-25598 TestFromClientSide5.testScanMetrics is flaky (#2977)

Signed-off-by: Duo Zhang 
---
 .../hadoop/hbase/client/TestFromClientSide5.java | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
index 7a1ab5a..fe73ab5 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide5.java
@@ -970,7 +970,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   numRecords++;
 }
 
-LOG.info("test data has " + numRecords + " records.");
+LOG.info("test data has {} records.", numRecords);
 
 // by default, scan metrics collection is turned off
 assertNull(scanner.getScanMetrics());
@@ -983,8 +983,6 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 for (Result result : scanner.next(numRecords - 1)) {
 }
-scanner.close();
-// closing the scanner will set the metrics.
 assertNotNull(scanner.getScanMetrics());
   }
 
@@ -999,7 +997,7 @@ public class TestFromClientSide5 extends FromClientSideBase 
{
 }
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetrics.countOfRegions.get());
+  scanMetrics.countOfRegions.get());
   }
 
   // check byte counters
@@ -1008,15 +1006,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setCaching(1);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // check byte counters on a small scan
@@ -1026,15 +1023,14 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
   scan2.setSmall(true);
   try (ResultScanner scanner = ht.getScanner(scan2)) {
 int numBytes = 0;
-for (Result result : scanner.next(1)) {
+for (Result result : scanner) {
   for (Cell cell : result.listCells()) {
 numBytes += PrivateCellUtil.estimatedSerializedSizeOf(cell);
   }
 }
-scanner.close();
 ScanMetrics scanMetrics = scanner.getScanMetrics();
 assertEquals("Did not count the result bytes", numBytes,
-scanMetrics.countOfBytesInResults.get());
+  scanMetrics.countOfBytesInResults.get());
   }
 
   // now, test that the metrics are still collected even if you don't call 
close, but do
@@ -1064,8 +1060,10 @@ public class TestFromClientSide5 extends 
FromClientSideBase {
 scannerWithClose.close();
 ScanMetrics scanMetricsWithClose = scannerWithClose.getScanMetrics();
 assertEquals("Did not access all the regions in the table", 
numOfRegions,
-scanMetricsWithClose.countOfRegions.get());
+  scanMetricsWithClose.countOfRegions.get());
   }
+} finally {
+  TEST_UTIL.deleteTable(tableName);
 }
   }
 



[hbase] 09/10: HBASE-25071 ReplicationServer support start ReplicationSource internal (#2452)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit c8d8782f17e66a83de23b94995deeca8471fe58b
Author: Guanghao Zhang 
AuthorDate: Mon Nov 9 11:46:02 2020 +0800

HBASE-25071 ReplicationServer support start ReplicationSource internal (#2452)

Signed-off-by: XinSun 
---
 .../server/replication/ReplicationServer.proto |  14 +-
 .../replication/ZKReplicationQueueStorage.java |   4 +-
 .../replication/ZKReplicationStorageBase.java  |   4 +
 .../hadoop/hbase/master/MasterRpcServices.java |   2 +-
 .../hadoop/hbase/regionserver/RSRpcServices.java   |   6 +-
 .../replication/HBaseReplicationEndpoint.java  |  14 +-
 .../hbase/replication/HReplicationServer.java  | 175 ++---
 .../replication/ReplicationServerRpcServices.java  |  15 ++
 .../regionserver/RecoveredReplicationSource.java   |   9 +-
 .../regionserver/ReplicationSource.java|  54 ++-
 .../regionserver/ReplicationSourceFactory.java |   2 +-
 .../regionserver/ReplicationSourceInterface.java   |   6 +-
 .../regionserver/ReplicationSourceManager.java |   9 +-
 .../regionserver/ReplicationSourceShipper.java |   4 +-
 .../hbase/replication/ReplicationSourceDummy.java  |   5 +-
 .../hbase/replication/TestReplicationBase.java |   2 +-
 .../replication/TestReplicationFetchServers.java   |  43 +++--
 ...nServer.java => TestReplicationServerSink.java} |  24 +--
 .../replication/TestReplicationServerSource.java   |  69 
 .../regionserver/TestReplicationSource.java|  20 +--
 .../regionserver/TestReplicationSourceManager.java |  16 +-
 21 files changed, 401 insertions(+), 96 deletions(-)

diff --git 
a/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
 
b/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
index ed334c4..925aed4 100644
--- 
a/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
+++ 
b/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
@@ -24,9 +24,21 @@ option java_generic_services = true;
 option java_generate_equals_and_hash = true;
 option optimize_for = SPEED;
 
+import "HBase.proto";
 import "server/region/Admin.proto";
 
+message StartReplicationSourceRequest {
+  required ServerName server_name = 1;
+  required string queue_id = 2;
+}
+
+message StartReplicationSourceResponse {
+}
+
 service ReplicationServerService {
   rpc ReplicateWALEntry(ReplicateWALEntryRequest)
 returns(ReplicateWALEntryResponse);
-}
\ No newline at end of file
+
+  rpc StartReplicationSource(StartReplicationSourceRequest)
+returns(StartReplicationSourceResponse);
+}
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
index 5c480ba..08ac142 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationQueueStorage.java
@@ -79,7 +79,7 @@ import 
org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUti
  * 
  */
 @InterfaceAudience.Private
-class ZKReplicationQueueStorage extends ZKReplicationStorageBase
+public class ZKReplicationQueueStorage extends ZKReplicationStorageBase
 implements ReplicationQueueStorage {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(ZKReplicationQueueStorage.class);
@@ -121,7 +121,7 @@ class ZKReplicationQueueStorage extends 
ZKReplicationStorageBase
 return ZNodePaths.joinZNode(queuesZNode, serverName.getServerName());
   }
 
-  private String getQueueNode(ServerName serverName, String queueId) {
+  public String getQueueNode(ServerName serverName, String queueId) {
 return ZNodePaths.joinZNode(getRsNode(serverName), queueId);
   }
 
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationStorageBase.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationStorageBase.java
index 596167f..a239bf8 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationStorageBase.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ZKReplicationStorageBase.java
@@ -74,4 +74,8 @@ public abstract class ZKReplicationStorageBase {
   throw new RuntimeException(e);
 }
   }
+
+  public ZKWatcher getZookeeper() {
+return this.zookeeper;
+  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index 93f368f..9b98190 100644
--- 
a/hbase-server/src/main/java/org/apa

[hbase] 08/10: HBASE-24999 Master manages ReplicationServers (#2579)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit bd13d14bb59e5c4d1c69c19316f4bceb54ef7703
Author: XinSun 
AuthorDate: Wed Oct 28 18:59:57 2020 +0800

HBASE-24999 Master manages ReplicationServers (#2579)

Signed-off-by: Guanghao Zhang 
---
 .../server/master/ReplicationServerStatus.proto|  34 
 .../org/apache/hadoop/hbase/master/HMaster.java|  10 +
 .../hadoop/hbase/master/MasterRpcServices.java |  37 +++-
 .../apache/hadoop/hbase/master/MasterServices.java |   5 +
 .../hbase/master/ReplicationServerManager.java | 204 
 .../replication/HBaseReplicationEndpoint.java  | 148 ++
 .../hbase/replication/HReplicationServer.java  | 214 -
 .../HBaseInterClusterReplicationEndpoint.java  |   1 -
 .../regionserver/ReplicationSyncUp.java|   4 +-
 .../hbase/master/MockNoopMasterServices.java   |   5 +
 .../hbase/replication/TestReplicationBase.java |   2 +
 .../hbase/replication/TestReplicationServer.java   |  57 +-
 12 files changed, 619 insertions(+), 102 deletions(-)

diff --git 
a/hbase-protocol-shaded/src/main/protobuf/server/master/ReplicationServerStatus.proto
 
b/hbase-protocol-shaded/src/main/protobuf/server/master/ReplicationServerStatus.proto
new file mode 100644
index 000..d39a043
--- /dev/null
+++ 
b/hbase-protocol-shaded/src/main/protobuf/server/master/ReplicationServerStatus.proto
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+syntax = "proto2";
+
+package hbase.pb;
+
+option java_package = "org.apache.hadoop.hbase.shaded.protobuf.generated";
+option java_outer_classname = "ReplicationServerStatusProtos";
+option java_generic_services = true;
+option java_generate_equals_and_hash = true;
+option optimize_for = SPEED;
+
+import "server/master/RegionServerStatus.proto";
+
+service ReplicationServerStatusService {
+
+  rpc ReplicationServerReport(RegionServerReportRequest)
+  returns(RegionServerReportResponse);
+}
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index 138a43f..5e8de56 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -303,6 +303,8 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   // manager of assignment nodes in zookeeper
   private AssignmentManager assignmentManager;
 
+  // server manager to deal with replication server info
+  private ReplicationServerManager replicationServerManager;
 
   /**
 * Cache for the meta region replica's locations. Also tracks their changes to avoid stale
@@ -866,6 +868,8 @@ public class HMaster extends HRegionServer implements 
MasterServices {
 .collect(Collectors.toList());
 this.assignmentManager.setupRIT(ritList);
 
+this.replicationServerManager = new ReplicationServerManager(this);
+
 // Start RegionServerTracker with listing of servers found with exiting SCPs -- these should
 // be registered in the deadServers set -- and with the list of servernames out on the
 // filesystem that COULD BE 'alive' (we'll schedule SCPs for each and let SCP figure it out).
@@ -1024,6 +1028,7 @@ public class HMaster extends HRegionServer implements 
MasterServices {
 this.hbckChore = new HbckChore(this);
 getChoreService().scheduleChore(hbckChore);
 this.serverManager.startChore();
+this.replicationServerManager.startChore();
 
 // Only for rolling upgrade, where we need to migrate the data in namespace table to meta table.
 if (!waitForNamespaceOnline()) {
@@ -1283,6 +1288,11 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   }
 
   @Override
+  public ReplicationServerManager getReplicationServerManager() {
+return this.replicationServerManager;
+  }
+
+  @Override
   public MasterFileSystem getMasterFileSystem() {
 return this.file
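
The manager added here follows the usual HMaster pattern: servers report in over an RPC and a periodic chore reaps the ones that stop reporting. The sketch below is NOT the ReplicationServerManager from this patch; it is a heavily simplified, generic illustration of that pattern built on the public ScheduledChore base class, with invented field names and an invented timeout.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.Stoppable;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

public class ExpireUnreportedServersChore extends ScheduledChore {
  private final Map<ServerName, Long> lastReportTime = new ConcurrentHashMap<>();
  private final long timeoutMs;

  public ExpireUnreportedServersChore(Stoppable stopper, int periodMs, long timeoutMs) {
    super("ExpireUnreportedServersChore", stopper, periodMs);
    this.timeoutMs = timeoutMs;
  }

  /** Called whenever a server report arrives, e.g. from a ReplicationServerReport RPC. */
  public void onReport(ServerName server) {
    lastReportTime.put(server, EnvironmentEdgeManager.currentTime());
  }

  @Override
  protected void chore() {
    long now = EnvironmentEdgeManager.currentTime();
    // Drop servers that have not reported within the timeout; a real manager would
    // also trigger whatever cleanup the dropped server requires.
    lastReportTime.entrySet().removeIf(entry -> now - entry.getValue() > timeoutMs);
  }
}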

[hbase] 10/10: HBASE-25113 [testing] HBaseCluster support ReplicationServer for UTs (#2662)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 1553b39b8135f0a8fffe1a9089b8bc4f51d9c945
Author: XinSun 
AuthorDate: Mon Nov 23 11:01:55 2020 +0800

HBASE-25113 [testing] HBaseCluster support ReplicationServer for UTs (#2662)

Signed-off-by: Guanghao Zhang 
---
 .../org/apache/hadoop/hbase/LocalHBaseCluster.java | 63 ++-
 .../hbase/replication/HReplicationServer.java  | 13 
 .../apache/hadoop/hbase/util/JVMClusterUtil.java   | 57 +-
 .../apache/hadoop/hbase/HBaseTestingUtility.java   |  8 +--
 .../org/apache/hadoop/hbase/MiniHBaseCluster.java  | 70 ++
 .../hadoop/hbase/StartMiniClusterOption.java   | 24 ++--
 .../replication/TestReplicationServerSink.java | 44 +++---
 hbase-server/src/test/resources/hbase-site.xml |  7 +++
 8 files changed, 242 insertions(+), 44 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
index f4847b9..24b658f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
@@ -32,9 +32,11 @@ import org.apache.hadoop.hbase.client.TableDescriptor;
 import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.replication.HReplicationServer;
 import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.hbase.util.JVMClusterUtil;
 import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;
+import org.apache.hadoop.hbase.util.JVMClusterUtil.ReplicationServerThread;
 import org.apache.hadoop.hbase.util.Threads;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
@@ -60,7 +62,10 @@ import org.slf4j.LoggerFactory;
 public class LocalHBaseCluster {
   private static final Logger LOG = 
LoggerFactory.getLogger(LocalHBaseCluster.class);
   private final List masterThreads = new 
CopyOnWriteArrayList<>();
-  private final List regionThreads = new 
CopyOnWriteArrayList<>();
+  private final List regionThreads =
+  new CopyOnWriteArrayList<>();
+  private final List 
replicationThreads =
+  new CopyOnWriteArrayList<>();
   private final static int DEFAULT_NO = 1;
   /** local mode */
   public static final String LOCAL = "local";
@@ -259,6 +264,26 @@ public class LocalHBaseCluster {
 });
   }
 
+  @SuppressWarnings("unchecked")
+  public JVMClusterUtil.ReplicationServerThread addReplicationServer(
+  Configuration config, final int index) throws IOException {
+// Create each replication server with its own Configuration instance so each has
+// its Connection instance rather than share (see HBASE_INSTANCES down in
+// the guts of ConnectionManager).
+JVMClusterUtil.ReplicationServerThread rst =
+JVMClusterUtil.createReplicationServerThread(config, index);
+this.replicationThreads.add(rst);
+return rst;
+  }
+
+  public JVMClusterUtil.ReplicationServerThread addReplicationServer(
+  final Configuration config, final int index, User user)
+  throws IOException, InterruptedException {
+return user.runAs(
+(PrivilegedExceptionAction) () -> 
addReplicationServer(config,
+index));
+  }
+
   /**
* @param serverNumber
* @return region server
@@ -290,6 +315,40 @@ public class LocalHBaseCluster {
   }
 
   /**
+   * @param serverNumber replication server number
+   * @return replication server
+   */
+  public HReplicationServer getReplicationServer(int serverNumber) {
+return replicationThreads.get(serverNumber).getReplicationServer();
+  }
+
+  /**
+   * @return Read-only list of replication server threads.
+   */
+  public List getReplicationServers() {
+return Collections.unmodifiableList(this.replicationThreads);
+  }
+
+  /**
+   * @return List of running servers (Some servers may have been killed or
+   *   aborted during lifetime of cluster; these servers are not included in 
this
+   *   list).
+   */
+  public List getLiveReplicationServers() {
+List liveServers = new ArrayList<>();
+List list = getReplicationServers();
+for (JVMClusterUtil.ReplicationServerThread rst: list) {
+  if (rst.isAlive()) {
+liveServers.add(rst);
+  }
+  else {
+LOG.info("Not alive {}", rst.getName());
+  }
+}
+return liveServers;
+  }
+
+  /**
* @return the Configuration used by this LocalHBaseCluster
*/
   public Configuration getConfiguration() {
@@ -430,7 +489,7 @@ public class LocalHBaseCluster {
* Start the cluster.
*/
   public 

[hbase] 07/10: HBASE-24684 Fetch ReplicationSink servers list from HMaster instead o… (#2077)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 29adcce81e7783a39597b942e3c78977b2871cb9
Author: XinSun 
AuthorDate: Sun Sep 20 10:54:43 2020 +0800

HBASE-24684 Fetch ReplicationSink servers list from HMaster instead o… 
(#2077)

Signed-off-by: Wellington Chevreuil 
---
 .../src/main/protobuf/server/master/Master.proto   |  12 +-
 .../hadoop/hbase/coprocessor/MasterObserver.java   |  16 +++
 .../org/apache/hadoop/hbase/master/HMaster.java|   5 +
 .../hadoop/hbase/master/MasterCoprocessorHost.java |  18 +++
 .../hadoop/hbase/master/MasterRpcServices.java |  21 
 .../apache/hadoop/hbase/master/MasterServices.java |   6 +
 .../replication/HBaseReplicationEndpoint.java  | 140 +++--
 .../hbase/master/MockNoopMasterServices.java   |   5 +
 .../replication/TestHBaseReplicationEndpoint.java  |   5 +
 .../replication/TestReplicationFetchServers.java   | 106 
 .../TestGlobalReplicationThrottler.java|   4 +
 ...stRegionReplicaReplicationEndpointNoMaster.java |   2 +
 12 files changed, 327 insertions(+), 13 deletions(-)

diff --git a/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto 
b/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
index 118ce77..7dec566 100644
--- a/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/server/master/Master.proto
@@ -717,6 +717,13 @@ message BalancerDecisionsResponse {
   repeated BalancerDecision balancer_decision = 1;
 }
 
+message ListReplicationSinkServersRequest {
+}
+
+message ListReplicationSinkServersResponse {
+  repeated ServerName server_name = 1;
+}
+
 service MasterService {
   /** Used by the client to get the number of regions that have received the updated schema */
   rpc GetSchemaAlterStatus(GetSchemaAlterStatusRequest)
@@ -1146,10 +1153,13 @@ service MasterService {
 returns (RenameRSGroupResponse);
 
   rpc UpdateRSGroupConfig(UpdateRSGroupConfigRequest)
-  returns (UpdateRSGroupConfigResponse);
+returns (UpdateRSGroupConfigResponse);
 
   rpc GetLogEntries(LogRequest)
 returns(LogEntry);
+
+  rpc ListReplicationSinkServers(ListReplicationSinkServersRequest)
+returns (ListReplicationSinkServersResponse);
 }
 
 // HBCK Service definitions.
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
index ac35caa..ec009cc 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
@@ -1782,4 +1782,20 @@ public interface MasterObserver {
   default void 
postHasUserPermissions(ObserverContext ctx,
   String userName, List permissions) throws IOException {
   }
+
+  /**
+   * Called before getting servers for replication sink.
+   * @param ctx the coprocessor instance's environment
+   */
+  default void preListReplicationSinkServers(ObserverContext ctx)
+throws IOException {
+  }
+
+  /**
+   * Called after getting servers for replication sink.
+   * @param ctx the coprocessor instance's environment
+   */
+  default void postListReplicationSinkServers(ObserverContext ctx)
+throws IOException {
+  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index 74f199c..138a43f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -3782,4 +3782,9 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   public MetaLocationSyncer getMetaLocationSyncer() {
 return metaLocationSyncer;
   }
+
+  @Override
+  public List listReplicationSinkServers() throws IOException {
+return this.serverManager.getOnlineServersList();
+  }
 }
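
The MasterRpcServices side of the patch (21 lines in the diffstat) is not shown above; the following is a minimal sketch of how the endpoint could glue together the pieces that are shown, namely the MasterCoprocessorHost hooks and HMaster#listReplicationSinkServers(). Field and helper names here are assumptions, not the committed code.

    // Sketch only: 'master' is the HMaster this RPC service fronts (assumed field name).
    public ListReplicationSinkServersResponse listReplicationSinkServers(RpcController controller,
        ListReplicationSinkServersRequest request) throws ServiceException {
      ListReplicationSinkServersResponse.Builder builder =
        ListReplicationSinkServersResponse.newBuilder();
      try {
        master.getMasterCoprocessorHost().preListReplicationSinkServers();
        for (ServerName sn : master.listReplicationSinkServers()) {
          builder.addServerName(ProtobufUtil.toServerName(sn));
        }
        master.getMasterCoprocessorHost().postListReplicationSinkServers();
      } catch (IOException e) {
        throw new ServiceException(e);
      }
      return builder.build();
    }
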
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
index 01d1a62..f775eba 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
@@ -2038,4 +2038,22 @@ public class MasterCoprocessorHost
   }
 });
   }
+
+  public void preListReplicationSinkServers() throws IOException {
+execOperation(coprocEnvironments.isEmpty() ? null : new 
MasterObserverOperation() {
+  @Override
+  public void call(MasterObserver observer) throws IOException {
+observer.preListReplicationSinkServers(this);
+  }
+});
+  }
+
+  public void

[hbase] 04/10: HBASE-24683 Add a basic ReplicationServer which only implement ReplicationSink Service (#2111)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit c9a01b2dab47bfa852e0bb6c66cb48c4d4e6f2bc
Author: XinSun 
AuthorDate: Fri Sep 4 18:53:46 2020 +0800

HBASE-24683 Add a basic ReplicationServer which only implement ReplicationSink Service (#2111)

Signed-off-by: Guanghao Zhang 
---
 .../java/org/apache/hadoop/hbase/util/DNS.java |   3 +-
 .../hbase/replication/HReplicationServer.java  | 391 
 .../replication/ReplicationServerRpcServices.java  | 516 +
 .../hbase/replication/TestReplicationServer.java   | 151 ++
 4 files changed, 1060 insertions(+), 1 deletion(-)

diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java
index 5c23ddc..cf94b2a 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java
@@ -64,7 +64,8 @@ public final class DNS {
 
   public enum ServerType {
 MASTER("master"),
-REGIONSERVER("regionserver");
+REGIONSERVER("regionserver"),
+REPLICATIONSERVER("replicationserver");
 
 private String name;
 ServerType(String name) {
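
The new enum constant lets the replication server resolve its hostname the same way masters and region servers do. A small sketch, assuming the existing DNS#getHostname(Configuration, ServerType) helper in hbase-common; the hbase.replicationserver.dns.* keys it would consult are an assumption based on how that helper derives key names from the server type.

    // Sketch: resolve this process's hostname for registration/RPC binding,
    // using org.apache.hadoop.hbase.util.DNS and the new REPLICATIONSERVER type.
    String resolveHostname(Configuration conf) throws UnknownHostException {
      return DNS.getHostname(conf, DNS.ServerType.REPLICATIONSERVER);
    }
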
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HReplicationServer.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HReplicationServer.java
new file mode 100644
index 000..31dec0c
--- /dev/null
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HReplicationServer.java
@@ -0,0 +1,391 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ChoreService;
+import org.apache.hadoop.hbase.CoordinatedStateManager;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.AsyncClusterConnection;
+import org.apache.hadoop.hbase.client.ClusterConnectionFactory;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.log.HBaseMarkers;
+import org.apache.hadoop.hbase.regionserver.ReplicationService;
+import org.apache.hadoop.hbase.regionserver.ReplicationSinkService;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.security.UserProvider;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.hadoop.hbase.util.Sleeper;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * HReplicationServer is responsible for all replication stuff. It checks in with
+ * the HMaster. There are many HReplicationServers in a single HBase deployment.
+ */
+@InterfaceAudience.Private
+@SuppressWarnings({ "deprecation"})
+public class HReplicationServer extends Thread implements Server {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(HReplicationServer.class);
+
+  /** replication server process name */
+  public static final String REPLICATION_SERVER = "replicationserver";
+
+  /**
+   * This servers start code.
+   */
+  protected final long startCode;
+
+  private volatile boolean stopped = false;
+
+  // Go down hard. Used if file system becomes unavailable and also in
+  // debugging and unit tests.
+  private AtomicBoolean abortRequested;
+
+  // flag set after we're done setting up server threads
+  final AtomicBoolean online = new AtomicBoolean(false);
+
+  /**
+   * The server name the Master sees us as. It's made from the hostname the
+   * master passes us, port, and server start code. Gets set after registration
+   * against Master.

[hbase] 06/10: HBASE-24998 Introduce a ReplicationSourceOverallController interface and decouple ReplicationSourceManager and ReplicationSource (#2364)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit b86d97cc0c9bb0ca31a44499094836f90a95d7c7
Author: Guanghao Zhang 
AuthorDate: Sun Sep 20 09:02:53 2020 +0800

HBASE-24998 Introduce a ReplicationSourceOverallController interface and decouple ReplicationSourceManager and ReplicationSource (#2364)

Signed-off-by: meiyi 
---
 .../hadoop/hbase/client/AsyncConnectionImpl.java   |   4 +-
 .../java/org/apache/hadoop/hbase/HConstants.java   |   2 +
 .../hbase/replication/ReplicationListener.java |   2 +-
 .../hadoop/hbase/regionserver/RSRpcServices.java   |   2 +-
 .../replication/ReplicationSourceController.java   |  31 --
 .../regionserver/RecoveredReplicationSource.java   |  18 ++--
 .../regionserver/ReplicationSource.java|  35 +++
 .../regionserver/ReplicationSourceInterface.java   |  25 +++--
 .../regionserver/ReplicationSourceManager.java | 116 -
 .../regionserver/ReplicationSourceWALReader.java   |  13 +--
 .../hbase/replication/ReplicationSourceDummy.java  |  21 ++--
 .../regionserver/TestReplicationSourceManager.java |  11 +-
 .../regionserver/TestWALEntryStream.java   |  15 +--
 13 files changed, 162 insertions(+), 133 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
index 5a332d8..eb33857 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
@@ -296,8 +296,8 @@ class AsyncConnectionImpl implements AsyncConnection {
   ReplicationServerService.Interface getReplicationServerStub(ServerName 
serverName)
   throws IOException {
 return ConcurrentMapUtils.computeIfAbsentEx(replStubs,
-getStubKey(ReplicationServerService.Interface.class.getSimpleName(), 
serverName,
-hostnameCanChange), () -> createReplicationServerStub(serverName));
+getStubKey(ReplicationServerService.Interface.class.getSimpleName(), 
serverName),
+  () -> createReplicationServerStub(serverName));
   }
 
   CompletableFuture getMasterStub() {
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
index 48fa00c..571aea8 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -998,6 +998,8 @@ public final class HConstants {
   /*
* cluster replication constants.
*/
+  public static final String REPLICATION_OFFLOAD_ENABLE_KEY = 
"hbase.replication.offload.enabled";
+  public static final boolean REPLICATION_OFFLOAD_ENABLE_DEFAULT = false;
   public static final String
   REPLICATION_SOURCE_SERVICE_CLASSNAME = 
"hbase.replication.source.service";
   public static final String REPLICATION_SERVICE_CLASSNAME_DEFAULT =
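
The two new constants gate the replication-offload mode this feature branch is building towards; it defaults to off per REPLICATION_OFFLOAD_ENABLE_DEFAULT above. A minimal sketch of flipping it on programmatically; the same key can equally be set in hbase-site.xml.

    // Sketch only: enable the (default-off) replication offload flag introduced above.
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean(HConstants.REPLICATION_OFFLOAD_ENABLE_KEY, true);
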
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationListener.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationListener.java
index f040bf9..6ecbb46 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationListener.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationListener.java
@@ -33,5 +33,5 @@ public interface ReplicationListener {
* A region server has been removed from the local cluster
* @param regionServer the removed region server
*/
-  public void regionServerRemoved(String regionServer);
+  void regionServerRemoved(String regionServer);
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index a4a735b..90ed0fa 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -271,7 +271,7 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDe
 @SuppressWarnings("deprecation")
 public class RSRpcServices implements HBaseRPCErrorHandler,
 AdminService.BlockingInterface, ClientService.BlockingInterface, 
PriorityFunction,
-ConfigurationObserver {
+ConfigurationObserver, ReplicationServerService.BlockingInterface {
   private static final Logger LOG = 
LoggerFactory.getLogger(RSRpcServices.class);
 
   /** RPC scheduler to use for the region server. */
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationListener.java
 
b/hbase-server/src/main/java

[hbase] 02/10: HBASE-24681 Remove the cache walsById/walsByIdRecoveredQueues from ReplicationSourceManager (#2019)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit b60ec36c292cb7732422432e9d6215ef9e01bfdb
Author: Guanghao Zhang 
AuthorDate: Mon Jul 13 17:35:32 2020 +0800

HBASE-24681 Remove the cache walsById/walsByIdRecoveredQueues from ReplicationSourceManager (#2019)

Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSourceManager.java | 214 ++---
 1 file changed, 63 insertions(+), 151 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index 21979bb..00ee6a5 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -1,4 +1,4 @@
-/*
+/**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -98,30 +98,6 @@ import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFacto
  * No need synchronized on {@link #sources}. {@link #sources} is a 
ConcurrentHashMap and there
  * is a Lock for peer id in {@link PeerProcedureHandlerImpl}. So there is no 
race for peer
  * operations.
- * Need synchronized on {@link #walsById}. There are four methods which 
modify it,
- * {@link #addPeer(String)}, {@link #removePeer(String)},
- * {@link #cleanOldLogs(String, boolean, ReplicationSourceInterface)} and 
{@link #preLogRoll(Path)}.
- * {@link #walsById} is a ConcurrentHashMap and there is a Lock for peer id in
- * {@link PeerProcedureHandlerImpl}. So there is no race between {@link 
#addPeer(String)} and
- * {@link #removePeer(String)}. {@link #cleanOldLogs(String, boolean, 
ReplicationSourceInterface)}
- * is called by {@link ReplicationSourceInterface}. So no race with {@link 
#addPeer(String)}.
- * {@link #removePeer(String)} will terminate the {@link 
ReplicationSourceInterface} firstly, then
- * remove the wals from {@link #walsById}. So no race with {@link 
#removePeer(String)}. The only
- * case need synchronized is {@link #cleanOldLogs(String, boolean, 
ReplicationSourceInterface)} and
- * {@link #preLogRoll(Path)}.
- * No need synchronized on {@link #walsByIdRecoveredQueues}. There are 
three methods which
- * modify it, {@link #removePeer(String)} ,
- * {@link #cleanOldLogs(String, boolean, ReplicationSourceInterface)} and
- * {@link ReplicationSourceManager.NodeFailoverWorker#run()}.
- * {@link #cleanOldLogs(String, boolean, ReplicationSourceInterface)} is 
called by
- * {@link ReplicationSourceInterface}. {@link #removePeer(String)} will 
terminate the
- * {@link ReplicationSourceInterface} firstly, then remove the wals from
- * {@link #walsByIdRecoveredQueues}. And {@link 
ReplicationSourceManager.NodeFailoverWorker#run()}
- * will add the wals to {@link #walsByIdRecoveredQueues} firstly, then start 
up a
- * {@link ReplicationSourceInterface}. So there is no race here. For
- * {@link ReplicationSourceManager.NodeFailoverWorker#run()} and {@link 
#removePeer(String)}, there
- * is already synchronized on {@link #oldsources}. So no need synchronized on
- * {@link #walsByIdRecoveredQueues}.
  * Need synchronized on {@link #latestPaths} to avoid the new open source 
miss new log.
  * Need synchronized on {@link #oldsources} to avoid adding recovered 
source for the
  * to-be-removed peer.
@@ -134,15 +110,7 @@ public class ReplicationSourceManager implements 
ReplicationListener {
   private final ConcurrentMap sources;
   // List of all the sources we got from died RSs
   private final List oldsources;
-
-  /**
-   * Storage for queues that need persistance; e.g. Replication state so can 
be recovered
-   * after a crash. queueStorage upkeep is spread about this class and passed
-   * to ReplicationSource instances for these to do updates themselves. Not 
all ReplicationSource
-   * instances keep state.
-   */
   private final ReplicationQueueStorage queueStorage;
-
   private final ReplicationTracker replicationTracker;
   private final ReplicationPeers replicationPeers;
   // UUID for this cluster
@@ -150,15 +118,6 @@ public class ReplicationSourceManager implements 
ReplicationListener {
   // All about stopping
   private final Server server;
 
-  // All logs we are currently tracking
-  // Index structure of the map is: queue_id->logPrefix/logGroup->logs
-  // For normal replication source, the peer id is same with the queue id
-  private final ConcurrentMap>> 
walsById;
-  // Logs for recovered sources we are currently tracking
-  // the map is: queue_id->logPrefix/logGroup->logs
-  // For recovered sour

[hbase] 03/10: HBASE-24735: Refactor ReplicationSourceManager: move logPositionAndCleanOldLogs/cleanUpHFileRefs to ReplicationSource inside (#2064)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit d4bcf8d9357e3b4e9984255742505649d892681b
Author: Guanghao Zhang 
AuthorDate: Tue Aug 11 20:07:09 2020 +0800

HBASE-24735: Refactor ReplicationSourceManager: move logPositionAndCleanOldLogs/cleanUpHFileRefs to ReplicationSource inside (#2064)

Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/CatalogReplicationSource.java |  13 +-
 .../regionserver/RecoveredReplicationSource.java   |  18 ++-
 .../regionserver/ReplicationSource.java| 166 ++---
 .../regionserver/ReplicationSourceInterface.java   |  39 +++--
 .../regionserver/ReplicationSourceManager.java | 147 ++
 .../regionserver/ReplicationSourceShipper.java |  21 +--
 .../regionserver/ReplicationSourceWALReader.java   |  16 +-
 .../replication/regionserver/WALEntryBatch.java|   2 +-
 .../hbase/replication/ReplicationSourceDummy.java  |  24 +--
 .../regionserver/TestReplicationSource.java|  16 +-
 .../regionserver/TestReplicationSourceManager.java |  50 ---
 .../regionserver/TestWALEntryStream.java   |  20 ++-
 12 files changed, 276 insertions(+), 256 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/CatalogReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/CatalogReplicationSource.java
index 8cb7860..15370e0 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/CatalogReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/CatalogReplicationSource.java
@@ -35,7 +35,18 @@ class CatalogReplicationSource extends ReplicationSource {
   }
 
   @Override
-  public void logPositionAndCleanOldLogs(WALEntryBatch entryBatch) {
+  public void setWALPosition(WALEntryBatch entryBatch) {
+// Noop. This CatalogReplicationSource implementation does not persist state to backing storage
+// nor does it keep its WALs in a general map up in ReplicationSourceManager --
+// CatalogReplicationSource is used by the Catalog Read Replica feature which resets everytime
+// the WAL source process crashes. Skip calling through to the default implementation.
+// See "4.1 Skip maintaining zookeeper replication queue (offsets/WALs)" in the
+// design doc attached to HBASE-18070 'Enable memstore replication for meta replica for detail'
+// for background on why no need to keep WAL state.
+  }
+
+  @Override
+  public void cleanOldWALs(String log, boolean inclusive) {
 // Noop. This CatalogReplicationSource implementation does not persist state to backing storage
 // nor does it keep its WALs in a general map up in ReplicationSourceManager --
 // CatalogReplicationSource is used by the Catalog Read Replica feature which resets everytime
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSource.java
index 526c3e3..abbc046 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RecoveredReplicationSource.java
@@ -21,6 +21,7 @@ import java.io.IOException;
 import java.util.List;
 import java.util.UUID;
 import java.util.concurrent.PriorityBlockingQueue;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -44,15 +45,18 @@ public class RecoveredReplicationSource extends 
ReplicationSource {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(RecoveredReplicationSource.class);
 
+  private Path walDir;
+
   private String actualPeerId;
 
   @Override
-  public void init(Configuration conf, FileSystem fs, ReplicationSourceManager 
manager,
-  ReplicationQueueStorage queueStorage, ReplicationPeer replicationPeer, 
Server server,
-  String peerClusterZnode, UUID clusterId, WALFileLengthProvider 
walFileLengthProvider,
-  MetricsSource metrics) throws IOException {
-super.init(conf, fs, manager, queueStorage, replicationPeer, server, 
peerClusterZnode,
+  public void init(Configuration conf, FileSystem fs, Path walDir, 
ReplicationSourceManager manager,
+ReplicationQueueStorage queueStorage, ReplicationPeer replicationPeer, 
Server server,
+String peerClusterZnode, UUID clusterId, WALFileLengthProvider 
walFileLengthProvider,
+MetricsSource metrics) throws IOException {
+super.init(conf, fs, walDir, manager, queueStorage, replicationPeer, 
server, peerClusterZnode,
   clusterId, walFileLengthProvider, metrics);
+this.walDi

[hbase] 01/10: HBASE-24682 Refactor ReplicationSource#addHFileRefs method: move it to ReplicationSourceManager (#2020)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit a62a4b1a3e3fa0def1ec0c211cf703f954f9bbe2
Author: Guanghao Zhang 
AuthorDate: Wed Jul 8 14:29:08 2020 +0800

HBASE-24682 Refactor ReplicationSource#addHFileRefs method: move it to ReplicationSourceManager (#2020)

Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSource.java| 38 +
 .../regionserver/ReplicationSourceInterface.java   | 14 ---
 .../regionserver/ReplicationSourceManager.java | 47 +-
 .../hbase/replication/ReplicationSourceDummy.java  |  9 +
 4 files changed, 48 insertions(+), 60 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index 3272cf1..fdf7d89 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -26,7 +26,6 @@ import java.util.Collection;
 import java.util.Collections;
 import java.util.List;
 import java.util.Map;
-import java.util.Set;
 import java.util.TreeMap;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
@@ -35,6 +34,7 @@ import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.function.Predicate;
+
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
@@ -44,21 +44,17 @@ import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.Server;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.TableDescriptors;
-import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.regionserver.HRegionServer;
 import org.apache.hadoop.hbase.regionserver.RSRpcServices;
 import org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost;
 import org.apache.hadoop.hbase.replication.ChainWALEntryFilter;
 import org.apache.hadoop.hbase.replication.ClusterMarkingEntryFilter;
 import org.apache.hadoop.hbase.replication.ReplicationEndpoint;
-import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeer;
 import org.apache.hadoop.hbase.replication.ReplicationQueueInfo;
 import org.apache.hadoop.hbase.replication.ReplicationQueueStorage;
 import org.apache.hadoop.hbase.replication.SystemTableWALEntryFilter;
 import org.apache.hadoop.hbase.replication.WALEntryFilter;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.Threads;
 import org.apache.hadoop.hbase.wal.AbstractFSWALProvider;
 import org.apache.hadoop.hbase.wal.WAL.Entry;
@@ -260,38 +256,6 @@ public class ReplicationSource implements 
ReplicationSourceInterface {
 }
   }
 
-  @Override
-  public void addHFileRefs(TableName tableName, byte[] family, List> pairs)
-  throws ReplicationException {
-String peerId = replicationPeer.getId();
-Set namespaces = replicationPeer.getNamespaces();
-Map> tableCFMap = replicationPeer.getTableCFs();
-if (tableCFMap != null) { // All peers with TableCFs
-  List tableCfs = tableCFMap.get(tableName);
-  if (tableCFMap.containsKey(tableName)
-  && (tableCfs == null || tableCfs.contains(Bytes.toString(family {
-this.queueStorage.addHFileRefs(peerId, pairs);
-metrics.incrSizeOfHFileRefsQueue(pairs.size());
-  } else {
-LOG.debug("HFiles will not be replicated belonging to the table {} 
family {} to peer id {}",
-tableName, Bytes.toString(family), peerId);
-  }
-} else if (namespaces != null) { // Only for set NAMESPACES peers
-  if (namespaces.contains(tableName.getNamespaceAsString())) {
-this.queueStorage.addHFileRefs(peerId, pairs);
-metrics.incrSizeOfHFileRefsQueue(pairs.size());
-  } else {
-LOG.debug("HFiles will not be replicated belonging to the table {} 
family {} to peer id {}",
-tableName, Bytes.toString(family), peerId);
-  }
-} else {
-  // user has explicitly not defined any table cfs for replication, means 
replicate all the
-  // data
-  this.queueStorage.addHFileRefs(peerId, pairs);
-  metrics.incrSizeOfHFileRefsQueue(pairs.size());
-}
-  }
-
   private ReplicationEndpoint createReplicationEndpoint()
   throws InstantiationException, IllegalAccessException, 
ClassNotFoundException, IOException {
 RegionServerCoprocessorHost rsServerHost = null;
diff --git 
a/hbase-server/src/main/java/o

[hbase] 05/10: HBASE-24982 Disassemble the method replicateWALEntry from AdminService to a new interface ReplicationServerService (#2360)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit 1f11ee4ad71b032555b2e46f56d744ccf621fa61
Author: XinSun 
AuthorDate: Wed Sep 9 15:00:37 2020 +0800

HBASE-24982 Disassemble the method replicateWALEntry from AdminService to a new interface ReplicationServerService (#2360)

Signed-off-by: Wellington Chevreuil 
---
 .../hadoop/hbase/client/AsyncConnectionImpl.java   |  16 ++
 .../server/replication/ReplicationServer.proto |  32 
 .../hadoop/hbase/replication/ReplicationUtils.java |  19 ++
 .../hbase/client/AsyncClusterConnection.java   |   5 +
 .../hbase/client/AsyncClusterConnectionImpl.java   |   5 +
 .../hbase/client/AsyncReplicationServerAdmin.java  |  80 +
 .../hbase/protobuf/ReplicationProtobufUtil.java|  18 ++
 .../hadoop/hbase/regionserver/RSRpcServices.java   |   8 +-
 .../replication/HBaseReplicationEndpoint.java  |  57 +-
 .../replication/ReplicationServerRpcServices.java  | 200 +
 .../HBaseInterClusterReplicationEndpoint.java  |   7 +-
 .../hbase/client/DummyAsyncClusterConnection.java  |   5 +
 .../replication/TestHBaseReplicationEndpoint.java  |  17 +-
 .../hbase/replication/TestReplicationServer.java   |  43 -
 14 files changed, 288 insertions(+), 224 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
index 8a1ac5a..5a332d8 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
@@ -65,6 +65,7 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminServic
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationServerProtos.ReplicationServerService;
 
 /**
  * The implementation of AsyncConnection.
@@ -101,6 +102,8 @@ class AsyncConnectionImpl implements AsyncConnection {
 
   private final ConcurrentMap rsStubs = new 
ConcurrentHashMap<>();
   private final ConcurrentMap adminSubs = new 
ConcurrentHashMap<>();
+  private final ConcurrentMap 
replStubs =
+  new ConcurrentHashMap<>();
 
   private final AtomicReference masterStub = new 
AtomicReference<>();
 
@@ -278,12 +281,25 @@ class AsyncConnectionImpl implements AsyncConnection {
 return AdminService.newStub(rpcClient.createRpcChannel(serverName, user, 
rpcTimeout));
   }
 
+  private ReplicationServerService.Interface 
createReplicationServerStub(ServerName serverName)
+  throws IOException {
+return ReplicationServerService.newStub(
+rpcClient.createRpcChannel(serverName, user, rpcTimeout));
+  }
+
   AdminService.Interface getAdminStub(ServerName serverName) throws 
IOException {
 return ConcurrentMapUtils.computeIfAbsentEx(adminSubs,
   getStubKey(AdminService.getDescriptor().getName(), serverName),
   () -> createAdminServerStub(serverName));
   }
 
+  ReplicationServerService.Interface getReplicationServerStub(ServerName 
serverName)
+  throws IOException {
+return ConcurrentMapUtils.computeIfAbsentEx(replStubs,
+getStubKey(ReplicationServerService.Interface.class.getSimpleName(), 
serverName,
+hostnameCanChange), () -> createReplicationServerStub(serverName));
+  }
+
   CompletableFuture getMasterStub() {
 return ConnectionUtils.getOrFetch(masterStub, masterStubMakeFuture, false, 
() -> {
   CompletableFuture future = new 
CompletableFuture<>();
diff --git 
a/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
 
b/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
new file mode 100644
index 000..ed334c4
--- /dev/null
+++ 
b/hbase-protocol-shaded/src/main/protobuf/server/replication/ReplicationServer.proto
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS 

[hbase] branch HBASE-24666 updated (6cdd4f3 -> 1553b39)

2021-02-22 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a change to branch HBASE-24666
in repository https://gitbox.apache.org/repos/asf/hbase.git.


 omit 6cdd4f3  HBASE-25113 [testing] HBaseCluster support ReplicationServer for UTs (#2662)
 omit 34e49bc  HBASE-25071 ReplicationServer support start ReplicationSource internal (#2452)
 omit e118d8d  HBASE-24999 Master manages ReplicationServers (#2579)
 omit 3ef10b0  HBASE-24684 Fetch ReplicationSink servers list from HMaster instead o… (#2077)
 omit 13006e4  HBASE-24998 Introduce a ReplicationSourceOverallController interface and decouple ReplicationSourceManager and ReplicationSource (#2364)
 omit 83dcae9  HBASE-24982 Disassemble the method replicateWALEntry from AdminService to a new interface ReplicationServerService (#2360)
 omit 1bcf389  HBASE-24683 Add a basic ReplicationServer which only implement ReplicationSink Service (#2111)
 omit a86c174  HBASE-24735: Refactor ReplicationSourceManager: move logPositionAndCleanOldLogs/cleanUpHFileRefs to ReplicationSource inside (#2064)
 omit efbb75c  HBASE-24681 Remove the cache walsById/walsByIdRecoveredQueues from ReplicationSourceManager (#2019)
 omit ee34412  HBASE-24682 Refactor ReplicationSource#addHFileRefs method: move it to ReplicationSourceManager (#2020)
 add 5c7432f  HBASE-24667 Rename configs that support atypical DNS set ups to put them in hbase.unsafe
 add 58c9748  HBASE-25257 Remove MirroringTableStateManager (#2634)
 add 6a5c928  HBASE-25181 Add options for disabling column family encryption and choosing hash algorithm for wrapped encryption keys.
 add f0c430a  HBASE-20598 Upgrade to JRuby 9.2
 add 57d9cae  HBASE-25187 Improve SizeCachedKV variants initialization (#2582)
 add 0b6d6fd  HBASE-25276 Need to throw the original exception in HRegion#openHRegion (#2648)
 add 0611ca4  HBASE-25267 Add SSL keystore type and truststore related configs for HBase RESTServer (#2642)
 add aaeeaa5  HBASE-25253 Deprecated master carrys regions related methods and configs (#2635)
 add 035c192  HBASE-25275 Upgrade asciidoctor (#2647)
 add f89faf3  HBASE-25255 Master fails to initialize when creating rs group table (#2638)
 add 09aaa68  HBASE-25255 Addendum wait for meta loaded instead of master initialized for system table creation
 add bac459d  HBASE-25284 Check-in "Enable memstore replication..." design
 add f68f3dd  HBASE-25273 fix typo in StripeStoreFileManager java doc (#2653)
 add 4ee2270  HBASE-25127 Enhance PerformanceEvaluation to profile meta replica performance. (#2644)
 add 0aff175  HBASE-25280 [meta replicas] ArrayIndexOutOfBoundsException in ZKConnectionRegistry
 add 1c85c14  Revert "HBASE-25280 [meta replicas] ArrayIndexOutOfBoundsException in ZKConnectionRegistry"
 add 6210daf  HBASE-25280 [meta replicas] ArrayIndexOutOfBoundsException in ZKConnectionRegistry (#2652)
 add c07f27e  Revert "HBASE-25280 [meta replicas] ArrayIndexOutOfBoundsException in ZKConnectionRegistry (#2652)"
 add 300b0a6  HBASE-25026 Create a metric to track full region scans RPCs
 add 322435d  HBASE-25296 [Documentation] fix duplicate conf entry about upgrading (#2666)
 add 6a529d3  HBASE-25261 Upgrade Bootstrap to 3.4.1
 add ca129e9  HBASE-25083 further HBase 1.y releases should have Hadoop 2.10 as a minimum version. (#2656)
 add 9419c78  HBASE-25289 [testing] Clean up resources after tests in rsgroup_shell_test.rb (#2659)
 add 2b61b99  HBASE-25300 'Unknown table hbase:quota' happens when desc table in shell if quota disabled (#2673)
 add 8c1e476  HBASE-25298 hbase.rsgroup.fallback.enable should support dynamic configuration (#2668)
 add 30ef3aa  HBASE-25306 The log in SimpleLoadBalancer#onConfigurationChange is wrong
 add 1528aac  Revert "HBASE-25127 Enhance PerformanceEvaluation to profile meta replica performance. (#2644)"
 add 55399a0  HBASE-25213 Should request Compaction after bulkLoadHFiles is done (#2587)
 add 7964d2e  HBASE-25068 Pass WALFactory to Replication so it knows of all WALProviders, not just default/user-space
 add 89cb0c5  HBASE-25055 Add ReplicationSource for meta WALs; add enable/disable w… (#2451)
 add 40843bb  HBASE-25151 warmupRegion frustrates registering WALs on the catalog replicationsource
 add 690b4d8  HBASE-25126 Add load balance logic in hbase-client to distribute read load over meta replica regions
 add 9ecf6ff  HBASE-25126 Add load balance logic in hbase-client to distribute read load over meta replica regions (addendum)
 add eca904e  HBASE-25291 Document how to enable the meta replica load balance mode for the client and clean up around hbase:meta read replicas
 add c9156e7  HBASE-25284 Check-in "Enable memstore replication..." design (#2680)
 add a307d70  HBASE-25281 Bulkload split hfile too many times

[hbase] branch branch-2.4 updated: HBASE-25562 ReplicationSourceWALReader log and handle exception immediately without retrying (#2943)

2021-02-19 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new c1a9e89  HBASE-25562 ReplicationSourceWALReader log and handle exception immediately without retrying (#2943)
c1a9e89 is described below

commit c1a9e8973a10d9ae484a3ec30964f2c99e74b0d5
Author: XinSun 
AuthorDate: Sat Feb 20 10:20:54 2021 +0800

HBASE-25562 ReplicationSourceWALReader log and handle exception immediately without retrying (#2943)

Signed-off-by: Wellington Chevreuil 
Signed-off-by: stack 
Signed-off-by: shahrs87
---
 .../regionserver/ReplicationSourceWALReader.java   | 30 --
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
index 52ac144..f52a83a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
@@ -150,14 +150,13 @@ class ReplicationSourceWALReader extends Thread {
   }
 }
   } catch (IOException e) { // stream related
-if (sleepMultiplier < maxRetriesMultiplier) {
-  LOG.debug("Failed to read stream of replication entries: " + e);
-  sleepMultiplier++;
-} else {
-  LOG.error("Failed to read stream of replication entries", e);
-  handleEofException(e);
+if (!handleEofException(e)) {
+  LOG.warn("Failed to read stream of replication entries", e);
+  if (sleepMultiplier < maxRetriesMultiplier) {
+sleepMultiplier ++;
+  }
+  Threads.sleep(sleepForRetries * sleepMultiplier);
 }
-Threads.sleep(sleepForRetries * sleepMultiplier);
   } catch (InterruptedException e) {
 LOG.trace("Interrupted while sleeping between WAL reads");
 Thread.currentThread().interrupt();
@@ -244,10 +243,13 @@ class ReplicationSourceWALReader extends Thread {
 }
   }
 
-  // if we get an EOF due to a zero-length log, and there are other logs in queue
-  // (highly likely we've closed the current log), we've hit the max retries, and autorecovery is
-  // enabled, then dump the log
-  private void handleEofException(IOException e) {
+  /**
+   * If we get an EOF due to a zero-length log, and there are other logs in queue
+   * (highly likely we've closed the current log), and autorecovery is
+   * enabled, then dump the log.
+   * @return true only if the IOE can be handled
+   */
+  private boolean handleEofException(IOException e) {
 PriorityBlockingQueue queue = logQueue.getQueue(walGroupId);
 // Dump the log even if logQueue size is 1 if the source is from recovered 
Source
 // since we don't add current log to recovered source queue so it is safe 
to remove.
@@ -255,14 +257,16 @@ class ReplicationSourceWALReader extends Thread {
   (source.isRecovered() || queue.size() > 1) && this.eofAutoRecovery) {
   try {
 if (fs.getFileStatus(queue.peek()).getLen() == 0) {
-  LOG.warn("Forcing removal of 0 length log in queue: " + 
queue.peek());
+  LOG.warn("Forcing removal of 0 length log in queue: {}", 
queue.peek());
   logQueue.remove(walGroupId);
   currentPosition = 0;
+  return true;
 }
   } catch (IOException ioe) {
-LOG.warn("Couldn't get file length information about log " + 
queue.peek());
+LOG.warn("Couldn't get file length information about log {}", 
queue.peek());
   }
 }
+return false;
   }
 
   public Path getCurrentPath() {
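
Net effect of the diff above: a zero-length-log EOF is now handled the moment it is seen, and the bounded sleep-and-retry path is taken only when handleEofException() could not deal with it. Condensed into plain Java, the reader's catch block now behaves like this (eofAutoRecovery is the existing configuration-driven switch the method consults):

    } catch (IOException e) { // stream related
      if (!handleEofException(e)) {
        // Not a recoverable zero-length-log EOF: log, back off, and retry later.
        LOG.warn("Failed to read stream of replication entries", e);
        if (sleepMultiplier < maxRetriesMultiplier) {
          sleepMultiplier++;
        }
        Threads.sleep(sleepForRetries * sleepMultiplier);
      }
      // Otherwise the empty log was already dropped from the queue and reading resumes immediately.
    }
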



[hbase] branch branch-2 updated: HBASE-25562 ReplicationSourceWALReader log and handle exception immediately without retrying (#2943)

2021-02-19 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new a3edcc2  HBASE-25562 ReplicationSourceWALReader log and handle exception immediately without retrying (#2943)
a3edcc2 is described below

commit a3edcc285481357446ed02effbb83a4142402c8f
Author: XinSun 
AuthorDate: Sat Feb 20 10:20:54 2021 +0800

HBASE-25562 ReplicationSourceWALReader log and handle exception immediately without retrying (#2943)

Signed-off-by: Wellington Chevreuil 
Signed-off-by: stack 
Signed-off-by: shahrs87
---
 .../regionserver/ReplicationSourceWALReader.java   | 30 --
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
index 52ac144..f52a83a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
@@ -150,14 +150,13 @@ class ReplicationSourceWALReader extends Thread {
   }
 }
   } catch (IOException e) { // stream related
-if (sleepMultiplier < maxRetriesMultiplier) {
-  LOG.debug("Failed to read stream of replication entries: " + e);
-  sleepMultiplier++;
-} else {
-  LOG.error("Failed to read stream of replication entries", e);
-  handleEofException(e);
+if (!handleEofException(e)) {
+  LOG.warn("Failed to read stream of replication entries", e);
+  if (sleepMultiplier < maxRetriesMultiplier) {
+sleepMultiplier ++;
+  }
+  Threads.sleep(sleepForRetries * sleepMultiplier);
 }
-Threads.sleep(sleepForRetries * sleepMultiplier);
   } catch (InterruptedException e) {
 LOG.trace("Interrupted while sleeping between WAL reads");
 Thread.currentThread().interrupt();
@@ -244,10 +243,13 @@ class ReplicationSourceWALReader extends Thread {
 }
   }
 
-  // if we get an EOF due to a zero-length log, and there are other logs in queue
-  // (highly likely we've closed the current log), we've hit the max retries, and autorecovery is
-  // enabled, then dump the log
-  private void handleEofException(IOException e) {
+  /**
+   * If we get an EOF due to a zero-length log, and there are other logs in queue
+   * (highly likely we've closed the current log), and autorecovery is
+   * enabled, then dump the log.
+   * @return true only if the IOE can be handled
+   */
+  private boolean handleEofException(IOException e) {
 PriorityBlockingQueue queue = logQueue.getQueue(walGroupId);
 // Dump the log even if logQueue size is 1 if the source is from recovered 
Source
 // since we don't add current log to recovered source queue so it is safe 
to remove.
@@ -255,14 +257,16 @@ class ReplicationSourceWALReader extends Thread {
   (source.isRecovered() || queue.size() > 1) && this.eofAutoRecovery) {
   try {
 if (fs.getFileStatus(queue.peek()).getLen() == 0) {
-  LOG.warn("Forcing removal of 0 length log in queue: " + 
queue.peek());
+  LOG.warn("Forcing removal of 0 length log in queue: {}", 
queue.peek());
   logQueue.remove(walGroupId);
   currentPosition = 0;
+  return true;
 }
   } catch (IOException ioe) {
-LOG.warn("Couldn't get file length information about log " + 
queue.peek());
+LOG.warn("Couldn't get file length information about log {}", 
queue.peek());
   }
 }
+return false;
   }
 
   public Path getCurrentPath() {



[hbase] branch master updated (88057d8 -> ed90a14)

2021-02-19 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from 88057d8  HBASE-25539: Add age of oldest wal metric (#2945)
 add ed90a14  HBASE-25562 ReplicationSourceWALReader log and handle 
exception immediately without retrying (#2943)

No new revisions were added by this update.

Summary of changes:
 .../regionserver/ReplicationSourceWALReader.java   | 30 --
 1 file changed, 17 insertions(+), 13 deletions(-)



[hbase] branch branch-2.2 updated: HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

2021-02-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 35ece0a  HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)
35ece0a is described below

commit 35ece0a5da8e41d2f54a577325addd93843f78f2
Author: XinSun 
AuthorDate: Tue Feb 9 16:32:46 2021 +0800

HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

Signed-off-by: Viraj Jasani 
Signed-off-by: stack 
Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSource.java|   2 +-
 .../regionserver/ReplicationSourceManager.java |   3 +
 .../TestReplicationSourceManagerJoin.java  | 109 +
 3 files changed, 113 insertions(+), 1 deletion(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index 180acb3..039f5db 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -112,7 +112,7 @@ public class ReplicationSource implements 
ReplicationSourceInterface {
   // Maximum number of retries before taking bold actions
   private int maxRetriesMultiplier;
   // Indicates if this particular source is running
-  private volatile boolean sourceRunning = false;
+  volatile boolean sourceRunning = false;
   // Metrics for this source
   private MetricsSource metrics;
   // WARN threshold for the number of queued logs, defaults to 2
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index 082981e..76edc26 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -799,6 +799,9 @@ public class ReplicationSourceManager implements 
ReplicationListener {
 for (ReplicationSourceInterface source : this.sources.values()) {
   source.terminate("Region server is closing");
 }
+for (ReplicationSourceInterface source : this.oldsources) {
+  source.terminate("Region server is closing");
+}
   }
 
   /**
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
new file mode 100644
index 000..1795588
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+import java.util.stream.Stream;
+
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.replication.TestReplicationBase;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.ReplicationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.
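
The body of the new TestReplicationSourceManagerJoin is cut off above. A condensed, hypothetical sketch of the assertion it is built around, enabled by the sourceRunning visibility change at the top of this patch; how the manager is obtained from the mini cluster's stopping region server is elided here and the helper name is illustrative only.

    // Hypothetical condensation, not the committed test body: after join() the recovered
    // ("old") sources must be stopped too, not just the normal per-peer sources.
    static void assertOldSourcesStopped(ReplicationSourceManager manager) {
      manager.join();
      for (ReplicationSourceInterface source : manager.getOldSources()) {
        assertFalse(((ReplicationSource) source).sourceRunning);
      }
    }
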

[hbase] branch branch-2.3 updated: HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

2021-02-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new 1938067  HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)
1938067 is described below

commit 19380670e8df250f44475b9d669b350e06bd4791
Author: XinSun 
AuthorDate: Tue Feb 9 16:32:46 2021 +0800

HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

Signed-off-by: Viraj Jasani 
Signed-off-by: stack 
Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSource.java|   2 +-
 .../regionserver/ReplicationSourceManager.java |   3 +
 .../TestReplicationSourceManagerJoin.java  | 109 +
 3 files changed, 113 insertions(+), 1 deletion(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index 3e9db5f..c52e546 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -113,7 +113,7 @@ public class ReplicationSource implements 
ReplicationSourceInterface {
   // Maximum number of retries before taking bold actions
   private int maxRetriesMultiplier;
   // Indicates if this particular source is running
-  private volatile boolean sourceRunning = false;
+  volatile boolean sourceRunning = false;
   // Metrics for this source
   private MetricsSource metrics;
   // WARN threshold for the number of queued logs, defaults to 2
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index c66796d..1ac793b 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -812,6 +812,9 @@ public class ReplicationSourceManager implements 
ReplicationListener {
 for (ReplicationSourceInterface source : this.sources.values()) {
   source.terminate("Region server is closing");
 }
+for (ReplicationSourceInterface source : this.oldsources) {
+  source.terminate("Region server is closing");
+}
   }
 
   /**
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
new file mode 100644
index 000..1795588
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+import java.util.stream.Stream;
+
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.replication.TestReplicationBase;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.ReplicationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.

[hbase] branch branch-2.4 updated: HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

2021-02-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new e7414f9  HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)
e7414f9 is described below

commit e7414f98fd631f5c2503555c8e804ea37ff5b6f9
Author: XinSun 
AuthorDate: Tue Feb 9 16:32:46 2021 +0800

HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

Signed-off-by: Viraj Jasani 
Signed-off-by: stack 
Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSource.java|   2 +-
 .../regionserver/ReplicationSourceManager.java |   3 +
 .../TestReplicationSourceManagerJoin.java  | 109 +
 3 files changed, 113 insertions(+), 1 deletion(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index fadccb7..6a64dd8 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -115,7 +115,7 @@ public class ReplicationSource implements 
ReplicationSourceInterface {
   // Maximum number of retries before taking bold actions
   private int maxRetriesMultiplier;
   // Indicates if this particular source is running
-  private volatile boolean sourceRunning = false;
+  volatile boolean sourceRunning = false;
   // Metrics for this source
   private MetricsSource metrics;
   // WARN threshold for the number of queued logs, defaults to 2
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index 5bd7157..a318f0c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -833,6 +833,9 @@ public class ReplicationSourceManager implements 
ReplicationListener {
 for (ReplicationSourceInterface source : this.sources.values()) {
   source.terminate("Region server is closing");
 }
+for (ReplicationSourceInterface source : this.oldsources) {
+  source.terminate("Region server is closing");
+}
   }
 
   /**
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
new file mode 100644
index 000..1795588
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+import java.util.stream.Stream;
+
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.replication.TestReplicationBase;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.ReplicationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.
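
The hunk above is the entire fix: the terminate loop that already covered the
active sources now also runs over "oldsources", the recovered queues inherited
from dead region servers, so their worker threads exit with the region server.
A minimal, self-contained model of that idea follows; the names are
illustrative only and do not reproduce the real ReplicationSourceManager API.

    import java.util.ArrayList;
    import java.util.List;

    class SourceManagerSketch {
      interface Source {
        void terminate(String reason);
      }

      private final List<Source> sources = new ArrayList<>();     // active peer sources
      private final List<Source> oldsources = new ArrayList<>();  // recovered ("old") sources

      // Runs when the region server closes. Before HBASE-25559 only the first
      // loop existed, so old-source threads could outlive the closing server.
      void terminateAll() {
        for (Source s : sources) {
          s.terminate("Region server is closing");
        }
        for (Source s : oldsources) {
          s.terminate("Region server is closing");
        }
      }
    }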

[hbase] branch branch-2 updated: HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

2021-02-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new b05dcac  HBASE-25559 Terminate threads of oldsources while RS is 
closing (#2938)
b05dcac is described below

commit b05dcac9fdfccf77a00d76e71acc5d64a9b8edaf
Author: XinSun 
AuthorDate: Tue Feb 9 16:32:46 2021 +0800

HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

Signed-off-by: Viraj Jasani 
Signed-off-by: stack 
Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSource.java|   2 +-
 .../regionserver/ReplicationSourceManager.java |   3 +
 .../TestReplicationSourceManagerJoin.java  | 109 +
 3 files changed, 113 insertions(+), 1 deletion(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index fadccb7..6a64dd8 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -115,7 +115,7 @@ public class ReplicationSource implements 
ReplicationSourceInterface {
   // Maximum number of retries before taking bold actions
   private int maxRetriesMultiplier;
   // Indicates if this particular source is running
-  private volatile boolean sourceRunning = false;
+  volatile boolean sourceRunning = false;
   // Metrics for this source
   private MetricsSource metrics;
   // WARN threshold for the number of queued logs, defaults to 2
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index ee57808..bfd4570 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -832,6 +832,9 @@ public class ReplicationSourceManager implements 
ReplicationListener {
 for (ReplicationSourceInterface source : this.sources.values()) {
   source.terminate("Region server is closing");
 }
+for (ReplicationSourceInterface source : this.oldsources) {
+  source.terminate("Region server is closing");
+}
   }
 
   /**
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
new file mode 100644
index 000..1795588
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+import java.util.stream.Stream;
+
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.replication.TestReplicationBase;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.ReplicationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.

[hbase] branch master updated: HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

2021-02-09 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 4a3ff98  HBASE-25559 Terminate threads of oldsources while RS is 
closing (#2938)
4a3ff98 is described below

commit 4a3ff989430da21f7b40affc096b3094beb7deb6
Author: XinSun 
AuthorDate: Tue Feb 9 16:32:46 2021 +0800

HBASE-25559 Terminate threads of oldsources while RS is closing (#2938)

Signed-off-by: Viraj Jasani 
Signed-off-by: stack 
Signed-off-by: Wellington Chevreuil 
---
 .../regionserver/ReplicationSource.java|   2 +-
 .../regionserver/ReplicationSourceManager.java |   3 +
 .../TestReplicationSourceManagerJoin.java  | 109 +
 3 files changed, 113 insertions(+), 1 deletion(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index 317db66..6fb725d 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -112,7 +112,7 @@ public class ReplicationSource implements 
ReplicationSourceInterface {
   // Maximum number of retries before taking bold actions
   private int maxRetriesMultiplier;
   // Indicates if this particular source is running
-  private volatile boolean sourceRunning = false;
+  volatile boolean sourceRunning = false;
   // Metrics for this source
   private MetricsSource metrics;
   // WARN threshold for the number of queued logs, defaults to 2
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index a8a35b1..a276b78 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -1020,6 +1020,9 @@ public class ReplicationSourceManager implements 
ReplicationListener {
 for (ReplicationSourceInterface source : this.sources.values()) {
   source.terminate("Region server is closing");
 }
+for (ReplicationSourceInterface source : this.oldsources) {
+  source.terminate("Region server is closing");
+}
   }
 
   /**
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
new file mode 100644
index 000..1795588
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManagerJoin.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+import java.util.stream.Stream;
+
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.replication.TestReplicationBase;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.ReplicationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.
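
The new TestReplicationSourceManagerJoin file is cut off above after its
imports. The sketch below is not that test; it only outlines the assertion the
change enables, using a stand-in Source type (the real test works against
ReplicationSourceInterface and the sourceRunning flag, which the first hunk
makes package-private, presumably so the test can read it).

    import java.util.List;

    final class OldSourcesStopSketch {
      interface Source {
        boolean isRunning();
      }

      // After the region server's close path has run, neither the active
      // sources nor the recovered oldsources may still have a running worker.
      static void assertAllStopped(List<Source> sources, List<Source> oldsources) {
        assert sources.stream().noneMatch(Source::isRunning);
        assert oldsources.stream().noneMatch(Source::isRunning);
      }
    }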

[hbase] 02/02: HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to return ServerName instead of String (#2928)

2021-02-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit b1698080cc265ee4fe473d908d3b8434a435e31a
Author: XinSun 
AuthorDate: Sun Feb 7 17:13:47 2021 +0800

HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to 
return ServerName instead of String (#2928)

Signed-off-by: Wellington Chevreuil 
Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/replication/ReplicationTracker.java |  7 ---
 .../hbase/replication/ReplicationTrackerZKImpl.java  | 16 ++--
 .../replication/regionserver/DumpReplicationQueues.java  |  4 ++--
 .../regionserver/ReplicationSourceManager.java   |  3 +--
 .../hbase/replication/TestReplicationTrackerZKImpl.java  | 16 
 5 files changed, 25 insertions(+), 21 deletions(-)

diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
index 95bb5db..9370226 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.replication;
 
 import java.util.List;
 
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -37,13 +38,13 @@ public interface ReplicationTracker {
* Register a replication listener to receive replication events.
* @param listener
*/
-  public void registerListener(ReplicationListener listener);
+  void registerListener(ReplicationListener listener);
 
-  public void removeListener(ReplicationListener listener);
+  void removeListener(ReplicationListener listener);
 
   /**
* Returns a list of other live region servers in the cluster.
* @return List of region servers.
*/
-  public List<String> getListOfRegionServers();
+  List<ServerName> getListOfRegionServers();
 }
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
index 54c9c2c..6fc3c45 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
@@ -20,7 +20,10 @@ package org.apache.hadoop.hbase.replication;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.stream.Collectors;
+
 import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.Stoppable;
 import org.apache.hadoop.hbase.zookeeper.ZKListener;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
@@ -49,7 +52,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
   // listeners to be notified
   private final List<ReplicationListener> listeners = new 
CopyOnWriteArrayList<>();
   // List of all the other region servers in this cluster
-  private final ArrayList<String> otherRegionServers = new ArrayList<>();
+  private final List<ServerName> otherRegionServers = new ArrayList<>();
 
   public ReplicationTrackerZKImpl(ZKWatcher zookeeper, Abortable abortable, 
Stoppable stopper) {
 this.zookeeper = zookeeper;
@@ -74,10 +77,10 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Return a snapshot of the current region servers.
*/
   @Override
-  public List<String> getListOfRegionServers() {
+  public List<ServerName> getListOfRegionServers() {
 refreshOtherRegionServersList(false);

-List<String> list = null;
+List<ServerName> list = null;
 synchronized (otherRegionServers) {
   list = new ArrayList<>(otherRegionServers);
 }
@@ -162,7 +165,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* if it was empty), false if the data was missing in ZK
*/
   private boolean refreshOtherRegionServersList(boolean watch) {
-List<String> newRsList = getRegisteredRegionServers(watch);
+List<ServerName> newRsList = getRegisteredRegionServers(watch);
 if (newRsList == null) {
   return false;
 } else {
@@ -178,7 +181,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Get a list of all the other region servers in this cluster and set a watch
* @return a list of server nanes
*/
-  private List<String> getRegisteredRegionServers(boolean watch) {
+  private List<ServerName> getRegisteredRegionServers(boolean watch) {
 List<String> result = null;
 try {
   if (watch) {
@@ -190,6 +193,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
 } catch (KeeperException e) {
   this.abortable.abort("Get list of registered region servers", e);
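
The hunk above is truncated, but the two imports added at the top of
ReplicationTrackerZKImpl (ServerName and Collectors) indicate how the
conversion is done: the znode child names still come back from ZooKeeper as
strings and are mapped to ServerName before being returned. A hedged sketch of
that mapping, not the patch's exact code:

    import java.util.List;
    import java.util.stream.Collectors;
    import org.apache.hadoop.hbase.ServerName;

    final class ServerNameMapping {
      // Region server znode names look like "host,16020,1612700000000",
      // which ServerName.parseServerName accepts directly.
      static List<ServerName> toServerNames(List<String> znodeChildren) {
        return znodeChildren.stream()
          .map(ServerName::parseServerName)
          .collect(Collectors.toList());
      }
    }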

[hbase] branch branch-2.2 updated (d81ae67 -> b169808)

2021-02-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a change to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from d81ae67  HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)
 new c4b194a  Revert "HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)"
 new b169808  HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../replication/TestReplicationTrackerZKImpl.java  | 29 +-
 1 file changed, 12 insertions(+), 17 deletions(-)



[hbase] 01/02: Revert "HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to return ServerName instead of String (#2928)"

2021-02-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit c4b194af2c86ecbcfb44ba4dca594dbb705a200d
Author: sunxin 
AuthorDate: Mon Feb 8 11:03:43 2021 +0800

Revert "HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)"

This reverts commit d81ae67e
---
 .../hadoop/hbase/replication/ReplicationTracker.java |  7 +++
 .../hbase/replication/ReplicationTrackerZKImpl.java  | 16 ++--
 .../replication/regionserver/DumpReplicationQueues.java  |  4 ++--
 .../regionserver/ReplicationSourceManager.java   |  3 ++-
 .../hbase/replication/TestReplicationTrackerZKImpl.java  | 13 -
 5 files changed, 17 insertions(+), 26 deletions(-)

diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
index 9370226..95bb5db 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
@@ -20,7 +20,6 @@ package org.apache.hadoop.hbase.replication;
 
 import java.util.List;
 
-import org.apache.hadoop.hbase.ServerName;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -38,13 +37,13 @@ public interface ReplicationTracker {
* Register a replication listener to receive replication events.
* @param listener
*/
-  void registerListener(ReplicationListener listener);
+  public void registerListener(ReplicationListener listener);
 
-  void removeListener(ReplicationListener listener);
+  public void removeListener(ReplicationListener listener);
 
   /**
* Returns a list of other live region servers in the cluster.
* @return List of region servers.
*/
-  List<ServerName> getListOfRegionServers();
+  public List<String> getListOfRegionServers();
 }
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
index 6fc3c45..54c9c2c 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
@@ -20,10 +20,7 @@ package org.apache.hadoop.hbase.replication;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.CopyOnWriteArrayList;
-import java.util.stream.Collectors;
-
 import org.apache.hadoop.hbase.Abortable;
-import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.Stoppable;
 import org.apache.hadoop.hbase.zookeeper.ZKListener;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
@@ -52,7 +49,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
   // listeners to be notified
   private final List<ReplicationListener> listeners = new 
CopyOnWriteArrayList<>();
   // List of all the other region servers in this cluster
-  private final List<ServerName> otherRegionServers = new ArrayList<>();
+  private final ArrayList<String> otherRegionServers = new ArrayList<>();
 
   public ReplicationTrackerZKImpl(ZKWatcher zookeeper, Abortable abortable, 
Stoppable stopper) {
 this.zookeeper = zookeeper;
@@ -77,10 +74,10 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Return a snapshot of the current region servers.
*/
   @Override
-  public List<ServerName> getListOfRegionServers() {
+  public List<String> getListOfRegionServers() {
 refreshOtherRegionServersList(false);

-List<ServerName> list = null;
+List<String> list = null;
 synchronized (otherRegionServers) {
   list = new ArrayList<>(otherRegionServers);
 }
@@ -165,7 +162,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* if it was empty), false if the data was missing in ZK
*/
   private boolean refreshOtherRegionServersList(boolean watch) {
-List<ServerName> newRsList = getRegisteredRegionServers(watch);
+List<String> newRsList = getRegisteredRegionServers(watch);
 if (newRsList == null) {
   return false;
 } else {
@@ -181,7 +178,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Get a list of all the other region servers in this cluster and set a watch
* @return a list of server nanes
*/
-  private List<ServerName> getRegisteredRegionServers(boolean watch) {
+  private List<String> getRegisteredRegionServers(boolean watch) {
 List<String> result = null;
 try {
   if (watch) {
@@ -193,7 +190,6 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
 } catch (KeeperException e) {
   this.abortable.abort("Get list of registered region servers", e);
 }
-return r

[hbase] branch branch-2.4 updated: HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to return ServerName instead of String (#2928)

2021-02-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 87e516d  HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)
87e516d is described below

commit 87e516da6c518ccb93ca2d294253f27b8d7b1144
Author: XinSun 
AuthorDate: Sun Feb 7 17:13:47 2021 +0800

HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to 
return ServerName instead of String (#2928)

Signed-off-by: Wellington Chevreuil 
Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/replication/ReplicationTracker.java   |  7 ---
 .../hbase/replication/ReplicationTrackerZKImpl.java| 16 ++--
 .../regionserver/DumpReplicationQueues.java|  4 ++--
 .../regionserver/ReplicationSourceManager.java |  3 +--
 .../replication/TestReplicationTrackerZKImpl.java  | 18 +-
 5 files changed, 26 insertions(+), 22 deletions(-)

diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
index 95bb5db..9370226 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.replication;
 
 import java.util.List;
 
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -37,13 +38,13 @@ public interface ReplicationTracker {
* Register a replication listener to receive replication events.
* @param listener
*/
-  public void registerListener(ReplicationListener listener);
+  void registerListener(ReplicationListener listener);
 
-  public void removeListener(ReplicationListener listener);
+  void removeListener(ReplicationListener listener);
 
   /**
* Returns a list of other live region servers in the cluster.
* @return List of region servers.
*/
-  public List<String> getListOfRegionServers();
+  List<ServerName> getListOfRegionServers();
 }
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
index 54c9c2c..6fc3c45 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
@@ -20,7 +20,10 @@ package org.apache.hadoop.hbase.replication;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.stream.Collectors;
+
 import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.Stoppable;
 import org.apache.hadoop.hbase.zookeeper.ZKListener;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
@@ -49,7 +52,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
   // listeners to be notified
   private final List<ReplicationListener> listeners = new 
CopyOnWriteArrayList<>();
   // List of all the other region servers in this cluster
-  private final ArrayList<String> otherRegionServers = new ArrayList<>();
+  private final List<ServerName> otherRegionServers = new ArrayList<>();
 
   public ReplicationTrackerZKImpl(ZKWatcher zookeeper, Abortable abortable, 
Stoppable stopper) {
 this.zookeeper = zookeeper;
@@ -74,10 +77,10 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Return a snapshot of the current region servers.
*/
   @Override
-  public List<String> getListOfRegionServers() {
+  public List<ServerName> getListOfRegionServers() {
 refreshOtherRegionServersList(false);

-List<String> list = null;
+List<ServerName> list = null;
 synchronized (otherRegionServers) {
   list = new ArrayList<>(otherRegionServers);
 }
@@ -162,7 +165,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* if it was empty), false if the data was missing in ZK
*/
   private boolean refreshOtherRegionServersList(boolean watch) {
-List<String> newRsList = getRegisteredRegionServers(watch);
+List<ServerName> newRsList = getRegisteredRegionServers(watch);
 if (newRsList == null) {
   return false;
 } else {
@@ -178,7 +181,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Get a list of all the other region servers in this cluster and set a watch
* @return a list of server nanes
*/
-  private List<String> getRegisteredRegionServers(boolean watch) {
+  private List<ServerName> getRegisteredRegionServers(boolean watch) {
 List<String> result = null;
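
The diffstat above also touches TestReplicationTrackerZKImpl, whose changes are
not shown here. A hedged sketch of the kind of assertion that becomes possible
once the tracker returns ServerName values (hypothetical helper, not the actual
test code):

    import static org.junit.Assert.assertTrue;

    import java.util.List;
    import org.apache.hadoop.hbase.ServerName;

    final class TrackerSnapshotAssert {
      // The snapshot can be compared against ServerName instances directly,
      // instead of re-building "host,port,startcode" strings in the test.
      static void assertReports(List<ServerName> snapshot, ServerName expected) {
        assertTrue("tracker should report " + expected, snapshot.contains(expected));
      }
    }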
   

[hbase] branch branch-2.3 updated: HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to return ServerName instead of String (#2928)

2021-02-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new 56e5d5f  HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)
56e5d5f is described below

commit 56e5d5fff00d77756da08295cbc4ac8d072c29ad
Author: XinSun 
AuthorDate: Sun Feb 7 17:13:47 2021 +0800

HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to 
return ServerName instead of String (#2928)

Signed-off-by: Wellington Chevreuil 
Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/replication/ReplicationTracker.java   |  7 ---
 .../hbase/replication/ReplicationTrackerZKImpl.java| 16 ++--
 .../regionserver/DumpReplicationQueues.java|  4 ++--
 .../regionserver/ReplicationSourceManager.java |  3 +--
 .../replication/TestReplicationTrackerZKImpl.java  | 18 +-
 5 files changed, 26 insertions(+), 22 deletions(-)

diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
index 95bb5db..9370226 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.replication;
 
 import java.util.List;
 
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -37,13 +38,13 @@ public interface ReplicationTracker {
* Register a replication listener to receive replication events.
* @param listener
*/
-  public void registerListener(ReplicationListener listener);
+  void registerListener(ReplicationListener listener);
 
-  public void removeListener(ReplicationListener listener);
+  void removeListener(ReplicationListener listener);
 
   /**
* Returns a list of other live region servers in the cluster.
* @return List of region servers.
*/
-  public List<String> getListOfRegionServers();
+  List<ServerName> getListOfRegionServers();
 }
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
index 54c9c2c..6fc3c45 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
@@ -20,7 +20,10 @@ package org.apache.hadoop.hbase.replication;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.stream.Collectors;
+
 import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.Stoppable;
 import org.apache.hadoop.hbase.zookeeper.ZKListener;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
@@ -49,7 +52,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
   // listeners to be notified
   private final List<ReplicationListener> listeners = new 
CopyOnWriteArrayList<>();
   // List of all the other region servers in this cluster
-  private final ArrayList<String> otherRegionServers = new ArrayList<>();
+  private final List<ServerName> otherRegionServers = new ArrayList<>();
 
   public ReplicationTrackerZKImpl(ZKWatcher zookeeper, Abortable abortable, 
Stoppable stopper) {
 this.zookeeper = zookeeper;
@@ -74,10 +77,10 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Return a snapshot of the current region servers.
*/
   @Override
-  public List<String> getListOfRegionServers() {
+  public List<ServerName> getListOfRegionServers() {
 refreshOtherRegionServersList(false);

-List<String> list = null;
+List<ServerName> list = null;
 synchronized (otherRegionServers) {
   list = new ArrayList<>(otherRegionServers);
 }
@@ -162,7 +165,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* if it was empty), false if the data was missing in ZK
*/
   private boolean refreshOtherRegionServersList(boolean watch) {
-List<String> newRsList = getRegisteredRegionServers(watch);
+List<ServerName> newRsList = getRegisteredRegionServers(watch);
 if (newRsList == null) {
   return false;
 } else {
@@ -178,7 +181,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Get a list of all the other region servers in this cluster and set a watch
* @return a list of server nanes
*/
-  private List<String> getRegisteredRegionServers(boolean watch) {
+  private List<ServerName> getRegisteredRegionServers(boolean watch) {
 List<String> result = null;
   

[hbase] branch branch-2.2 updated: HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to return ServerName instead of String (#2928)

2021-02-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new d81ae67  HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)
d81ae67 is described below

commit d81ae67e4f7b1cc894901a046b1e3929edb9318c
Author: XinSun 
AuthorDate: Sun Feb 7 17:13:47 2021 +0800

HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to 
return ServerName instead of String (#2928)

Signed-off-by: Wellington Chevreuil 
Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/replication/ReplicationTracker.java |  7 ---
 .../hbase/replication/ReplicationTrackerZKImpl.java  | 16 ++--
 .../replication/regionserver/DumpReplicationQueues.java  |  4 ++--
 .../regionserver/ReplicationSourceManager.java   |  3 +--
 .../hbase/replication/TestReplicationTrackerZKImpl.java  | 13 +
 5 files changed, 26 insertions(+), 17 deletions(-)

diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
index 95bb5db..9370226 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.replication;
 
 import java.util.List;
 
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -37,13 +38,13 @@ public interface ReplicationTracker {
* Register a replication listener to receive replication events.
* @param listener
*/
-  public void registerListener(ReplicationListener listener);
+  void registerListener(ReplicationListener listener);
 
-  public void removeListener(ReplicationListener listener);
+  void removeListener(ReplicationListener listener);
 
   /**
* Returns a list of other live region servers in the cluster.
* @return List of region servers.
*/
-  public List<String> getListOfRegionServers();
+  List<ServerName> getListOfRegionServers();
 }
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
index 54c9c2c..6fc3c45 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
@@ -20,7 +20,10 @@ package org.apache.hadoop.hbase.replication;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.stream.Collectors;
+
 import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.Stoppable;
 import org.apache.hadoop.hbase.zookeeper.ZKListener;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
@@ -49,7 +52,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
   // listeners to be notified
   private final List<ReplicationListener> listeners = new 
CopyOnWriteArrayList<>();
   // List of all the other region servers in this cluster
-  private final ArrayList<String> otherRegionServers = new ArrayList<>();
+  private final List<ServerName> otherRegionServers = new ArrayList<>();
 
   public ReplicationTrackerZKImpl(ZKWatcher zookeeper, Abortable abortable, 
Stoppable stopper) {
 this.zookeeper = zookeeper;
@@ -74,10 +77,10 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Return a snapshot of the current region servers.
*/
   @Override
-  public List<String> getListOfRegionServers() {
+  public List<ServerName> getListOfRegionServers() {
 refreshOtherRegionServersList(false);

-List<String> list = null;
+List<ServerName> list = null;
 synchronized (otherRegionServers) {
   list = new ArrayList<>(otherRegionServers);
 }
@@ -162,7 +165,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* if it was empty), false if the data was missing in ZK
*/
   private boolean refreshOtherRegionServersList(boolean watch) {
-List<String> newRsList = getRegisteredRegionServers(watch);
+List<ServerName> newRsList = getRegisteredRegionServers(watch);
 if (newRsList == null) {
   return false;
 } else {
@@ -178,7 +181,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Get a list of all the other region servers in this cluster and set a watch
* @return a list of server nanes
*/
-  private List<String> getRegisteredRegionServers(boolean watch) {
+  private List<ServerName> getRegisteredRegionServers(boolean watch) {
 List<String> result = null;

[hbase] branch branch-2 updated: HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to return ServerName instead of String (#2928)

2021-02-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 471f523  HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)
471f523 is described below

commit 471f52350af8c14c55e2d5bcbeab8b36e9ab18e5
Author: XinSun 
AuthorDate: Sun Feb 7 17:13:47 2021 +0800

HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to 
return ServerName instead of String (#2928)

Signed-off-by: Wellington Chevreuil 
Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/replication/ReplicationTracker.java   |  7 ---
 .../hbase/replication/ReplicationTrackerZKImpl.java| 16 ++--
 .../regionserver/DumpReplicationQueues.java|  4 ++--
 .../regionserver/ReplicationSourceManager.java |  3 +--
 .../replication/TestReplicationTrackerZKImpl.java  | 18 +-
 5 files changed, 26 insertions(+), 22 deletions(-)

diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
index 95bb5db..9370226 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.replication;
 
 import java.util.List;
 
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -37,13 +38,13 @@ public interface ReplicationTracker {
* Register a replication listener to receive replication events.
* @param listener
*/
-  public void registerListener(ReplicationListener listener);
+  void registerListener(ReplicationListener listener);
 
-  public void removeListener(ReplicationListener listener);
+  void removeListener(ReplicationListener listener);
 
   /**
* Returns a list of other live region servers in the cluster.
* @return List of region servers.
*/
-  public List<String> getListOfRegionServers();
+  List<ServerName> getListOfRegionServers();
 }
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
index 54c9c2c..6fc3c45 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
@@ -20,7 +20,10 @@ package org.apache.hadoop.hbase.replication;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.stream.Collectors;
+
 import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.Stoppable;
 import org.apache.hadoop.hbase.zookeeper.ZKListener;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
@@ -49,7 +52,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
   // listeners to be notified
   private final List<ReplicationListener> listeners = new 
CopyOnWriteArrayList<>();
   // List of all the other region servers in this cluster
-  private final ArrayList<String> otherRegionServers = new ArrayList<>();
+  private final List<ServerName> otherRegionServers = new ArrayList<>();
 
   public ReplicationTrackerZKImpl(ZKWatcher zookeeper, Abortable abortable, 
Stoppable stopper) {
 this.zookeeper = zookeeper;
@@ -74,10 +77,10 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Return a snapshot of the current region servers.
*/
   @Override
-  public List<String> getListOfRegionServers() {
+  public List<ServerName> getListOfRegionServers() {
 refreshOtherRegionServersList(false);

-List<String> list = null;
+List<ServerName> list = null;
 synchronized (otherRegionServers) {
   list = new ArrayList<>(otherRegionServers);
 }
@@ -162,7 +165,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* if it was empty), false if the data was missing in ZK
*/
   private boolean refreshOtherRegionServersList(boolean watch) {
-List<String> newRsList = getRegisteredRegionServers(watch);
+List<ServerName> newRsList = getRegisteredRegionServers(watch);
 if (newRsList == null) {
   return false;
 } else {
@@ -178,7 +181,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Get a list of all the other region servers in this cluster and set a watch
* @return a list of server nanes
*/
-  private List<String> getRegisteredRegionServers(boolean watch) {
+  private List<ServerName> getRegisteredRegionServers(boolean watch) {
 List<String> result = null;
   

[hbase] branch master updated: HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to return ServerName instead of String (#2928)

2021-02-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new d6aff6c  HBASE-25553 It is better for 
ReplicationTracker.getListOfRegionServers to return ServerName instead of 
String (#2928)
d6aff6c is described below

commit d6aff6cbae5157b99c3e1c83472c7d3243a131db
Author: XinSun 
AuthorDate: Sun Feb 7 17:13:47 2021 +0800

HBASE-25553 It is better for ReplicationTracker.getListOfRegionServers to 
return ServerName instead of String (#2928)

Signed-off-by: Wellington Chevreuil 
Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/replication/ReplicationTracker.java   |  7 ---
 .../hbase/replication/ReplicationTrackerZKImpl.java| 16 ++--
 .../regionserver/DumpReplicationQueues.java|  4 ++--
 .../regionserver/ReplicationSourceManager.java |  3 +--
 .../replication/TestReplicationTrackerZKImpl.java  | 18 +-
 5 files changed, 26 insertions(+), 22 deletions(-)

diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
index 93a3263..a33e23d 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.replication;
 
 import java.util.List;
 
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -37,13 +38,13 @@ public interface ReplicationTracker {
* Register a replication listener to receive replication events.
* @param listener the listener to register
*/
-  public void registerListener(ReplicationListener listener);
+  void registerListener(ReplicationListener listener);
 
-  public void removeListener(ReplicationListener listener);
+  void removeListener(ReplicationListener listener);
 
   /**
* Returns a list of other live region servers in the cluster.
* @return List of region servers.
*/
-  public List<String> getListOfRegionServers();
+  List<ServerName> getListOfRegionServers();
 }
diff --git 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
index 54c9c2c..6fc3c45 100644
--- 
a/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
+++ 
b/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTrackerZKImpl.java
@@ -20,7 +20,10 @@ package org.apache.hadoop.hbase.replication;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.stream.Collectors;
+
 import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.Stoppable;
 import org.apache.hadoop.hbase.zookeeper.ZKListener;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
@@ -49,7 +52,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
   // listeners to be notified
   private final List<ReplicationListener> listeners = new 
CopyOnWriteArrayList<>();
   // List of all the other region servers in this cluster
-  private final ArrayList<String> otherRegionServers = new ArrayList<>();
+  private final List<ServerName> otherRegionServers = new ArrayList<>();
 
   public ReplicationTrackerZKImpl(ZKWatcher zookeeper, Abortable abortable, 
Stoppable stopper) {
 this.zookeeper = zookeeper;
@@ -74,10 +77,10 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Return a snapshot of the current region servers.
*/
   @Override
-  public List<String> getListOfRegionServers() {
+  public List<ServerName> getListOfRegionServers() {
 refreshOtherRegionServersList(false);

-List<String> list = null;
+List<ServerName> list = null;
 synchronized (otherRegionServers) {
   list = new ArrayList<>(otherRegionServers);
 }
@@ -162,7 +165,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* if it was empty), false if the data was missing in ZK
*/
   private boolean refreshOtherRegionServersList(boolean watch) {
-List<String> newRsList = getRegisteredRegionServers(watch);
+List<ServerName> newRsList = getRegisteredRegionServers(watch);
 if (newRsList == null) {
   return false;
 } else {
@@ -178,7 +181,7 @@ public class ReplicationTrackerZKImpl implements 
ReplicationTracker {
* Get a list of all the other region servers in this cluster and set a watch
* @return a list of server nanes
*/
-  private List<String> getRegisteredRegionServers(boolean watch) {
+  private List<ServerName> getRegisteredRegionServers(boolean watch) {
 List<String> result = null;
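
On the caller side (DumpReplicationQueues and ReplicationSourceManager in the
diffstat above), returning ServerName means membership checks no longer go
through string parsing. A hedged illustration, not code from the patch:

    import java.util.List;
    import org.apache.hadoop.hbase.ServerName;

    final class LiveServerCheck {
      // ServerName implements equals/hashCode, so contains() is enough to
      // test liveness against the tracker's snapshot of other region servers.
      static boolean isStillLive(ServerName rs, List<ServerName> otherLiveServers) {
        return otherLiveServers.contains(rs);
      }
    }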

[hbase] branch master updated: Add Xin Sun as a developer

2020-12-07 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 979ad0f  Add Xin Sun as a developer
979ad0f is described below

commit 979ad0f3fc240c88f746695f7650076ab9cf824b
Author: XinSun 
AuthorDate: Tue Dec 8 10:49:39 2020 +0800

Add Xin Sun as a developer
---
 pom.xml | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/pom.xml b/pom.xml
index deab438..fe53e35 100755
--- a/pom.xml
+++ b/pom.xml
@@ -686,6 +686,12 @@
   <email>wangzh...@apache.org</email>
   <timezone>+8</timezone>
 </developer>
+<developer>
+  <id>sunxin</id>
+  <name>Xin Sun</name>
+  <email>sun...@apache.org</email>
+  <timezone>+8</timezone>
+</developer>