[hbase] branch branch-1 updated: HBASE-25601 Use ASF-official mailing list archives

2021-02-25 Thread elserj
This is an automated email from the ASF dual-hosted git repository.

elserj pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new a7574ec  HBASE-25601 Use ASF-official mailing list archives
a7574ec is described below

commit a7574ec9a0b22d7999bbb82173cbee58517d2849
Author: Josh Elser 
AuthorDate: Wed Feb 24 11:15:30 2021 -0500

HBASE-25601 Use ASF-official mailing list archives

Signed-off-by: Peter Somogyi 
Signed-off-by: Duo Zhang 

Closes #2983
---
 pom.xml  |  2 --
 src/main/asciidoc/_chapters/community.adoc   |  8 
 src/main/asciidoc/_chapters/compression.adoc |  4 +---
 src/main/asciidoc/_chapters/configuration.adoc   |  8 ++--
 src/main/asciidoc/_chapters/developer.adoc   | 14 ++
 src/main/asciidoc/_chapters/ops_mgt.adoc |  1 -
 src/main/asciidoc/_chapters/performance.adoc |  3 +--
 src/main/asciidoc/_chapters/schema_design.adoc   |  4 ++--
 src/main/asciidoc/_chapters/troubleshooting.adoc | 16 +++-
 9 files changed, 23 insertions(+), 37 deletions(-)

diff --git a/pom.xml b/pom.xml
index e896df5..ae4dabe 100644
--- a/pom.xml
+++ b/pom.xml
@@ -119,7 +119,6 @@
   https://mail-archives.apache.org/mod_mbox/hbase-user/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
-  https://search-hadoop.com/?q=&fc_project=HBase

@@ -130,7 +129,6 @@
   https://mail-archives.apache.org/mod_mbox/hbase-dev/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
-  https://search-hadoop.com/?q=&fc_project=HBase
 
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index 4b91b0d..5739df19 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -37,7 +37,7 @@ Just request the name of your branch be added to JIRA up on the developer's mail
 Thereafter you can file issues against your feature branch in Apache HBase JIRA.
 Your code you keep elsewhere -- it should be public so it can be observed -- and you can update dev mailing list on progress.
 When the feature is ready for commit, 3 +1s from committers will get your feature merged.
-See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
+See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
   about large feature dev branches]
 
 [[patchplusonepolicy]]
@@ -61,7 +61,7 @@ Any -1 on a patch by anyone vetos a patch; it cannot be committed until the just
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
-Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
+Here is how we agreed to set versions in JIRA when we resolve an issue.
 If trunk is going to be 0.98.0 then: 
 
 * Commit only to trunk: Mark with 0.98 
@@ -73,7 +73,7 @@ If trunk is going to be 0.98.0 then:
 [[hbase.when.to.close.jira]]
 .Policy on when to set a RESOLVED JIRA as CLOSED
 
-We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
+We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
 
 [[no.permanent.state.in.zk]]
 .Only transient state in ZooKeeper!
@@ -103,7 +103,7 @@ Owners do not need to be committers.
 [[hbase.commit.msg.format]]
 == Commit Message format
 
-We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following SVN commit message format:
+We agreed to the following Git commit message format:
 [source]
 
 HBASE-x . ()
diff --git a/src/main/asciidoc/_chapters/compression.adoc b/src/main/asciidoc/_chapters/compression.adoc
index 78fc6a2..e1ed4dc 100644
--- a/src/main/asciidoc/_chapters/compression.adoc
+++ b/src/main/asciidoc/_chapters/compression.adoc
@@ -31,8 +31,6 @@
 NOTE: Codecs mentioned in this section are for encoding and decoding data blocks or row keys.
 For information about replication codecs, see <>.

-Some of the information in this section is pulled from a link:http://search-hadoop.com/m/lL12B1PFVhp1/v=threaded[discussion] on the HBase Development mailing list.
-
 HBase supports several different compression algorithms which can be enabled on a ColumnFamily.
 Data block encoding attempts to limit duplication of information in keys, taking advantage of some of the fundamental designs and patterns of HBase, 
[hbase] branch branch-1.4 updated: HBASE-25601 Use ASF-official mailing list archives

2021-02-25 Thread elserj
This is an automated email from the ASF dual-hosted git repository.

elserj pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 998b794  HBASE-25601 Use ASF-official mailing list archives
998b794 is described below

commit 998b794aabd0f52fde162ce1d1c373a5682107d9
Author: Josh Elser 
AuthorDate: Wed Feb 24 11:15:30 2021 -0500

HBASE-25601 Use ASF-official mailing list archives

Signed-off-by: Peter Somogyi 
Signed-off-by: Duo Zhang 

Closes #2983
---
 pom.xml  |  2 --
 src/main/asciidoc/_chapters/community.adoc   |  8 
 src/main/asciidoc/_chapters/compression.adoc |  4 +---
 src/main/asciidoc/_chapters/configuration.adoc   | 15 +--
 src/main/asciidoc/_chapters/developer.adoc   | 14 ++
 src/main/asciidoc/_chapters/ops_mgt.adoc |  1 -
 src/main/asciidoc/_chapters/performance.adoc |  3 +--
 src/main/asciidoc/_chapters/schema_design.adoc   |  4 ++--
 src/main/asciidoc/_chapters/troubleshooting.adoc | 20 ++--
 9 files changed, 29 insertions(+), 42 deletions(-)

diff --git a/pom.xml b/pom.xml
index 9e51808..5bc05e8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -119,7 +119,6 @@
   https://mail-archives.apache.org/mod_mbox/hbase-user/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
-  https://search-hadoop.com/?q=&fc_project=HBase

@@ -130,7 +129,6 @@
   https://mail-archives.apache.org/mod_mbox/hbase-dev/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
-  https://search-hadoop.com/?q=&fc_project=HBase
 
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index 4b91b0d..5739df19 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -37,7 +37,7 @@ Just request the name of your branch be added to JIRA up on the developer's mail
 Thereafter you can file issues against your feature branch in Apache HBase JIRA.
 Your code you keep elsewhere -- it should be public so it can be observed -- and you can update dev mailing list on progress.
 When the feature is ready for commit, 3 +1s from committers will get your feature merged.
-See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
+See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
   about large feature dev branches]
 
 [[patchplusonepolicy]]
@@ -61,7 +61,7 @@ Any -1 on a patch by anyone vetos a patch; it cannot be committed until the just
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
-Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
+Here is how we agreed to set versions in JIRA when we resolve an issue.
 If trunk is going to be 0.98.0 then: 
 
 * Commit only to trunk: Mark with 0.98 
@@ -73,7 +73,7 @@ If trunk is going to be 0.98.0 then:
 [[hbase.when.to.close.jira]]
 .Policy on when to set a RESOLVED JIRA as CLOSED
 
-We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
+We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
 
 [[no.permanent.state.in.zk]]
 .Only transient state in ZooKeeper!
@@ -103,7 +103,7 @@ Owners do not need to be committers.
 [[hbase.commit.msg.format]]
 == Commit Message format
 
-We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following SVN commit message format:
+We agreed to the following Git commit message format:
 [source]
 
 HBASE-x . ()
diff --git a/src/main/asciidoc/_chapters/compression.adoc b/src/main/asciidoc/_chapters/compression.adoc
index 78fc6a2..e1ed4dc 100644
--- a/src/main/asciidoc/_chapters/compression.adoc
+++ b/src/main/asciidoc/_chapters/compression.adoc
@@ -31,8 +31,6 @@
 NOTE: Codecs mentioned in this section are for encoding and decoding data blocks or row keys.
 For information about replication codecs, see <>.

-Some of the information in this section is pulled from a link:http://search-hadoop.com/m/lL12B1PFVhp1/v=threaded[discussion] on the HBase Development mailing list.
-
 HBase supports several different compression algorithms which can be enabled on a ColumnFamily.
 Data block encoding attempts to limit duplication of information in keys, taking advantage of some of the fundamental designs and patt

[hbase] branch branch-2 updated: HBASE-25601 Use ASF-official mailing list archives

2021-02-25 Thread elserj
This is an automated email from the ASF dual-hosted git repository.

elserj pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 33c9f77  HBASE-25601 Use ASF-official mailing list archives
33c9f77 is described below

commit 33c9f774d64b3905bb89e9b80ef4551078ca9131
Author: Josh Elser 
AuthorDate: Wed Feb 24 11:15:30 2021 -0500

HBASE-25601 Use ASF-official mailing list archives

Signed-off-by: Peter Somogyi 
Signed-off-by: Duo Zhang 

Closes #2983
---
 RELEASENOTES.md  |  2 +-
 pom.xml  |  2 --
 src/main/asciidoc/_chapters/community.adoc   |  8 
 src/main/asciidoc/_chapters/compression.adoc |  4 +---
 src/main/asciidoc/_chapters/configuration.adoc   |  2 +-
 src/main/asciidoc/_chapters/developer.adoc   |  8 
 src/main/asciidoc/_chapters/ops_mgt.adoc |  1 -
 src/main/asciidoc/_chapters/performance.adoc |  3 +--
 src/main/asciidoc/_chapters/schema_design.adoc   |  4 ++--
 src/main/asciidoc/_chapters/troubleshooting.adoc | 16 +++-
 10 files changed, 17 insertions(+), 33 deletions(-)

diff --git a/RELEASENOTES.md b/RELEASENOTES.md
index 78468fc..527e543 100644
--- a/RELEASENOTES.md
+++ b/RELEASENOTES.md
@@ -8457,7 +8457,7 @@ MVCC methods cleaned up. Make a bit more sense now. Less of them.

 Simplifies our update of MemStore/WAL. Now we update memstore AFTER we add to WAL (but before we sync). This fixes possible dataloss when two edits came in with same coordinates; we could order the edits in memstore differently to how they arrived in the WAL.

-Marked as an incompatible change because it breaks Distributed Log Replay, a feature we'd determined already was unreliable and to be removed (See http://search-hadoop.com/m/YGbbhTJpoal8GD1).
+Marked as an incompatible change because it breaks Distributed Log Replay, a feature we'd determined already was unreliable and to be removed.
 
 
 ---
diff --git a/pom.xml b/pom.xml
index a1c23e2..2e4ca35 100755
--- a/pom.xml
+++ b/pom.xml
@@ -111,7 +111,6 @@
   http://mail-archives.apache.org/mod_mbox/hbase-user/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
-  https://search-hadoop.com/?q=&fc_project=HBase

@@ -122,7 +121,6 @@
   http://mail-archives.apache.org/mod_mbox/hbase-dev/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
-  https://search-hadoop.com/?q=&fc_project=HBase
 
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index ffe209e..91a596d 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -37,13 +37,13 @@ Just request the name of your branch be added to JIRA up on the developer's mail
 Thereafter you can file issues against your feature branch in Apache HBase JIRA.
 Your code you keep elsewhere -- it should be public so it can be observed -- and you can update dev mailing list on progress.
 When the feature is ready for commit, 3 +1s from committers will get your feature merged.
-See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
+See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
   about large feature dev branches]
 
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
-Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
+Here is how we agreed to set versions in JIRA when we resolve an issue.
 If master is going to be 2.0.0, and branch-1 1.4.0 then:
 
 * Commit only to master: Mark with 2.0.0
@@ -54,7 +54,7 @@ If master is going to be 2.0.0, and branch-1 1.4.0 then:
 [[hbase.when.to.close.jira]]
 .Policy on when to set a RESOLVED JIRA as CLOSED
 
-We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
+We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
 
 [[no.permanent.state.in.zk]]
 .Only transient state in ZooKeeper!
@@ -99,7 +99,7 @@ NOTE: End-of-life releases are not included in this list.
 [[hbase.commit.msg.format]]
 == Commit Message format
 
-We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following Git commit message format:
+We agreed to the following Git commit message format:
 [source]
 
 HBASE-x . ()
diff --git a/src/ma

[hbase] branch branch-2.2 updated: HBASE-25601 Use ASF-official mailing list archives

2021-02-25 Thread elserj
This is an automated email from the ASF dual-hosted git repository.

elserj pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 2e09099  HBASE-25601 Use ASF-official mailing list archives
2e09099 is described below

commit 2e09099fc964ea51f6f5c20ced9495de6aeca2ad
Author: Josh Elser 
AuthorDate: Wed Feb 24 11:15:30 2021 -0500

HBASE-25601 Use ASF-official mailing list archives

Signed-off-by: Peter Somogyi 
Signed-off-by: Duo Zhang 

Closes #2983
---
 pom.xml  |  2 --
 src/main/asciidoc/_chapters/community.adoc   |  8 
 src/main/asciidoc/_chapters/compression.adoc |  4 +---
 src/main/asciidoc/_chapters/configuration.adoc   |  8 ++--
 src/main/asciidoc/_chapters/developer.adoc   | 14 ++
 src/main/asciidoc/_chapters/ops_mgt.adoc |  1 -
 src/main/asciidoc/_chapters/performance.adoc |  3 +--
 src/main/asciidoc/_chapters/schema_design.adoc   |  4 ++--
 src/main/asciidoc/_chapters/troubleshooting.adoc | 16 +++-
 9 files changed, 23 insertions(+), 37 deletions(-)

diff --git a/pom.xml b/pom.xml
index 40ee164..e3d0871 100755
--- a/pom.xml
+++ b/pom.xml
@@ -109,7 +109,6 @@
   http://mail-archives.apache.org/mod_mbox/hbase-user/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
-  https://search-hadoop.com/?q=&fc_project=HBase

@@ -120,7 +119,6 @@
   http://mail-archives.apache.org/mod_mbox/hbase-dev/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
-  https://search-hadoop.com/?q=&fc_project=HBase
 
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index 3a429d6..4f82555 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -37,13 +37,13 @@ Just request the name of your branch be added to JIRA up on the developer's mail
 Thereafter you can file issues against your feature branch in Apache HBase JIRA.
 Your code you keep elsewhere -- it should be public so it can be observed -- and you can update dev mailing list on progress.
 When the feature is ready for commit, 3 +1s from committers will get your feature merged.
-See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
+See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
   about large feature dev branches]
 
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
-Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
+Here is how we agreed to set versions in JIRA when we resolve an issue.
 If master is going to be 2.0.0, and branch-1 1.4.0 then:
 
 * Commit only to master: Mark with 2.0.0
@@ -54,7 +54,7 @@ If master is going to be 2.0.0, and branch-1 1.4.0 then:
 [[hbase.when.to.close.jira]]
 .Policy on when to set a RESOLVED JIRA as CLOSED
 
-We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
+We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
 
 [[no.permanent.state.in.zk]]
 .Only transient state in ZooKeeper!
@@ -105,7 +105,7 @@ NOTE: End-of-life releases are not included in this list.
 [[hbase.commit.msg.format]]
 == Commit Message format
 
-We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following Git commit message format:
+We agreed to the following Git commit message format:
 [source]
 
 HBASE-x . ()
diff --git a/src/main/asciidoc/_chapters/compression.adoc b/src/main/asciidoc/_chapters/compression.adoc
index b2ff5ce..a3082c0 100644
--- a/src/main/asciidoc/_chapters/compression.adoc
+++ b/src/main/asciidoc/_chapters/compression.adoc
@@ -31,8 +31,6 @@
 NOTE: Codecs mentioned in this section are for encoding and decoding data blocks or row keys.
 For information about replication codecs, see <>.

-Some of the information in this section is pulled from a link:http://search-hadoop.com/m/lL12B1PFVhp1/v=threaded[discussion] on the HBase Development mailing list.
-
 HBase supports several different compression algorithms which can be enabled on a ColumnFamily.
 Data block encoding attempts to limit duplication of information in keys, taking advantage of some of the fundamental designs and patterns of HBase, such as sorted row keys and the schema of a given table.
 Comp

[hbase] branch branch-2.4 updated: HBASE-25601 Use ASF-official mailing list archives

2021-02-25 Thread elserj
This is an automated email from the ASF dual-hosted git repository.

elserj pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new f368d54  HBASE-25601 Use ASF-official mailing list archives
f368d54 is described below

commit f368d54506a50f6b95633718325f5d58b7a35b52
Author: Josh Elser 
AuthorDate: Wed Feb 24 11:15:30 2021 -0500

HBASE-25601 Use ASF-official mailing list archives

Signed-off-by: Peter Somogyi 
Signed-off-by: Duo Zhang 

Closes #2983
---
 RELEASENOTES.md  |  2 +-
 pom.xml  |  2 --
 src/main/asciidoc/_chapters/community.adoc   |  8 
 src/main/asciidoc/_chapters/compression.adoc |  4 +---
 src/main/asciidoc/_chapters/configuration.adoc   |  2 +-
 src/main/asciidoc/_chapters/developer.adoc   |  8 
 src/main/asciidoc/_chapters/ops_mgt.adoc |  1 -
 src/main/asciidoc/_chapters/performance.adoc |  3 +--
 src/main/asciidoc/_chapters/schema_design.adoc   |  4 ++--
 src/main/asciidoc/_chapters/troubleshooting.adoc | 16 +++-
 10 files changed, 17 insertions(+), 33 deletions(-)

diff --git a/RELEASENOTES.md b/RELEASENOTES.md
index dfd6902..4f37839 100644
--- a/RELEASENOTES.md
+++ b/RELEASENOTES.md
@@ -17931,7 +17931,7 @@ MVCC methods cleaned up. Make a bit more sense now. Less of them.

 Simplifies our update of MemStore/WAL. Now we update memstore AFTER we add to WAL (but before we sync). This fixes possible dataloss when two edits came in with same coordinates; we could order the edits in memstore differently to how they arrived in the WAL.

-Marked as an incompatible change because it breaks Distributed Log Replay, a feature we'd determined already was unreliable and to be removed (See http://search-hadoop.com/m/YGbbhTJpoal8GD1).
+Marked as an incompatible change because it breaks Distributed Log Replay, a feature we'd determined already was unreliable and to be removed.
 
 
 ---
diff --git a/pom.xml b/pom.xml
index ad079c4..4b7bbcd 100755
--- a/pom.xml
+++ b/pom.xml
@@ -111,7 +111,6 @@
   http://mail-archives.apache.org/mod_mbox/hbase-user/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
-  https://search-hadoop.com/?q=&fc_project=HBase

@@ -122,7 +121,6 @@
   http://mail-archives.apache.org/mod_mbox/hbase-dev/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
-  https://search-hadoop.com/?q=&fc_project=HBase
 
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index ffe209e..91a596d 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -37,13 +37,13 @@ Just request the name of your branch be added to JIRA up on the developer's mail
 Thereafter you can file issues against your feature branch in Apache HBase JIRA.
 Your code you keep elsewhere -- it should be public so it can be observed -- and you can update dev mailing list on progress.
 When the feature is ready for commit, 3 +1s from committers will get your feature merged.
-See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
+See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
   about large feature dev branches]
 
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
-Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
+Here is how we agreed to set versions in JIRA when we resolve an issue.
 If master is going to be 2.0.0, and branch-1 1.4.0 then:
 
 * Commit only to master: Mark with 2.0.0
@@ -54,7 +54,7 @@ If master is going to be 2.0.0, and branch-1 1.4.0 then:
 [[hbase.when.to.close.jira]]
 .Policy on when to set a RESOLVED JIRA as CLOSED
 
-We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
+We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
 
 [[no.permanent.state.in.zk]]
 .Only transient state in ZooKeeper!
@@ -99,7 +99,7 @@ NOTE: End-of-life releases are not included in this list.
 [[hbase.commit.msg.format]]
 == Commit Message format
 
-We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following Git commit message format:
+We agreed to the following Git commit message format:
 [source]
 
 HBASE-x . ()
diff --git a/

[hbase] branch master updated: HBASE-25601 Use ASF-official mailing list archives

2021-02-25 Thread elserj
This is an automated email from the ASF dual-hosted git repository.

elserj pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new a7d0445  HBASE-25601 Use ASF-official mailing list archives
a7d0445 is described below

commit a7d0445a2111499461500b458a920bf0219a6db1
Author: Josh Elser 
AuthorDate: Wed Feb 24 11:15:30 2021 -0500

HBASE-25601 Use ASF-official mailing list archives

Signed-off-by: Peter Somogyi 
Signed-off-by: Duo Zhang 

Closes #2983
---
 pom.xml  |  2 --
 src/main/asciidoc/_chapters/community.adoc   |  8 
 src/main/asciidoc/_chapters/compression.adoc |  4 +---
 src/main/asciidoc/_chapters/configuration.adoc   |  2 +-
 src/main/asciidoc/_chapters/developer.adoc   |  6 +++---
 src/main/asciidoc/_chapters/ops_mgt.adoc |  1 -
 src/main/asciidoc/_chapters/performance.adoc |  3 +--
 src/main/asciidoc/_chapters/schema_design.adoc   |  4 ++--
 src/main/asciidoc/_chapters/troubleshooting.adoc | 16 +++-
 9 files changed, 15 insertions(+), 31 deletions(-)

diff --git a/pom.xml b/pom.xml
index e4505d6..43b4080 100755
--- a/pom.xml
+++ b/pom.xml
@@ -112,7 +112,6 @@
   https://lists.apache.org/list.html?u...@hbase.apache.org
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
-  https://search-hadoop.com/?q=&fc_project=HBase

@@ -123,7 +122,6 @@
   https://lists.apache.org/list.html?d...@hbase.apache.org
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
-  https://search-hadoop.com/?q=&fc_project=HBase
 
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index 3db6482..6af5d48 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -37,13 +37,13 @@ Just request the name of your branch be added to JIRA up on the developer's mail
 Thereafter you can file issues against your feature branch in Apache HBase JIRA.
 Your code you keep elsewhere -- it should be public so it can be observed -- and you can update dev mailing list on progress.
 When the feature is ready for commit, 3 +1s from committers will get your feature merged.
-See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
+See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
   about large feature dev branches]
 
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
-Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we
+Here is how we agreed to set versions in JIRA when we
 resolve an issue. If master is going to be 3.0.0, branch-2 will be 2.4.0, and branch-1 will be
 1.7.0 then:
 
@@ -59,7 +59,7 @@ resolve an issue. If master is going to be 3.0.0, branch-2 will be 2.4.0, and br
 [[hbase.when.to.close.jira]]
 .Policy on when to set a RESOLVED JIRA as CLOSED
 
-We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
+We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
 
 [[no.permanent.state.in.zk]]
 .Only transient state in ZooKeeper!
@@ -101,7 +101,7 @@ NOTE: End-of-life releases are not included in this list.
 [[hbase.commit.msg.format]]
 == Commit Message format
 
-We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following Git commit message format:
+We agreed to the following Git commit message format:
 [source]
 
 HBASE-x . ()
diff --git a/src/main/asciidoc/_chapters/compression.adoc b/src/main/asciidoc/_chapters/compression.adoc
index 9a1bf0c..5e1ff3a 100644
--- a/src/main/asciidoc/_chapters/compression.adoc
+++ b/src/main/asciidoc/_chapters/compression.adoc
@@ -31,8 +31,6 @@
 NOTE: Codecs mentioned in this section are for encoding and decoding data blocks or row keys.
 For information about replication codecs, see <>.

-Some of the information in this section is pulled from a link:http://search-hadoop.com/m/lL12B1PFVhp1/v=threaded[discussion] on the HBase Development mailing list.
-
 HBase supports several different compression algorithms which can be enabled on a ColumnFamily.
 Data block encoding attempts to limit duplication of information in keys, taking advantage of some of the fundamental designs and patterns of HBase, such as sorted row keys and the schema of a given table.
 Compressors redu

[hbase] branch branch-2.3 updated: HBASE-25601 Use ASF-official mailing list archives

2021-02-25 Thread elserj
This is an automated email from the ASF dual-hosted git repository.

elserj pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new 2333f8a  HBASE-25601 Use ASF-official mailing list archives
2333f8a is described below

commit 2333f8a73dbd0fbb04363f9e2d6eb53fb48d80fa
Author: Josh Elser 
AuthorDate: Wed Feb 24 11:15:30 2021 -0500

HBASE-25601 Use ASF-official mailing list archives

Signed-off-by: Peter Somogyi 
Signed-off-by: Duo Zhang 

Closes #2983
---
 RELEASENOTES.md  |  2 +-
 pom.xml  |  2 --
 src/main/asciidoc/_chapters/community.adoc   |  8 
 src/main/asciidoc/_chapters/compression.adoc |  4 +---
 src/main/asciidoc/_chapters/configuration.adoc   |  2 +-
 src/main/asciidoc/_chapters/developer.adoc   |  8 
 src/main/asciidoc/_chapters/ops_mgt.adoc |  1 -
 src/main/asciidoc/_chapters/performance.adoc |  3 +--
 src/main/asciidoc/_chapters/schema_design.adoc   |  4 ++--
 src/main/asciidoc/_chapters/troubleshooting.adoc | 16 +++-
 10 files changed, 17 insertions(+), 33 deletions(-)

diff --git a/RELEASENOTES.md b/RELEASENOTES.md
index 6f8a457..74d7fb4 100644
--- a/RELEASENOTES.md
+++ b/RELEASENOTES.md
@@ -10893,7 +10893,7 @@ MVCC methods cleaned up. Make a bit more sense now. Less of them.

 Simplifies our update of MemStore/WAL. Now we update memstore AFTER we add to WAL (but before we sync). This fixes possible dataloss when two edits came in with same coordinates; we could order the edits in memstore differently to how they arrived in the WAL.

-Marked as an incompatible change because it breaks Distributed Log Replay, a feature we'd determined already was unreliable and to be removed (See http://search-hadoop.com/m/YGbbhTJpoal8GD1).
+Marked as an incompatible change because it breaks Distributed Log Replay, a feature we'd determined already was unreliable and to be removed.
 
 
 ---
diff --git a/pom.xml b/pom.xml
index d96a8c2..9afb1a2 100755
--- a/pom.xml
+++ b/pom.xml
@@ -111,7 +111,6 @@
   http://mail-archives.apache.org/mod_mbox/hbase-user/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.user
-  https://search-hadoop.com/?q=&fc_project=HBase

@@ -122,7 +121,6 @@
   http://mail-archives.apache.org/mod_mbox/hbase-dev/
   https://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel
-  https://search-hadoop.com/?q=&fc_project=HBase
 
diff --git a/src/main/asciidoc/_chapters/community.adoc 
b/src/main/asciidoc/_chapters/community.adoc
index ffe209e..91a596d 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -37,13 +37,13 @@ Just request the name of your branch be added to JIRA up on the developer's mail
 Thereafter you can file issues against your feature branch in Apache HBase JIRA.
 Your code you keep elsewhere -- it should be public so it can be observed -- and you can update dev mailing list on progress.
 When the feature is ready for commit, 3 +1s from committers will get your feature merged.
-See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
+See link:https://lists.apache.org/thread.html/200513c7e7e4df23c8b9134009d61205c79314e77f222d396006%401346870308%40%3Cdev.hbase.apache.org%3E[HBase, mail # dev - Thoughts
   about large feature dev branches]
 
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
-Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
+Here is how we agreed to set versions in JIRA when we resolve an issue.
 If master is going to be 2.0.0, and branch-1 1.4.0 then:
 
 * Commit only to master: Mark with 2.0.0
@@ -54,7 +54,7 @@ If master is going to be 2.0.0, and branch-1 1.4.0 then:
 [[hbase.when.to.close.jira]]
 .Policy on when to set a RESOLVED JIRA as CLOSED
 
-We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
+We agreed that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
 
 [[no.permanent.state.in.zk]]
 .Only transient state in ZooKeeper!
@@ -99,7 +99,7 @@ NOTE: End-of-life releases are not included in this list.
 [[hbase.commit.msg.format]]
 == Commit Message format
 
-We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following Git commit message format:
+We agreed to the following Git commit message format:
 [source]
 ----
 HBASE-xxxxx <title>. (<contributor>)
diff --git a/

[hbase-site] branch asf-site updated: INFRA-10751 Empty commit

2021-02-25 Thread git-site-role
This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hbase-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ad0e535  INFRA-10751 Empty commit
ad0e535 is described below

commit ad0e5351581ad3a70bfce31a9f824d62d31b6d39
Author: jenkins 
AuthorDate: Thu Feb 25 20:17:23 2021 +

INFRA-10751 Empty commit



[hbase] branch master updated: HBASE-25596: Fix NPE and avoid permanent unreplicated data due to EOF (#2987)

2021-02-25 Thread apurtell
This is an automated email from the ASF dual-hosted git repository.

apurtell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 3f1c486  HBASE-25596: Fix NPE and avoid permanent unreplicated data 
due to EOF (#2987)
3f1c486 is described below

commit 3f1c486ddb8e32ef68cd4c2986d4b2e7f3667f3d
Author: Sandeep Pal <50725353+sandeepvina...@users.noreply.github.com>
AuthorDate: Thu Feb 25 13:36:11 2021 -0800

HBASE-25596: Fix NPE and avoid permanent unreplicated data due to EOF 
(#2987)

Signed-off-by: Xu Cang 
Signed-off-by: Andrew Purtell 
---
 .../regionserver/ReplicationSource.java|   7 +-
 .../regionserver/ReplicationSourceWALReader.java   | 145 ++
 .../SerialReplicationSourceWALReader.java  |   4 +-
 .../replication/regionserver/WALEntryBatch.java|   4 +
 .../replication/regionserver/WALEntryStream.java   |   6 +-
 .../hbase/replication/TestReplicationBase.java |  27 +-
 .../TestReplicationEmptyWALRecovery.java   | 298 +++--
 .../regionserver/TestWALEntryStream.java   |  62 -
 8 files changed, 472 insertions(+), 81 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index 3272cf1..95bd686 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -30,6 +30,7 @@ import java.util.Set;
 import java.util.TreeMap;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.PriorityBlockingQueue;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
@@ -65,7 +66,6 @@ import org.apache.hadoop.hbase.wal.WAL.Entry;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.hbase.thirdparty.com.google.common.collect.Lists;
 
 /**
@@ -260,6 +260,11 @@ public class ReplicationSource implements ReplicationSourceInterface {
     }
   }
 
+  @InterfaceAudience.Private
+  public Map<String, PriorityBlockingQueue<Path>> getQueues() {
+    return logQueue.getQueues();
+  }
+
   @Override
   public void addHFileRefs(TableName tableName, byte[] family, List<Pair<Path, Path>> pairs)
       throws ReplicationException {
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
index 301a9e8..57c0a16 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
@@ -41,7 +41,6 @@ import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.StoreDescriptor;
 
@@ -123,44 +122,64 @@ class ReplicationSourceWALReader extends Thread {
   @Override
   public void run() {
     int sleepMultiplier = 1;
-    while (isReaderRunning()) { // we only loop back here if something fatal happened to our stream
-      try (WALEntryStream entryStream =
-          new WALEntryStream(logQueue, conf, currentPosition,
-              source.getWALFileLengthProvider(), source.getServerWALsBelongTo(),
-              source.getSourceMetrics(), walGroupId)) {
-        while (isReaderRunning()) { // loop here to keep reusing stream while we can
-          if (!source.isPeerEnabled()) {
-            Threads.sleep(sleepForRetries);
-            continue;
-          }
-          if (!checkQuota()) {
-            continue;
+    WALEntryBatch batch = null;
+    WALEntryStream entryStream = null;
+    try {
+      // we only loop back here if something fatal happened to our stream
+      while (isReaderRunning()) {
+        try {
+          entryStream =
+            new WALEntryStream(logQueue, conf, currentPosition, source.getWALFileLengthProvider(),
+              source.getServerWALsBelongTo(), source.getSourceMetrics(), walGroupId);
+          while (isReaderRunning()) { // loop here to keep reusing stream while we can
+            if (!source.isPeerEnabled()) {
+              Threads.sleep(sleepForRetries);
+              continue;
+            }
+            if (!checkQuota()) {
+              continue;
+            }
+
+            batch = createBatch(entryStream

[hbase] branch branch-2 updated: HBASE-25596: Fix NPE and avoid permanent unreplicated data due to EOF (#2990)

2021-02-25 Thread apurtell
This is an automated email from the ASF dual-hosted git repository.

apurtell pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new d724d05  HBASE-25596: Fix NPE and avoid permanent unreplicated data 
due to EOF (#2990)
d724d05 is described below

commit d724d0576fcafb507b32d3f6636228d559a924fa
Author: Sandeep Pal <50725353+sandeepvina...@users.noreply.github.com>
AuthorDate: Thu Feb 25 13:36:31 2021 -0800

HBASE-25596: Fix NPE and avoid permanent unreplicated data due to EOF 
(#2990)

Signed-off-by: Xu Cang 
Signed-off-by: Andrew Purtell 
---
 .../regionserver/ReplicationSource.java|   8 +-
 .../regionserver/ReplicationSourceWALReader.java   | 152 +++
 .../SerialReplicationSourceWALReader.java  |   4 +-
 .../replication/regionserver/WALEntryBatch.java|   4 +
 .../replication/regionserver/WALEntryStream.java   |   6 +-
 .../hbase/replication/TestReplicationBase.java |  66 +++--
 .../TestReplicationEmptyWALRecovery.java   | 298 +++--
 .../regionserver/TestWALEntryStream.java   |  62 -
 8 files changed, 501 insertions(+), 99 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index 063b3d4..42757b4 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -30,12 +30,12 @@ import java.util.Set;
 import java.util.TreeMap;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.PriorityBlockingQueue;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.function.Predicate;
-
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
@@ -66,7 +66,6 @@ import org.apache.hadoop.hbase.wal.WAL.Entry;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.hbase.thirdparty.com.google.common.collect.Lists;
 
 /**
@@ -265,6 +264,11 @@ public class ReplicationSource implements ReplicationSourceInterface {
     }
   }
 
+  @InterfaceAudience.Private
+  public Map<String, PriorityBlockingQueue<Path>> getQueues() {
+    return logQueue.getQueues();
+  }
+
   @Override
   public void addHFileRefs(TableName tableName, byte[] family, List<Pair<Path, Path>> pairs)
       throws ReplicationException {
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
index f52a83a..57c0a16 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
@@ -41,7 +41,6 @@ import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.StoreDescriptor;
 
@@ -123,44 +122,64 @@ class ReplicationSourceWALReader extends Thread {
   @Override
   public void run() {
     int sleepMultiplier = 1;
-    while (isReaderRunning()) { // we only loop back here if something fatal happened to our stream
-      try (WALEntryStream entryStream =
-          new WALEntryStream(logQueue, conf, currentPosition,
-              source.getWALFileLengthProvider(), source.getServerWALsBelongTo(),
-              source.getSourceMetrics(), walGroupId)) {
-        while (isReaderRunning()) { // loop here to keep reusing stream while we can
-          if (!source.isPeerEnabled()) {
-            Threads.sleep(sleepForRetries);
-            continue;
-          }
-          if (!checkQuota()) {
-            continue;
+    WALEntryBatch batch = null;
+    WALEntryStream entryStream = null;
+    try {
+      // we only loop back here if something fatal happened to our stream
+      while (isReaderRunning()) {
+        try {
+          entryStream =
+            new WALEntryStream(logQueue, conf, currentPosition, source.getWALFileLengthProvider(),
+              source.getServerWALsBelongTo(), source.getSourceMetrics(), walGroupId);
+          while (isReaderRunning()) { // loop here to keep reusing stream while we can
+            if (!so
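The restructured `run()` loop above (truncated in this digest) drops try-with-resources in favor of explicit stream handling, so a stream that hits EOF or another fatal error can be closed and replaced without losing the reader thread. A minimal sketch of that pattern follows; `EntryStream`, the retry cap, and the collected output are illustrative stand-ins, not HBase's actual reader.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ReaderLoopSketch {
    // Illustrative stand-in for a WAL entry stream.
    static class EntryStream implements Closeable {
        private int reads = 0;
        String next() throws IOException {
            if (++reads > 2) throw new IOException("EOF"); // simulate a broken stream
            return "entry-" + reads;
        }
        @Override public void close() { /* release underlying file handles */ }
    }

    static final List<String> seen = new ArrayList<>();

    public static void main(String[] args) {
        int fatalErrors = 0;
        EntryStream stream = null;
        // Outer loop: we only come back here when something fatal happened
        // to the current stream; cap retries so the sketch terminates.
        while (fatalErrors < 2) {
            try {
                stream = new EntryStream();
                while (true) { // inner loop: keep reusing the stream while we can
                    seen.add(stream.next());
                }
            } catch (IOException fatal) {
                fatalErrors++; // a real reader would reset position or skip the bad WAL here
            } finally {
                if (stream != null) {
                    stream.close(); // explicit close replaces try-with-resources
                    stream = null;
                }
            }
        }
        System.out.println(seen); // prints [entry-1, entry-2, entry-1, entry-2]
    }
}
```

The key point is that the stream's lifetime is no longer tied to a single `try` scope: the error handler can inspect the failure, close the stream, and let the outer loop open a fresh one.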

[hbase] branch branch-2.4 updated: HBASE-25596: Fix NPE and avoid permanent unreplicated data due to EOF (#2990)

2021-02-25 Thread apurtell
This is an automated email from the ASF dual-hosted git repository.

apurtell pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 9590080  HBASE-25596: Fix NPE and avoid permanent unreplicated data 
due to EOF (#2990)
9590080 is described below

commit 9590080a2132014990efd8da450e734ae99c749e
Author: Sandeep Pal <50725353+sandeepvina...@users.noreply.github.com>
AuthorDate: Thu Feb 25 13:36:31 2021 -0800

HBASE-25596: Fix NPE and avoid permanent unreplicated data due to EOF 
(#2990)

Signed-off-by: Xu Cang 
Signed-off-by: Andrew Purtell 

Conflicts:

hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
---
 .../regionserver/ReplicationSource.java|   8 +-
 .../regionserver/ReplicationSourceWALReader.java   | 152 +++
 .../SerialReplicationSourceWALReader.java  |   4 +-
 .../replication/regionserver/WALEntryBatch.java|   4 +
 .../replication/regionserver/WALEntryStream.java   |   6 +-
 .../hbase/replication/TestReplicationBase.java |  67 +++--
 .../TestReplicationEmptyWALRecovery.java   | 298 +++--
 .../regionserver/TestWALEntryStream.java   |  62 -
 8 files changed, 503 insertions(+), 98 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index 063b3d4..42757b4 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -30,12 +30,12 @@ import java.util.Set;
 import java.util.TreeMap;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.PriorityBlockingQueue;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.function.Predicate;
-
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
@@ -66,7 +66,6 @@ import org.apache.hadoop.hbase.wal.WAL.Entry;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.hbase.thirdparty.com.google.common.collect.Lists;
 
 /**
@@ -265,6 +264,11 @@ public class ReplicationSource implements ReplicationSourceInterface {
     }
   }
 
+  @InterfaceAudience.Private
+  public Map<String, PriorityBlockingQueue<Path>> getQueues() {
+    return logQueue.getQueues();
+  }
+
   @Override
   public void addHFileRefs(TableName tableName, byte[] family, List<Pair<Path, Path>> pairs)
       throws ReplicationException {
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
index f52a83a..57c0a16 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java
@@ -41,7 +41,6 @@ import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.BulkLoadDescriptor;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.StoreDescriptor;
 
@@ -123,44 +122,64 @@ class ReplicationSourceWALReader extends Thread {
   @Override
   public void run() {
     int sleepMultiplier = 1;
-    while (isReaderRunning()) { // we only loop back here if something fatal happened to our stream
-      try (WALEntryStream entryStream =
-          new WALEntryStream(logQueue, conf, currentPosition,
-              source.getWALFileLengthProvider(), source.getServerWALsBelongTo(),
-              source.getSourceMetrics(), walGroupId)) {
-        while (isReaderRunning()) { // loop here to keep reusing stream while we can
-          if (!source.isPeerEnabled()) {
-            Threads.sleep(sleepForRetries);
-            continue;
-          }
-          if (!checkQuota()) {
-            continue;
+    WALEntryBatch batch = null;
+    WALEntryStream entryStream = null;
+    try {
+      // we only loop back here if something fatal happened to our stream
+      while (isReaderRunning()) {
+        try {
+          entryStream =
+            new WALEntryStream(logQueue, conf, currentPosition, source.getWALFileLengthProvider(),
+              source.getServerWALsBelongTo(), source.getSourceMetrics(), w

[hbase] branch master updated: HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

2021-02-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 8d0de96  HBASE-25590 Bulkload replication HFileRefs cannot be cleared 
in some cases where set exclude-namespace/exclude-table-cfs (#2969)
8d0de96 is described below

commit 8d0de969765c0b27991e85b09b52c240321ce881
Author: XinSun 
AuthorDate: Fri Feb 26 09:50:23 2021 +0800

HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases 
where set exclude-namespace/exclude-table-cfs (#2969)

Signed-off-by: Wellington Chevreuil 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  29 +-
 .../replication/TestReplicationPeerConfig.java | 366 -
 .../NamespaceTableCfWALEntryFilter.java|  84 +
 .../regionserver/ReplicationSource.java|  28 +-
 .../regionserver/TestBulkLoadReplication.java  |   8 +-
 .../TestBulkLoadReplicationHFileRefs.java  | 310 +
 6 files changed, 557 insertions(+), 268 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 5ca5cef..3b03ae4 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
+
 /**
  * A configuration for the replication peer cluster.
  */
@@ -301,6 +303,19 @@ public class ReplicationPeerConfig {
    * @return true if the table need replicate to the peer cluster
    */
   public boolean needToReplicate(TableName table) {
+    return needToReplicate(table, null);
+  }
+
+  /**
+   * Decide whether the passed family of the table need replicate to the peer cluster according to
+   * this peer config.
+   * @param table name of the table
+   * @param family family name
+   * @return true if (the family of) the table need replicate to the peer cluster.
+   *         If passed family is null, return true if any CFs of the table need replicate;
+   *         If passed family is not null, return true if the passed family need replicate.
+   */
+  public boolean needToReplicate(TableName table, byte[] family) {
     String namespace = table.getNamespaceAsString();
     if (replicateAllUserTables) {
       // replicate all user tables, but filter by exclude namespaces and table-cfs config
@@ -312,9 +327,12 @@ public class ReplicationPeerConfig {
         return true;
       }
       Collection<String> cfs = excludeTableCFsMap.get(table);
-      // if cfs is null or empty then we can make sure that we do not need to replicate this table,
+      // If cfs is null or empty then we can make sure that we do not need to replicate this table,
       // otherwise, we may still need to replicate the table but filter out some families.
-      return cfs != null && !cfs.isEmpty();
+      return cfs != null && !cfs.isEmpty()
+        // If exclude-table-cfs contains passed family then we make sure that we do not need to
+        // replicate this family.
+        && (family == null || !cfs.contains(Bytes.toString(family)));
     } else {
       // Not replicate all user tables, so filter by namespaces and table-cfs config
       if (namespaces == null && tableCFsMap == null) {
@@ -325,7 +343,12 @@ public class ReplicationPeerConfig {
       if (namespaces != null && namespaces.contains(namespace)) {
         return true;
       }
-      return tableCFsMap != null && tableCFsMap.containsKey(table);
+      // If table-cfs contains this table then we can make sure that we need replicate some CFs of
+      // this table. Further we need all CFs if tableCFsMap.get(table) is null or empty.
+      return tableCFsMap != null && tableCFsMap.containsKey(table)
+        && (family == null || CollectionUtils.isEmpty(tableCFsMap.get(table))
+        // If table-cfs must contain passed family then we need to replicate this family.
+        || tableCFsMap.get(table).contains(Bytes.toString(family)));
     }
   }
 }
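The family-level decision this patch adds can be restated as a small standalone sketch of the exclude-table-cfs branch; `needToReplicateExcludeMode` and its parameters are illustrative simplifications that only mirror the logic in the diff above, not the real `ReplicationPeerConfig` API.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class NeedToReplicateSketch {
    // "Replicate all user tables" mode: everything replicates except the
    // listed namespaces and table-cfs entries.
    static boolean needToReplicateExcludeMode(
            Set<String> excludeNamespaces,
            Map<String, Collection<String>> excludeTableCFs,
            String namespace, String table, String family) {
        if (excludeNamespaces != null && excludeNamespaces.contains(namespace)) {
            return false; // whole namespace excluded
        }
        if (excludeTableCFs == null || !excludeTableCFs.containsKey(table)) {
            return true; // table not mentioned, so it replicates
        }
        Collection<String> cfs = excludeTableCFs.get(table);
        if (cfs == null || cfs.isEmpty()) {
            return false; // null/empty cfs: the whole table is excluded
        }
        // Only the listed families are excluded; a null family asks whether
        // any family of the table still replicates.
        return family == null || !cfs.contains(family);
    }

    public static void main(String[] args) {
        Map<String, Collection<String>> exclude = new HashMap<>();
        exclude.put("t1", Arrays.asList("cf1"));
        System.out.println(needToReplicateExcludeMode(null, exclude, "default", "t1", "cf1")); // false
        System.out.println(needToReplicateExcludeMode(null, exclude, "default", "t1", "cf2")); // true
        System.out.println(needToReplicateExcludeMode(null, exclude, "default", "t1", null));  // true
    }
}
```

This is what lets the HFileRefs cleaner drop bulkload references for an excluded family instead of keeping them forever, which is the bug the commit title describes.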
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index d67a3f8..ae2d426 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
@@ -17,21 +17,26 @@
  */
 package org.apache.hadoop.hbase.replicat

[hbase] branch branch-2 updated: HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

2021-02-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 328ff8c  HBASE-25590 Bulkload replication HFileRefs cannot be cleared 
in some cases where set exclude-namespace/exclude-table-cfs (#2969)
328ff8c is described below

commit 328ff8c05a4ff8a0db3995c9a556acf0d3a10caa
Author: XinSun 
AuthorDate: Fri Feb 26 09:50:23 2021 +0800

HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases 
where set exclude-namespace/exclude-table-cfs (#2969)

Signed-off-by: Wellington Chevreuil 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  29 +-
 .../replication/TestReplicationPeerConfig.java | 366 -
 .../NamespaceTableCfWALEntryFilter.java|  84 +
 .../regionserver/ReplicationSource.java|  28 +-
 .../regionserver/TestBulkLoadReplication.java  |   8 +-
 .../TestBulkLoadReplicationHFileRefs.java  | 310 +
 6 files changed, 557 insertions(+), 268 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 030ae3d..534357a 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
+
 /**
  * A configuration for the replication peer cluster.
  */
@@ -372,6 +374,19 @@ public class ReplicationPeerConfig {
    * @return true if the table need replicate to the peer cluster
    */
   public boolean needToReplicate(TableName table) {
+    return needToReplicate(table, null);
+  }
+
+  /**
+   * Decide whether the passed family of the table need replicate to the peer cluster according to
+   * this peer config.
+   * @param table name of the table
+   * @param family family name
+   * @return true if (the family of) the table need replicate to the peer cluster.
+   *         If passed family is null, return true if any CFs of the table need replicate;
+   *         If passed family is not null, return true if the passed family need replicate.
+   */
+  public boolean needToReplicate(TableName table, byte[] family) {
     String namespace = table.getNamespaceAsString();
     if (replicateAllUserTables) {
       // replicate all user tables, but filter by exclude namespaces and table-cfs config
@@ -383,9 +398,12 @@ public class ReplicationPeerConfig {
         return true;
       }
       Collection<String> cfs = excludeTableCFsMap.get(table);
-      // if cfs is null or empty then we can make sure that we do not need to replicate this table,
+      // If cfs is null or empty then we can make sure that we do not need to replicate this table,
       // otherwise, we may still need to replicate the table but filter out some families.
-      return cfs != null && !cfs.isEmpty();
+      return cfs != null && !cfs.isEmpty()
+        // If exclude-table-cfs contains passed family then we make sure that we do not need to
+        // replicate this family.
+        && (family == null || !cfs.contains(Bytes.toString(family)));
     } else {
       // Not replicate all user tables, so filter by namespaces and table-cfs config
       if (namespaces == null && tableCFsMap == null) {
@@ -396,7 +414,12 @@ public class ReplicationPeerConfig {
       if (namespaces != null && namespaces.contains(namespace)) {
         return true;
       }
-      return tableCFsMap != null && tableCFsMap.containsKey(table);
+      // If table-cfs contains this table then we can make sure that we need replicate some CFs of
+      // this table. Further we need all CFs if tableCFsMap.get(table) is null or empty.
+      return tableCFsMap != null && tableCFsMap.containsKey(table)
+        && (family == null || CollectionUtils.isEmpty(tableCFsMap.get(table))
+        // If table-cfs must contain passed family then we need to replicate this family.
+        || tableCFsMap.get(table).contains(Bytes.toString(family)));
     }
   }
 }
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index d67a3f8..ae2d426 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
@@ -17,21 +17,26 @@
  */
 package org.apache.hadoop.hbase.repl

[hbase] branch branch-2.4 updated: HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

2021-02-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new db4f2bf  HBASE-25590 Bulkload replication HFileRefs cannot be cleared 
in some cases where set exclude-namespace/exclude-table-cfs (#2969)
db4f2bf is described below

commit db4f2bfa25b0fc7e32939c07d0de08e50d966819
Author: XinSun 
AuthorDate: Fri Feb 26 09:50:23 2021 +0800

HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases 
where set exclude-namespace/exclude-table-cfs (#2969)

Signed-off-by: Wellington Chevreuil 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  29 +-
 .../replication/TestReplicationPeerConfig.java | 366 -
 .../NamespaceTableCfWALEntryFilter.java|  84 +
 .../regionserver/ReplicationSource.java|  28 +-
 .../regionserver/TestBulkLoadReplication.java  |   8 +-
 .../TestBulkLoadReplicationHFileRefs.java  | 310 +
 6 files changed, 557 insertions(+), 268 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 030ae3d..534357a 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
+
 /**
  * A configuration for the replication peer cluster.
  */
@@ -372,6 +374,19 @@ public class ReplicationPeerConfig {
    * @return true if the table need replicate to the peer cluster
    */
   public boolean needToReplicate(TableName table) {
+    return needToReplicate(table, null);
+  }
+
+  /**
+   * Decide whether the passed family of the table need replicate to the peer cluster according to
+   * this peer config.
+   * @param table name of the table
+   * @param family family name
+   * @return true if (the family of) the table need replicate to the peer cluster.
+   *         If passed family is null, return true if any CFs of the table need replicate;
+   *         If passed family is not null, return true if the passed family need replicate.
+   */
+  public boolean needToReplicate(TableName table, byte[] family) {
     String namespace = table.getNamespaceAsString();
     if (replicateAllUserTables) {
       // replicate all user tables, but filter by exclude namespaces and table-cfs config
@@ -383,9 +398,12 @@ public class ReplicationPeerConfig {
         return true;
       }
       Collection<String> cfs = excludeTableCFsMap.get(table);
-      // if cfs is null or empty then we can make sure that we do not need to replicate this table,
+      // If cfs is null or empty then we can make sure that we do not need to replicate this table,
       // otherwise, we may still need to replicate the table but filter out some families.
-      return cfs != null && !cfs.isEmpty();
+      return cfs != null && !cfs.isEmpty()
+        // If exclude-table-cfs contains passed family then we make sure that we do not need to
+        // replicate this family.
+        && (family == null || !cfs.contains(Bytes.toString(family)));
     } else {
       // Not replicate all user tables, so filter by namespaces and table-cfs config
       if (namespaces == null && tableCFsMap == null) {
@@ -396,7 +414,12 @@ public class ReplicationPeerConfig {
       if (namespaces != null && namespaces.contains(namespace)) {
         return true;
       }
-      return tableCFsMap != null && tableCFsMap.containsKey(table);
+      // If table-cfs contains this table then we can make sure that we need replicate some CFs of
+      // this table. Further we need all CFs if tableCFsMap.get(table) is null or empty.
+      return tableCFsMap != null && tableCFsMap.containsKey(table)
+        && (family == null || CollectionUtils.isEmpty(tableCFsMap.get(table))
+        // If table-cfs must contain passed family then we need to replicate this family.
+        || tableCFsMap.get(table).contains(Bytes.toString(family)));
     }
   }
 }
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index d67a3f8..ae2d426 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
@@ -17,21 +17,26 @@
  */
 package org.apache.hadoop.hbase.

[hbase] branch branch-2.3 updated: HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

2021-02-25 Thread sunxin
This is an automated email from the ASF dual-hosted git repository.

sunxin pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new a60a065  HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)
a60a065 is described below

commit a60a065c9da0d785d06a9c3e262c094889a5afe4
Author: XinSun 
AuthorDate: Fri Feb 26 09:50:23 2021 +0800

    HBASE-25590 Bulkload replication HFileRefs cannot be cleared in some cases where set exclude-namespace/exclude-table-cfs (#2969)

Signed-off-by: Wellington Chevreuil 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  29 +-
 .../replication/TestReplicationPeerConfig.java | 366 -
 .../NamespaceTableCfWALEntryFilter.java|  84 +
 .../regionserver/ReplicationSource.java|  28 +-
 .../regionserver/TestBulkLoadReplication.java  |   8 +-
 .../TestBulkLoadReplicationHFileRefs.java  | 310 +
 6 files changed, 557 insertions(+), 268 deletions(-)

diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index 7c0f115..612a7fc 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -29,6 +29,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.CollectionUtils;
+
 /**
  * A configuration for the replication peer cluster.
  */
@@ -366,6 +368,19 @@ public class ReplicationPeerConfig {
* @return true if the table need replicate to the peer cluster
*/
   public boolean needToReplicate(TableName table) {
+    return needToReplicate(table, null);
+  }
+
+  /**
+   * Decide whether the passed family of the table need replicate to the peer cluster according to
+   * this peer config.
+   * @param table name of the table
+   * @param family family name
+   * @return true if (the family of) the table need replicate to the peer cluster.
+   *         If passed family is null, return true if any CFs of the table need replicate;
+   *         If passed family is not null, return true if the passed family need replicate.
+   */
+  public boolean needToReplicate(TableName table, byte[] family) {
 String namespace = table.getNamespaceAsString();
 if (replicateAllUserTables) {
      // replicate all user tables, but filter by exclude namespaces and table-cfs config
@@ -377,9 +392,12 @@ public class ReplicationPeerConfig {
         return true;
       }
       Collection<String> cfs = excludeTableCFsMap.get(table);
-      // if cfs is null or empty then we can make sure that we do not need to replicate this table,
+      // If cfs is null or empty then we can make sure that we do not need to replicate this table,
       // otherwise, we may still need to replicate the table but filter out some families.
-      return cfs != null && !cfs.isEmpty();
+      return cfs != null && !cfs.isEmpty()
+        // If exclude-table-cfs contains passed family then we make sure that we do not need to
+        // replicate this family.
+        && (family == null || !cfs.contains(Bytes.toString(family)));
 } else {
      // Not replicate all user tables, so filter by namespaces and table-cfs config
   if (namespaces == null && tableCFsMap == null) {
@@ -390,7 +408,12 @@ public class ReplicationPeerConfig {
       if (namespaces != null && namespaces.contains(namespace)) {
         return true;
       }
-      return tableCFsMap != null && tableCFsMap.containsKey(table);
+      // If table-cfs contains this table then we can make sure that we need replicate some CFs of
+      // this table. Further we need all CFs if tableCFsMap.get(table) is null or empty.
+      return tableCFsMap != null && tableCFsMap.containsKey(table)
+        && (family == null || CollectionUtils.isEmpty(tableCFsMap.get(table))
+          // If table-cfs must contain passed family then we need to replicate this family.
+          || tableCFsMap.get(table).contains(Bytes.toString(family)));
 }
   }
 }
diff --git a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index d67a3f8..ae2d426 100644
--- a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
@@ -17,21 +17,26 @@
  */
 package org.apache.hadoop.hbase.
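The exclude-table-cfs branch patched above is the fix's core: a table listed with a null or empty CF list is excluded entirely, while a non-empty list excludes only the listed families. As a standalone illustration, here is a simplified model using plain `String` table and family names in place of HBase's `TableName`/`byte[]`; the class and method names below are hypothetical sketches, not HBase API:

```java
import java.util.List;
import java.util.Map;

public class ExcludeTableCfsSketch {
  // Simplified model of the exclude-table-cfs branch of needToReplicate(table, family)
  // when replicate-all-user-tables is on.
  static boolean needToReplicate(Map<String, List<String>> excludeTableCFs,
                                 String table, String family) {
    if (excludeTableCFs == null || !excludeTableCFs.containsKey(table)) {
      return true; // table not excluded at all
    }
    List<String> cfs = excludeTableCFs.get(table);
    // Null or empty CF list: the whole table is excluded.
    return cfs != null && !cfs.isEmpty()
        // Only the listed families are dropped; any other family still replicates.
        && (family == null || !cfs.contains(family));
  }

  public static void main(String[] args) {
    Map<String, List<String>> exclude = new java.util.HashMap<>();
    exclude.put("t1", List.of("cf1"));
    exclude.put("t2", null); // null CF list: exclude the whole table
    System.out.println(needToReplicate(exclude, "t1", "cf1")); // false
    System.out.println(needToReplicate(exclude, "t1", "cf2")); // true
    System.out.println(needToReplicate(exclude, "t2", "cf1")); // false
    System.out.println(needToReplicate(exclude, "t3", null)); // true
  }
}
```

This per-family answer is what lets the bulkload replication path stop queueing HFile refs for families that the peer will never replicate.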

[hbase] branch branch-2.2 updated: HBASE-25605 Try ignore the ExportSnapshot related unit tests for branch-2.2 (#2985)

2021-02-25 Thread zghao
This is an automated email from the ASF dual-hosted git repository.

zghao pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new b08c133  HBASE-25605 Try ignore the ExportSnapshot related unit tests for branch-2.2 (#2985)
b08c133 is described below

commit b08c13364b476414f4da6e274813d869d38680ef
Author: Guanghao Zhang 
AuthorDate: Fri Feb 26 10:40:32 2021 +0800

    HBASE-25605 Try ignore the ExportSnapshot related unit tests for branch-2.2 (#2985)

Signed-off-by: Duo Zhang 
---
 .../test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java  | 2 ++
 .../org/apache/hadoop/hbase/snapshot/TestExportSnapshotNoCluster.java   | 2 ++
 .../hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java | 2 ++
 .../java/org/apache/hadoop/hbase/snapshot/TestMobExportSnapshot.java| 2 ++
 .../org/apache/hadoop/hbase/snapshot/TestMobSecureExportSnapshot.java   | 2 ++
 .../java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java | 2 ++
 .../java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java   | 2 +-
 7 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
index c988854..9b1320d 100644
--- a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
@@ -49,6 +49,7 @@ import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.ClassRule;
+import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -62,6 +63,7 @@ import org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.Snapshot
 /**
  * Test Export Snapshot Tool
  */
+@Ignore
 @Category({VerySlowMapReduceTests.class, LargeTests.class})
 public class TestExportSnapshot {
 
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotNoCluster.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotNoCluster.java
index f295e90..1c7ddf8 100644
--- a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotNoCluster.java
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotNoCluster.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.hbase.testclassification.MediumTests;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.junit.Before;
 import org.junit.ClassRule;
+import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
@@ -42,6 +43,7 @@ import org.slf4j.LoggerFactory;
 /**
  * Test Export Snapshot Tool
  */
+@Ignore
 @Category({MapReduceTests.class, MediumTests.class})
 public class TestExportSnapshotNoCluster {
 
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java
index fca5358..d7a4ff2 100644
--- a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java
@@ -28,8 +28,10 @@ import org.apache.hadoop.hbase.testclassification.MediumTests;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.ClassRule;
+import org.junit.Ignore;
 import org.junit.experimental.categories.Category;
 
+@Ignore
 @Category({MediumTests.class})
 public class TestExportSnapshotWithTemporaryDirectory extends TestExportSnapshot {
 
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobExportSnapshot.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobExportSnapshot.java
index 59cdf4d..38ef1e8 100644
--- a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobExportSnapshot.java
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobExportSnapshot.java
@@ -26,11 +26,13 @@ import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests;
 import org.junit.BeforeClass;
 import org.junit.ClassRule;
+import org.junit.Ignore;
 import org.junit.experimental.categories.Category;
 
 /**
  * Test Export Snapshot Tool
  */
+@Ignore
 @Category({VerySlowRegionServerTests.class, LargeTests.class})
 public class TestMobExportSnapshot extends TestExportSnapshot {
 
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobSecureExportSnapshot.java b/