Apache9 commented on code in PR #5203:
URL: https://github.com/apache/hbase/pull/5203#discussion_r1185751247


##########
src/main/asciidoc/_chapters/ops_mgt.adoc:
##########
@@ -2454,6 +2450,12 @@ The `RS` Znode::
   The child znode name is the region server's hostname, client port, and start code.
   This list includes both live and dead region servers.
 
+[[hbase:replication]]
+hbase:replication::

Review Comment:
   Use "The `hbase:replication` Table"?



##########
src/main/asciidoc/_chapters/ops_mgt.adoc:
##########
@@ -2429,17 +2429,13 @@ This option was introduced in link:https://issues.apache.org/jira/browse/HBASE-1
 ==== Replication Internals
 
 Replication State Storage::
-  In HBASE-15867, we abstract two interfaces for storing replication state,
-`ReplicationPeerStorage` and `ReplicationQueueStorage`. The former one is for storing the
-replication peer related states, and the latter one is for storing the replication queue related
-states.
-  HBASE-15867 is only half done, as although we have abstract these two interfaces, we still only

Review Comment:
   Should the paragraph before here be kept?
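   For context, a minimal in-memory sketch of the two abstractions from HBASE-15867 that the paragraph in question describes. This is illustrative only; the real interfaces live in `org.apache.hadoop.hbase.replication` and have richer signatures than shown here.
   
   ```java
   import java.util.*;

   // Illustrative shapes only, not the real HBase signatures.
   interface ReplicationPeerStorage {
       void addPeer(String peerId, String clusterKey);
       List<String> listPeerIds();
   }

   interface ReplicationQueueStorage {
       void addWAL(String serverName, String peerId, String wal);
       List<String> getWALsInQueue(String serverName, String peerId);
   }

   public class StorageSketch {
       // trivial stand-ins, just to make the peer/queue split concrete
       static class InMemoryPeers implements ReplicationPeerStorage {
           private final Map<String, String> peers = new LinkedHashMap<>();
           public void addPeer(String peerId, String clusterKey) { peers.put(peerId, clusterKey); }
           public List<String> listPeerIds() { return new ArrayList<>(peers.keySet()); }
       }

       static class InMemoryQueues implements ReplicationQueueStorage {
           private final Map<String, List<String>> queues = new HashMap<>();
           public void addWAL(String serverName, String peerId, String wal) {
               queues.computeIfAbsent(serverName + "/" + peerId, k -> new ArrayList<>()).add(wal);
           }
           public List<String> getWALsInQueue(String serverName, String peerId) {
               return queues.getOrDefault(serverName + "/" + peerId, List.of());
           }
       }

       public static void main(String[] args) {
           ReplicationPeerStorage peers = new InMemoryPeers();
           ReplicationQueueStorage queues = new InMemoryQueues();
           peers.addPeer("peer1", "zk1:2181:/hbase");
           queues.addWAL("rs1,16020,1", "peer1", "rs1%2C16020%2C1.1234");
           System.out.println(peers.listPeerIds());          // prints [peer1]
           System.out.println(queues.getWALsInQueue("rs1,16020,1", "peer1"));
       }
   }
   ```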



##########
src/main/asciidoc/_chapters/ops_mgt.adoc:
##########
@@ -2519,12 +2533,12 @@ The next time the cleaning process needs to look for a log, it starts by using i
 NOTE: WALs are saved when replication is enabled or disabled as long as peers exist.
 
 [[rs.failover.details]]
-==== Region Server Failover
+==== Region Server Failover(based on ZooKeeper)

Review Comment:
   Here we do not need to use two different sections. First, we could mention the 'setting a watcher' way; this is how failover worked in the old days.
   
   Starting from 2.5.0, the failover logic has been moved to SCP, where we add a `SERVER_CRASH_CLAIM_REPLICATION_QUEUES` step in SCP to claim the replication queues of a dead server.
   
   Starting from 3.0.0, where we changed the replication queue storage from zookeeper to table, the update to the replication queue storage is async, so we also need an extra step to add the missing replication queues before claiming.
   
   On how to claim the replication queues, you can have two sections describing the layout and the claiming approach for the zookeeper based implementation and the table based implementation.
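   A minimal sketch of the 3.0.0+ flow described above: because updates to the table-based replication queue storage are async, the master first adds any missing queue entries for the dead server, then claims them for a live server. All names and the `server/peer` key layout here are illustrative assumptions, not the real HBase API.
   
   ```java
   import java.util.*;

   // Hypothetical model of "add missing queues, then claim" for a crashed
   // region server. A TreeMap stands in for the replication queue storage.
   public class ClaimSketch {
       // queueId ("server/peer") -> region server currently owning the queue
       static Map<String, String> queueStorage = new TreeMap<>();

       static void addMissingQueues(String deadServer, List<String> peerIds) {
           for (String peer : peerIds) {
               // async writes may not have reached storage yet; fill the gaps
               queueStorage.putIfAbsent(deadServer + "/" + peer, deadServer);
           }
       }

       static void claimQueues(String deadServer, String liveServer) {
           for (Map.Entry<String, String> e : queueStorage.entrySet()) {
               if (e.getValue().equals(deadServer)) {
                   e.setValue(liveServer); // hand the queue to a live server
               }
           }
       }

       public static void main(String[] args) {
           queueStorage.put("rs1.example.com/peer1", "rs1.example.com");
           // the "peer2" queue existed in memory but its async write was lost
           addMissingQueues("rs1.example.com", List.of("peer1", "peer2"));
           claimQueues("rs1.example.com", "rs2.example.com");
           System.out.println(queueStorage); // both queues now owned by rs2
       }
   }
   ```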



##########
src/main/asciidoc/_chapters/ops_mgt.adoc:
##########
@@ -2494,6 +2496,18 @@ If the log is in the queue, the path will be updated in memory.
 If the log is currently being replicated, the change will be done atomically so that the reader doesn't attempt to open the file when has already been moved.
 Because moving a file is a NameNode operation , if the reader is currently reading the log, it won't generate any exception.
 
+==== Keeping Track of Logs(based on hbase table)
+
+After 3.0.0, for table based implementation, we have server name in row key, which means we will have lots of rows for a given peer.
+
+For a normal replication queue, where the WAL files belong to it is still alive, all the WAL files are kept in memory, so we do not need to get the WAL files from replication queue storage.

Review Comment:
   "the region server is still alive"
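   To make the "lots of rows for a given peer" point concrete, a sketch using a sorted map as a stand-in for the `hbase:replication` table, with a hypothetical `<peer>/<server>` row key layout (the real key layout may differ). Because the server name is part of the row key, one peer fans out into one row per region server, and gathering a peer's rows is a prefix scan.
   
   ```java
   import java.util.*;

   // Illustrative only: sorted row keys model the table-based queue storage.
   public class RowKeySketch {
       static NavigableMap<String, String> table = new TreeMap<>();

       // prefix "scan" over the hypothetical "<peer>/<server>" keys
       static SortedMap<String, String> rowsForPeer(String peer) {
           return table.subMap(peer + "/", peer + "/\uffff");
       }

       public static void main(String[] args) {
           for (String rs : List.of("rs1,16020,1", "rs2,16020,1", "rs3,16020,1")) {
               table.put("peer1/" + rs, "offset-for-" + rs);
           }
           // one row per region server for the same peer
           System.out.println(rowsForPeer("peer1").size()); // prints 3
       }
   }
   ```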



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
