ivandika3 commented on code in PR #8600:
URL: https://github.com/apache/ozone/pull/8600#discussion_r2145420917


##########
hadoop-hdds/docs/content/feature/OM-HA.md:
##########
@@ -125,7 +125,23 @@ ozone om [global options (optional)] --bootstrap --force
 
 Note that using the _force_ option during bootstrap could crash the OM process if it does not have updated configurations.
 
+## Automatic Snapshot Installation for Stale Ozone Managers
+
+Sometimes an OM follower node may be offline or fall so far behind the leader OM's log that it cannot catch up by replaying individual log entries. The OM HA implementation includes an automatic snapshot installation and recovery process for such cases.
+
+How it works:
+
+1. Leader determines that the follower is too far behind.
+2. Leader notifies the follower to catch up via snapshot.
+3. The follower downloads and installs the latest snapshot from the leader.
+4. After installing the snapshot, the follower OM resumes normal operation and log replication from the new state.
+
+This logic is implemented in the [`OzoneManagerStateMachine.notifyInstallSnapshotFromLeader()` method](https://github.com/apache/ozone/blob/931bc2d8a9e8e8595bb49034c03c14e2b15be865/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java#L521-L541).
+
+In most scenarios, stale OMs will recover automatically, even if they have missed a large number of operations. Manual intervention (such as running `ozone om --bootstrap`) is only required when adding a new OM node to the cluster or when explicitly requested by support instructions.

Review Comment:
   AFAIK `ozone om --bootstrap` is only used for adding an OM and cannot be used as a "Manual intervention" to catch up with the leader.
   
   For reference, the way we did it on our side was to clean the Ratis log directory and the OM DB, and then resync the OM DB from the leader (using rsync).
   
   1. Stop the OM follower:
      ozone --daemon stop om
    
   2. Back up the follower's metadata:
      timestamp=$(date '+%Y-%m-%dT%H_%M')
      mv /hadoop/ozone/meta/om/db/om.db /hadoop/ozone/meta/om/db/om.db.bak_${timestamp}
      mv /hadoop/ozone/meta/om/ratis /hadoop/ozone/meta/om/ratis_${timestamp}
      mv /hadoop/ozone/meta/om/ratis-snapshot /hadoop/ozone/meta/om/ratis-snapshot_${timestamp}
      mkdir /hadoop/ozone/meta/om/ratis /hadoop/ozone/meta/om/ratis-snapshot
   
   3. Set `ozone.om.ratis.log.purge.preservation.log.num=0` to prevent the bug described in RATIS-2186 after startup (note the original value, since the configuration will need to be reverted later).
   
   4. Rsync the OM leader's metadata to the follower (execute multiple times to ensure data consistency):
      rsync -av --delete /hadoop/ozone/meta/om/db/om.db ${OM_FOLLOWER_ADDRESS}:/hadoop/ozone/meta/om/db/
   
   5. Restart the OM follower:
      ozone --daemon start om
   
   6. Wait until the first "purge" message appears in the OM log:
      2024-11-08 08:01:54,317 [om4@group-13A745F1EB59-StateMachineUpdater] INFO org.apache.ratis.server.raftlog.RaftLog: om4@group-13A745F1EB59-SegmentedRaftLog: purge 88499622476
   
   7. Change `ozone.om.ratis.log.purge.preservation.log.num` back to the original value.
   
   8. Restart the OM follower again (ensure that all 3 OMs are up and the current OM is not the leader):
      ozone --daemon stop om; sleep 3; ozone --daemon start om
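   The steps above can be sketched as a single runbook script. This is a hedged sketch, not an official Ozone tool: the metadata paths, the `OM_FOLLOWER_ADDRESS` variable, and the `run`/`DRY_RUN` helper are assumptions taken from this comment for illustration, and the two `ozone-site.xml` edits plus the log check remain manual steps. It defaults to dry-run mode so it only prints the commands it would execute.

   ```shell
   #!/usr/bin/env bash
   # Hypothetical sketch of the resync runbook from this comment; adapt paths
   # and hosts to the actual deployment before running for real.
   set -euo pipefail

   OM_META=/hadoop/ozone/meta/om                                   # path assumed from the comment
   OM_FOLLOWER_ADDRESS=${OM_FOLLOWER_ADDRESS:-om-follower.example} # hypothetical follower host
   DRY_RUN=${DRY_RUN:-1}                                           # keep 1 to only print commands

   run() {
     # Print the command in dry-run mode, execute it otherwise.
     if [ "${DRY_RUN}" = 1 ]; then echo "would run: $*"; else "$@"; fi
   }

   timestamp=$(date '+%Y-%m-%dT%H_%M')

   # 1. Stop the follower (in practice, executed on the follower host).
   run ozone --daemon stop om

   # 2. Back up the follower's OM DB and Ratis directories.
   run mv "${OM_META}/db/om.db" "${OM_META}/db/om.db.bak_${timestamp}"
   run mv "${OM_META}/ratis" "${OM_META}/ratis_${timestamp}"
   run mv "${OM_META}/ratis-snapshot" "${OM_META}/ratis-snapshot_${timestamp}"
   run mkdir "${OM_META}/ratis" "${OM_META}/ratis-snapshot"

   # 3. Manual step: set ozone.om.ratis.log.purge.preservation.log.num=0 in
   #    ozone-site.xml (note the original value for step 7).

   # 4. Push the leader's OM DB to the follower; repeat until nothing transfers.
   run rsync -av --delete "${OM_META}/db/om.db" "${OM_FOLLOWER_ADDRESS}:${OM_META}/db/"

   # 5. Restart the follower, then (6.) watch its log for the first Ratis
   #    "purge" message before continuing.
   run ozone --daemon start om

   # 7. Manual step: restore the original purge.preservation.log.num value.
   # 8. Restart the follower once more so the reverted configuration applies.
   run ozone --daemon stop om
   run sleep 3
   run ozone --daemon start om
   ```

   Keeping `DRY_RUN=1` makes the script safe to review on any machine; only set `DRY_RUN=0` on the leader node once the paths and follower address have been verified.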



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

