adoroszlai commented on code in PR #8600:
URL: https://github.com/apache/ozone/pull/8600#discussion_r2141705742
##########
hadoop-hdds/docs/content/feature/OM-HA.md:
##########
@@ -125,7 +125,31 @@ ozone om [global options (optional)] --bootstrap --force
 
 Note that using the _force_ option during bootstrap could crash the OM process if it does not have updated configurations.
 
+## Automatic Snapshot Installation for Stale Ozone Managers
+
+In an Ozone Manager (OM) High Availability (HA) cluster, all OM nodes maintain a consistent metadata state using the Ratis consensus protocol. Sometimes, an OM follower node may be offline or fall so far behind the leader OM’s log that it cannot catch up by replaying individual log entries.
+
+The OM HA implementation includes an **automatic snapshot installation and recovery process** for such cases:
+
+- **Snapshot Installation Trigger:**
+When a follower OM falls significantly behind and is unable to catch up with the leader OM through standard log replication, the Ratis consensus layer on the leader OM may determine that a snapshot installation is necessary. The leader then notifies the follower, and the snapshot installation on the follower is handled by its `OzoneManagerStateMachine`.
+
+- **How it works:**
+  - The follower OM receives a snapshot installation notification from the leader via the consensus protocol.
+  - The follower OM then downloads and installs the latest consistent checkpoint (snapshot) from the leader OM.
+  - After installing the snapshot, the follower OM resumes normal operation and log replication from the new state.
+
+- **Relevant Implementation:**
+  This logic is implemented in the `OzoneManagerStateMachine.notifyInstallSnapshotFromLeader()` method. The install is triggered automatically by the consensus layer (Ratis) when it detects that a follower cannot catch up by log replay alone.
+
+- **What this means for administrators:**
+  - In most scenarios, stale OMs—whether they were temporarily offline or simply fell too far behind the leader while remaining online—will recover automatically, even if they have missed a large number of operations.
+  - Manual intervention (such as running `ozone om --bootstrap`) is only required when adding a new OM node to the cluster or when explicitly requested by support instructions.
+

Review Comment:
- don't repeat yourself
- "cases" are not listed after this, so ":" at the end is not appropriate
- use permalink for code, otherwise it will point to the wrong location after changes to `OzoneManagerStateMachine`

```suggestion
Sometimes an OM follower node may be offline or fall so far behind the leader OM's log that it cannot catch up by replaying individual log entries.

The OM HA implementation includes an automatic snapshot installation and recovery process for such cases.

How it works:

1. Leader determines that the follower is too far behind.
2. Leader notifies the follower to catch up via snapshot.
3. The follower downloads and installs the latest snapshot from the leader.
4. After installing the snapshot, the follower OM resumes normal operation and log replication from the new state.

This logic is implemented in the [`OzoneManagerStateMachine.notifyInstallSnapshotFromLeader()` method](https://github.com/apache/ozone/blob/931bc2d8a9e8e8595bb49034c03c14e2b15be865/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java#L521-L541).

In most scenarios, stale OMs will recover automatically, even if they have missed a large number of operations.

Manual intervention (such as running `ozone om --bootstrap`) is only required when adding a new OM node to the cluster or when explicitly requested by support instructions.
```

##########
hadoop-hdds/docs/content/feature/OM-HA.md:
##########
@@ -125,7 +125,31 @@ ozone om [global options (optional)] --bootstrap --force
 
 Note that using the _force_ option during bootstrap could crash the OM process if it does not have updated configurations.
 
+## Automatic Snapshot Installation for Stale Ozone Managers
+
+In an Ozone Manager (OM) High Availability (HA) cluster, all OM nodes maintain a consistent metadata state using the Ratis consensus protocol. Sometimes, an OM follower node may be offline or fall so far behind the leader OM’s log that it cannot catch up by replaying individual log entries.
+
+The OM HA implementation includes an **automatic snapshot installation and recovery process** for such cases:
+
+- **Snapshot Installation Trigger:**
+When a follower OM falls significantly behind and is unable to catch up with the leader OM through standard log replication, the Ratis consensus layer on the leader OM may determine that a snapshot installation is necessary. The leader then notifies the follower, and the snapshot installation on the follower is handled by its `OzoneManagerStateMachine`.
+
+- **How it works:**
+  - The follower OM receives a snapshot installation notification from the leader via the consensus protocol.
+  - The follower OM then downloads and installs the latest consistent checkpoint (snapshot) from the leader OM.
+  - After installing the snapshot, the follower OM resumes normal operation and log replication from the new state.
+
+- **Relevant Implementation:**
+  This logic is implemented in the `OzoneManagerStateMachine.notifyInstallSnapshotFromLeader()` method. The install is triggered automatically by the consensus layer (Ratis) when it detects that a follower cannot catch up by log replay alone.
+
+- **What this means for administrators:**
+  - In most scenarios, stale OMs—whether they were temporarily offline or simply fell too far behind the leader while remaining online—will recover automatically, even if they have missed a large number of operations.
+  - Manual intervention (such as running `ozone om --bootstrap`) is only required when adding a new OM node to the cluster or when explicitly requested by support instructions.
+
+
 ## References
 
 * Check [this page]({{< ref "design/omha.md" >}}) for the links to the original design docs
 * Ozone distribution contains an example OM HA configuration, under the `compose/ozone-om-ha` directory which can be tested with the help of [docker-compose]({{< ref "start/RunningViaDocker.md" >}}).
+* [OzoneManagerStateMachine.notifyInstallSnapshotFromLeader source code](https://github.com/apache/ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java#L530)

Review Comment:
Suggesting inline link.

```suggestion
```

##########
hadoop-hdds/docs/content/feature/OM-HA.md:
##########
@@ -125,7 +125,31 @@ ozone om [global options (optional)] --bootstrap --force
 
 Note that using the _force_ option during bootstrap could crash the OM process if it does not have updated configurations.
 
+## Automatic Snapshot Installation for Stale Ozone Managers
+
+In an Ozone Manager (OM) High Availability (HA) cluster, all OM nodes maintain a consistent metadata state using the Ratis consensus protocol. Sometimes, an OM follower node may be offline or fall so far behind the leader OM’s log that it cannot catch up by replaying individual log entries.
+
+The OM HA implementation includes an **automatic snapshot installation and recovery process** for such cases:
+
+- **Snapshot Installation Trigger:**
+When a follower OM falls significantly behind and is unable to catch up with the leader OM through standard log replication, the Ratis consensus layer on the leader OM may determine that a snapshot installation is necessary. The leader then notifies the follower, and the snapshot installation on the follower is handled by its `OzoneManagerStateMachine`.
+
+- **How it works:**
+  - The follower OM receives a snapshot installation notification from the leader via the consensus protocol.
+  - The follower OM then downloads and installs the latest consistent checkpoint (snapshot) from the leader OM.
+  - After installing the snapshot, the follower OM resumes normal operation and log replication from the new state.
+
+- **Relevant Implementation:**
+  This logic is implemented in the `OzoneManagerStateMachine.notifyInstallSnapshotFromLeader()` method. The install is triggered automatically by the consensus layer (Ratis) when it detects that a follower cannot catch up by log replay alone.
+
+- **What this means for administrators:**
+  - In most scenarios, stale OMs—whether they were temporarily offline or simply fell too far behind the leader while remaining online—will recover automatically, even if they have missed a large number of operations.
+  - Manual intervention (such as running `ozone om --bootstrap`) is only required when adding a new OM node to the cluster or when explicitly requested by support instructions.
+
+
 ## References
 
 * Check [this page]({{< ref "design/omha.md" >}}) for the links to the original design docs
 * Ozone distribution contains an example OM HA configuration, under the `compose/ozone-om-ha` directory which can be tested with the help of [docker-compose]({{< ref "start/RunningViaDocker.md" >}}).
+* [OzoneManagerStateMachine.notifyInstallSnapshotFromLeader source code](https://github.com/apache/ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java#L530)
+* [Apache Ratis State Machine API documentation](https://github.com/apache/ratis/blob/master/ratis-server-api/src/main/java/org/apache/ratis/statemachine/StateMachine.java)

Review Comment:
```suggestion
* [Apache Ratis State Machine API documentation](https://github.com/apache/ratis/blob/3612bcaf7d3e48a658935fc8b250e5d3b35df174/ratis-server-api/src/main/java/org/apache/ratis/statemachine/StateMachine.java)
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
For additional commands, e-mail: [email protected]
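As a side note, the trigger condition discussed throughout this review can be illustrated with a small standalone sketch. This is not Ozone or Ratis source code (the class and method names below are hypothetical); it only demonstrates the generic Raft-style rule the docs describe: once the leader has compacted its log into a snapshot, a follower that still needs one of the compacted entries cannot catch up by log replay and must install the snapshot instead.

```java
// Hypothetical standalone sketch of the snapshot-install trigger; not the
// actual OzoneManagerStateMachine/Ratis implementation.
public class SnapshotTriggerSketch {

    /**
     * The leader can only replicate entries it still retains. If the
     * follower's next needed index precedes the leader's first retained
     * (non-compacted) index, log replay is impossible and the follower
     * must install the leader's snapshot.
     */
    static boolean needsSnapshotInstall(long followerNextIndex, long firstRetainedIndex) {
        return followerNextIndex < firstRetainedIndex;
    }

    public static void main(String[] args) {
        // Follower needs entry 100, but the leader's log now starts at 5000:
        // those entries only exist inside the snapshot.
        System.out.println(needsSnapshotInstall(100, 5000));  // true

        // Follower needs entry 5200 and the leader retains from 5000:
        // ordinary log replication suffices.
        System.out.println(needsSnapshotInstall(5200, 5000)); // false
    }
}
```

After the install, the follower resumes ordinary append-entries replication from the snapshot's last applied index, which is why the suggested doc text ends with "resumes normal operation and log replication from the new state."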
