devmadhuu commented on code in PR #9258:
URL: https://github.com/apache/ozone/pull/9258#discussion_r2965779510


##########
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerHealthSchemaManager.java:
##########
@@ -67,124 +82,470 @@ public ContainerHealthSchemaManager(
   }
 
   /**
-   * Get a batch of unhealthy containers, starting at offset and returning
-   * limit records. If a null value is passed for state, then unhealthy
-   * containers in all states will be returned. Otherwise, only containers
-   * matching the given state will be returned.
-   * @param state Return only containers in this state, or all containers if
-   *              null
-   * @param minContainerId minimum containerId for filter
-   * @param maxContainerId maximum containerId for filter
-   * @param limit The total records to return
-   * @return List of unhealthy containers.
+   * Insert or update unhealthy container records in UNHEALTHY_CONTAINERS table using TRUE batch insert.
+   * Uses JOOQ's batch API for optimal performance (single SQL statement for all records).
+   * Falls back to individual insert-or-update if batch insert fails (e.g., duplicate keys).
    */
-  public List<UnhealthyContainers> getUnhealthyContainers(
-      UnHealthyContainerStates state, Long minContainerId, Optional<Long> maxContainerId, int limit) {
+  public void insertUnhealthyContainerRecords(List<UnhealthyContainerRecord> recs) {
+    if (recs == null || recs.isEmpty()) {
+      return;
+    }
+
+    if (LOG.isDebugEnabled()) {
+      recs.forEach(rec -> LOG.debug("rec.getContainerId() : {}, rec.getContainerState(): {}",
+          rec.getContainerId(), rec.getContainerState()));
+    }
+
     DSLContext dslContext = containerSchemaDefinition.getDSLContext();
+
+    try {
+      dslContext.transaction(configuration ->
+          batchInsertInChunks(configuration.dsl(), recs));
+
+      LOG.debug("Batch inserted {} unhealthy container records", recs.size());
+
+    } catch (DataAccessException e) {
+      // Batch insert failed (likely duplicate key) - fall back to insert-or-update per record
+      LOG.warn("Batch insert failed, falling back to individual insert-or-update for {} records",
+          recs.size(), e);
+      fallbackInsertOrUpdate(recs);

Review Comment:
   Agree on the transaction-safety concern. In the current flow, delete+insert is executed atomically via `replaceUnhealthyContainerRecordsAtomically`, so we no longer have the partial-visibility gap between delete and insert. Also, in this path the duplicate-key fallback is not expected, because the rows are deleted before the insert.
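   For readers following the thread: the chunk-partitioning step that a helper like `batchInsertInChunks` would perform can be sketched in plain Java, independent of jOOQ. This is a hypothetical illustration of the chunking logic only (the class and method names here are invented for the sketch, not taken from the PR):

```java
import java.util.ArrayList;
import java.util.List;

public final class ChunkUtil {

  /**
   * Partition a list into fixed-size chunks; the last chunk may be smaller.
   * Each chunk could then be handed to one jOOQ batch statement.
   */
  static <T> List<List<T>> partition(List<T> recs, int chunkSize) {
    List<List<T>> chunks = new ArrayList<>();
    for (int i = 0; i < recs.size(); i += chunkSize) {
      // Copy the sublist so each chunk is independent of the source list.
      chunks.add(new ArrayList<>(recs.subList(i, Math.min(i + chunkSize, recs.size()))));
    }
    return chunks;
  }

  public static void main(String[] args) {
    List<Integer> ids = List.of(1, 2, 3, 4, 5);
    System.out.println(partition(ids, 2)); // [[1, 2], [3, 4], [5]]
  }
}
```

   Running all chunks inside a single `dslContext.transaction(...)` block, as the diff does, keeps the delete+insert sequence atomic even when it spans multiple batch statements.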



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

