[ 
https://issues.apache.org/jira/browse/HDDS-14871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-14871:
----------------------------------
    Labels: pull-request-available  (was: )

> DataNode: tolerate per-volume health-check latch timeouts before marking 
> volumes failed
> ---------------------------------------------------------------------------------------
>
>                 Key: HDDS-14871
>                 URL: https://issues.apache.org/jira/browse/HDDS-14871
>             Project: Apache Ozone
>          Issue Type: Task
>          Components: Ozone Datanode
>            Reporter: Devesh Kumar Singh
>            Assignee: Devesh Kumar Singh
>            Priority: Major
>              Labels: pull-request-available
>
> *Problem*
> `StorageVolumeChecker.checkAllVolumes()` waits on a single `CountDownLatch` 
> for all volume health checks to complete. If the latch times out before a 
> volume's check has reported a result, that volume is **immediately marked 
> FAILED** with zero tolerance, so even a transient stall produces 
> false-positive volume failures.
> The existing per-volume IO-failure sliding window in `StorageVolume.check()` 
> does not mitigate this, because it only applies when a check **completes**, 
> not when the latch times out.
> *Solution*
> Add a per-volume consecutive latch-timeout counter 
> (`consecutiveTimeoutCount`) to `StorageVolume`. When the `checkAllVolumes()` 
> latch expires and a volume has not yet reported a result, its counter is 
> incremented; the volume is added to the failed set only when the count 
> exceeds `hdds.datanode.disk.check.timeout.tolerated`. A successful check 
> resets the counter to 0.
> Volumes that explicitly return `FAILED` from `check()` (genuine IO failures, 
> missing directory, bad permissions) are unaffected and continue to fail 
> immediately.
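The proposed tolerance logic can be sketched as below. This is a minimal, self-contained illustration: the `VolumeState` class, its method names, and the threshold value are assumptions for the sake of the example, not the actual Ozone implementation.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical per-volume state holding the consecutive latch-timeout
// counter described above.
class VolumeState {
    final String path;
    final AtomicInteger consecutiveTimeoutCount = new AtomicInteger();

    VolumeState(String path) { this.path = path; }

    // Called when the checkAllVolumes() latch expires and this volume has
    // not yet reported a result. Returns true (mark the volume failed)
    // only once the number of consecutive timeouts exceeds the tolerated
    // threshold (hdds.datanode.disk.check.timeout.tolerated).
    boolean onLatchTimeout(int tolerated) {
        return consecutiveTimeoutCount.incrementAndGet() > tolerated;
    }

    // A completed, successful check resets the counter to 0.
    void onSuccessfulCheck() { consecutiveTimeoutCount.set(0); }
}

public class TimeoutToleranceSketch {
    public static void main(String[] args) {
        int tolerated = 2;  // assumed config value for illustration
        VolumeState v = new VolumeState("/data1");
        System.out.println(v.onLatchTimeout(tolerated)); // false (count = 1)
        System.out.println(v.onLatchTimeout(tolerated)); // false (count = 2)
        System.out.println(v.onLatchTimeout(tolerated)); // true  (count = 3 > 2)
        v.onSuccessfulCheck();
        System.out.println(v.onLatchTimeout(tolerated)); // false (counter was reset)
    }
}
```

Note that volumes whose `check()` explicitly returns `FAILED` would bypass this counter entirely and fail immediately, as the description states.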



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
