[ https://issues.apache.org/jira/browse/HDDS-11953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDDS-11953:
----------------------------------
    Labels: pull-request-available  (was: )

> Ozone Recon - Improve Recon OM sync process based on continuous pull of OM 
> data
> -------------------------------------------------------------------------------
>
>                 Key: HDDS-11953
>                 URL: https://issues.apache.org/jira/browse/HDDS-11953
>             Project: Apache Ozone
>          Issue Type: Improvement
>            Reporter: Devesh Kumar Singh
>            Assignee: Devesh Kumar Singh
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, Recon syncs with OM based on the following configs:
>  # ozone.recon.om.snapshot.task.interval.delay -> 5s
>  # recon.om.delta.update.limit -> 50000
>  # recon.om.delta.update.loop.limit -> 50
> The above are the recommended default configs for a high write TPS workload 
> (approx. 5k TPS) to achieve near-real-time sync between Recon and OM data. 
> However, OM may reach higher TPS targets in the future, and Recon will need 
> to pull data continuously to keep lag and latency to a minimum. 
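> 
> For reference, a minimal sketch of these settings in ozone-site.xml, using 
> the Hadoop-style property format that Ozone configuration files follow (the 
> comments are illustrative descriptions, not official documentation):
> {code:xml}
> <!-- Delay between runs of Recon's periodic OM snapshot/delta sync task -->
> <property>
>   <name>ozone.recon.om.snapshot.task.interval.delay</name>
>   <value>5s</value>
> </property>
> 
> <!-- Maximum number of delta updates fetched from OM in a single request -->
> <property>
>   <name>recon.om.delta.update.limit</name>
>   <value>50000</value>
> </property>
> 
> <!-- Maximum number of delta-update fetch loops per sync task run -->
> <property>
>   <name>recon.om.delta.update.loop.limit</name>
>   <value>50</value>
> </property>
> {code}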
> For this, Recon needs different logic for pulling OM DB data based on a 
> threshold: if the gap between the OM DB sequence number and the Recon OM DB 
> snapshot sequence number crosses a certain threshold, Recon will send a 
> request to OM to pull the delta updates. If TPS is so high that the gap 
> keeps breaching the threshold, Recon will effectively pull data from OM 
> continuously. Continuous pulling alone will not reduce the lag and latency; 
> Recon also needs to optimize the processing speed of the Recon OM background 
> tasks, which will be tracked under a separate JIRA.
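> 
> Below is a minimal, hypothetical Java sketch of the proposed threshold 
> check; class and method names such as ReconOmSyncLoop, 
> getOmDbSequenceNumber, and pullDeltaUpdatesFromOm are illustrative 
> assumptions, not actual Recon APIs:
> {code:java}
> /**
>  * Hypothetical sketch of a threshold-driven Recon-to-OM sync loop.
>  * All names are illustrative; they do not reflect real Recon classes.
>  */
> public class ReconOmSyncLoop {
> 
>   // Assumed threshold on the sequence-number gap between OM and Recon.
>   private final long lagThreshold;
>   // Fallback delay when the gap is within the threshold (e.g. 5s).
>   private final long intervalDelayMs;
> 
>   public ReconOmSyncLoop(long lagThreshold, long intervalDelayMs) {
>     this.lagThreshold = lagThreshold;
>     this.intervalDelayMs = intervalDelayMs;
>   }
> 
>   public void run() throws InterruptedException {
>     while (true) {
>       long omSeq = getOmDbSequenceNumber();       // latest OM DB sequence number
>       long reconSeq = getReconDbSequenceNumber(); // Recon's snapshot sequence number
> 
>       if (omSeq - reconSeq > lagThreshold) {
>         // Gap crossed the threshold: pull delta updates immediately. If
>         // write TPS keeps the gap above the threshold, this branch repeats
>         // back-to-back and the loop becomes a continuous pull.
>         pullDeltaUpdatesFromOm(reconSeq);
>       } else {
>         // Within the threshold: fall back to the periodic interval delay.
>         Thread.sleep(intervalDelayMs);
>       }
>     }
>   }
> 
>   // Illustrative stubs standing in for the real OM/Recon RPC interactions.
>   private long getOmDbSequenceNumber() { return 0L; }
>   private long getReconDbSequenceNumber() { return 0L; }
>   private void pullDeltaUpdatesFromOm(long fromSequenceNumber) { }
> }
> {code}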



