[
https://issues.apache.org/jira/browse/HDDS-6025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17515166#comment-17515166
]
Kaijie Chen commented on HDDS-6025:
-----------------------------------
{quote}The data from last acknowledged length will be retried on a new pipeline
with new set of datanodes.
{quote}
SCM only recognizes containers, so we need to make sure the blocks are the
same among the 3 container replicas.
Otherwise, we would have to implement special logic to recover missing data,
as EC does.
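To make the retry behaviour under discussion concrete, here is a minimal Python sketch of the path the issue describes: on a datanode failure, a new block is allocated on a new pipeline and only the unacknowledged data is rewritten. All class and method names here are hypothetical, for illustration only; this is not Ozone's actual client API.

```python
# Hypothetical sketch of the Ozone retry path described in this issue.
# Names are illustrative, not Ozone's real client classes.

class BlockOutputStreamSketch:
    def __init__(self, allocate_block):
        # allocate_block() returns (block_id, pipeline); each call
        # simulates OM allocating a fresh block on a fresh pipeline.
        self.allocate_block = allocate_block
        self.block_id, self.pipeline = allocate_block()
        self.buffer = bytearray()   # data written but not yet acknowledged
        self.acked_len = 0          # last acknowledged length

    def write(self, data: bytes):
        self.buffer.extend(data)
        try:
            self._flush_to(self.pipeline)
        except ConnectionError:
            self._retry()

    def _flush_to(self, pipeline):
        # Assume the pipeline acknowledges everything it receives.
        pipeline.send(self.block_id, bytes(self.buffer))
        self.acked_len += len(self.buffer)
        self.buffer.clear()

    def _retry(self):
        # Current behaviour per HDDS-6025: a NEW block on a NEW pipeline
        # is allocated, and only the unacknowledged data is rewritten.
        # The key metadata now references one more block, which is the
        # OM metadata bloat the issue describes.
        self.block_id, self.pipeline = self.allocate_block()
        self._flush_to(self.pipeline)
```

Note the trade-off this sketch makes visible: every retry grows the key's block list by one, whereas an HDFS-style in-place datanode replacement would leave the metadata unchanged.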
> Ozone Retry Path Optimizations
> ------------------------------
>
> Key: HDDS-6025
> URL: https://issues.apache.org/jira/browse/HDDS-6025
> Project: Apache Ozone
> Issue Type: Sub-task
> Reporter: Shashikant Banerjee
> Priority: Major
>
> Currently, in the retry path, once a datanode goes down, the data from the
> last acknowledged length is retried on a new pipeline with a new set of
> datanodes. Secondly, once a block write fails in between, a new block is
> allocated for the remaining unacknowledged data.
>
> In HDFS, in case of a datanode failure, a new datanode is recruited and the
> pending packets are written only to the replaced datanode. Also, the same
> block keeps being written, so there is no new block allocation. In that way,
> the key/file metadata remains the same, whereas in Ozone it may bloat up the
> OM metadata.
>
> This Jira is to discuss any optimizations needed in the Ozone retry path to
> improve performance.
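For contrast, the HDFS-style behaviour the description mentions can be sketched the same way: the failed datanode is replaced within the existing pipeline and the same block keeps being written, so the key metadata never grows. Again, the names below are hypothetical and the sketch ignores re-replicating already-written data to the recruit.

```python
# Hypothetical sketch of HDFS-style recovery: replace the failed
# datanode, keep writing the SAME block. Names are illustrative only.

class PipelineWithReplacement:
    def __init__(self, datanodes, recruit):
        self.datanodes = list(datanodes)   # current replica set
        self.recruit = recruit             # supplies a replacement datanode

    def send(self, block_id, data):
        for i, dn in enumerate(self.datanodes):
            try:
                dn.receive(block_id, data)
            except ConnectionError:
                # Recruit a replacement and re-send the packets only to
                # the new datanode; block_id is unchanged, so the file
                # metadata stays the same.
                self.datanodes[i] = self.recruit()
                self.datanodes[i].receive(block_id, data)
```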
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]