Gabor Bota commented on HDFS-11187:

[~jojochuang] thanks for the review!
I'd like to respond to your second comment, because I think the first one is for

In FsDatasetImpl#finalizeReplica there's a call to FsVolumeImpl#addFinalizedBlock() at line 1734 of your trunk commit. For branch-2, I moved the load-checksum logic into the else branch of FsDatasetImpl#finalizeReplica, after the call to the unmodified FsVolumeImpl#addFinalizedBlock(). I think this will solve the issue you pointed out (it will be in rev003).
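To make the rearrangement concrete, here is a rough, hypothetical sketch of the idea: only after the unmodified addFinalizedBlock() call completes does the finalize path compute the checksum of the last partial chunk. All names (FinalizeSketch, CHUNK_SIZE, the CRC32 choice) are illustrative and simplified, not the actual HDFS code or its checksum algorithm.

```java
import java.util.zip.CRC32;

// Hypothetical, simplified sketch of the branch-2 rearrangement in
// FsDatasetImpl#finalizeReplica; names are illustrative, not real HDFS code.
class FinalizeSketch {
    static final int CHUNK_SIZE = 512; // bytes per checksum chunk (typical default)

    // Extract the trailing partial chunk of a replica's data, if any.
    static byte[] lastPartialChunk(byte[] data) {
        int partial = data.length % CHUNK_SIZE;
        if (partial == 0) {
            return new byte[0]; // block ends exactly on a chunk boundary
        }
        byte[] tail = new byte[partial];
        System.arraycopy(data, data.length - partial, tail, 0, partial);
        return tail;
    }

    // Runs only after addFinalizedBlock() (left unmodified) has finished:
    // load the last partial chunk checksum so it can be kept in memory.
    static long loadLastPartialChunkChecksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(lastPartialChunk(data));
        return crc.getValue();
    }
}
```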

> Optimize disk access for last partial chunk checksum of Finalized replica
> -------------------------------------------------------------------------
>                 Key: HDFS-11187
>                 URL: https://issues.apache.org/jira/browse/HDFS-11187
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Wei-Chiu Chuang
>            Assignee: Gabor Bota
>            Priority: Major
>             Fix For: 3.1.0, 3.0.2
>         Attachments: HDFS-11187-branch-2.001.patch, 
> HDFS-11187-branch-2.002.patch, HDFS-11187.001.patch, HDFS-11187.002.patch, 
> HDFS-11187.003.patch, HDFS-11187.004.patch, HDFS-11187.005.patch
> The patch at HDFS-11160 ensures BlockSender reads the correct version of 
> metafile when there are concurrent writers.
> However, the implementation is not optimal, because it must always read the 
> last partial chunk checksum from disk while holding FsDatasetImpl lock for 
> every reader. It is possible to optimize this by keeping an up-to-date 
> version of last partial checksum in-memory and reduce disk access.
> I am separating the optimization into a new jira, because maintaining the 
> state of in-memory checksum requires a lot more work.
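The in-memory optimization described above can be sketched roughly as follows: cache the last partial chunk checksum alongside the data length it is valid for, serve readers from memory when the lengths match, and fall back to the metafile on a miss. This is a hypothetical simplification (CachedReplica and its methods are invented for illustration, not HDFS's actual classes).

```java
// Hypothetical sketch of keeping the last partial chunk checksum in memory
// so readers avoid re-reading the metafile under the FsDatasetImpl lock.
// CachedReplica and its method names are illustrative, not real HDFS code.
class CachedReplica {
    private byte[] lastChecksum;      // checksum bytes of the last partial chunk
    private long checksumValidForLen; // replica data length the cache matches

    // Writer path: refresh the cache whenever the replica grows.
    synchronized void updateLastChecksum(long dataLen, byte[] checksum) {
        this.checksumValidForLen = dataLen;
        this.lastChecksum = (checksum == null) ? null : checksum.clone();
    }

    // Reader fast path: return the cached checksum when it is still valid;
    // null signals a cache miss, and the caller reads the metafile instead.
    synchronized byte[] getLastChecksum(long dataLen) {
        if (lastChecksum != null && dataLen == checksumValidForLen) {
            return lastChecksum.clone();
        }
        return null;
    }
}
```

The length check is what keeps the cache safe with concurrent writers: a reader that observes a different replica length than the cached one simply takes the slow disk path rather than serving a stale checksum.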

This message was sent by Atlassian JIRA