[ https://issues.apache.org/jira/browse/HADOOP-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-2655:
-------------------------------------

    Attachment: copyOnWrite2.patch

1. Moved most of detachBlock to DatanodeBlockInfo.java.
2. It is no longer part of the public interface; removed it from FSDatasetInterface.java.
3. numLinks is needed to drive the unit tests. It is also needed so the 
method can handle the case of multiple snapshots in the future.
4. I did not make createDetachFile() static, primarily because it refers to 
the detachDir instance variable.
5. Moved replaceFile to FileUtil. It is intended to handle OS-specific cases 
where the rename fails because somebody else still has a handle to the target 
file. The current retry time-limit is ad hoc; during this period, another 
thread that was reading that block file will likely finish reading it.
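The retry behavior in (5) can be sketched as below. The method name replaceFile matches the patch, but the body, the 100 ms poll interval, and the time limit are illustrative assumptions, not the patch's actual code; the point is that on some operating systems (notably Windows) a rename fails while another handle is open on the target:

```java
import java.io.File;

public class RenameRetry {
    // Rename src onto target, retrying for up to maxMillis.
    // On OSes where rename fails while a reader holds the target open,
    // the retries give that reader time to finish. The poll interval
    // and time limit here are as ad hoc as the patch's own.
    public static boolean replaceFile(File src, File target, long maxMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxMillis;
        while (true) {
            target.delete();              // clear any stale target
            if (src.renameTo(target)) {
                return true;
            }
            if (System.currentTimeMillis() >= deadline) {
                return false;             // gave up; caller decides what to do
            }
            Thread.sleep(100);            // let the concurrent reader finish
        }
    }
}
```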

> Copy on write for data and metadata files in the presence of snapshots
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-2655
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2655
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.17.0
>
>         Attachments: copyOnWrite.patch, copyOnWrite.patch, copyOnWrite2.patch
>
>
> If a DFS Client wants to append data to an existing file (appends, 
> HADOOP-1700) and a snapshot is present, the Datanode has to implement some 
> form of copy-on-write for writes to data and metadata files.
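The copy-on-write described above can be sketched like this: if the block file's hard-link count exceeds one (a snapshot still shares it), copy it and rename the copy into place, leaving the snapshot's own hard link pointing at the original inode. The names numLinks and detachBlock echo the patch, but this standalone NIO version is an illustrative sketch that assumes a Unix filesystem (for the "unix:nlink" attribute), not the Datanode's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyOnWrite {
    // Hard-link count of the file; > 1 means a snapshot still shares it.
    // Requires a filesystem exposing the "unix" attribute view.
    static int numLinks(Path p) throws IOException {
        return (Integer) Files.getAttribute(p, "unix:nlink");
    }

    // Give the writer a private copy of a shared block file so that
    // appends cannot modify the snapshot's data.
    static void detachBlock(Path block, Path detachDir) throws IOException {
        if (numLinks(block) <= 1) {
            return;                       // not shared; write in place
        }
        Path tmp = detachDir.resolve(block.getFileName() + ".tmp");
        Files.copy(block, tmp, StandardCopyOption.REPLACE_EXISTING);
        // Renaming tmp onto block unlinks block's name from the shared
        // inode; the snapshot's own hard link keeps the old data alive.
        Files.move(tmp, block, StandardCopyOption.REPLACE_EXISTING);
    }
}
```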

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
