[ 
https://issues.apache.org/jira/browse/HDFS-609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862544#action_12862544
 ] 

Todd Lipcon commented on HDFS-609:
----------------------------------

I disagree - I don't think these are addressed in trunk.

#1) The APPEND flag seems to track through to startFileInternal in 
FSNamesystem, which, as Hairong mentioned, just converts the INode but does 
not properly pass back a LocatedBlock for the last block, or convert it to 
under-construction status.
#2) There still don't seem to be any checks that prevent a user from passing 
a block size or replication factor when CreateFlag.APPEND is specified.
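To make #2 concrete, here is a minimal sketch of the kind of guard that appears to be missing. The enum, method, and parameter names below are hypothetical, not the actual FSNamesystem code: the idea is that when APPEND is set, the caller-supplied replication and block size must match the existing file's values, since append cannot change them.

```java
import java.util.EnumSet;

public class CreateFlagCheck {
    // Hypothetical stand-in for org.apache.hadoop.fs.CreateFlag.
    enum CreateFlag { CREATE, OVERWRITE, APPEND }

    // Hypothetical validation: reject an append that tries to change
    // the file's replication factor or block size.
    static void checkAppendParams(EnumSet<CreateFlag> flags,
                                  short replication, long blockSize,
                                  short existingReplication,
                                  long existingBlockSize) {
        if (flags.contains(CreateFlag.APPEND)
                && (replication != existingReplication
                    || blockSize != existingBlockSize)) {
            throw new IllegalArgumentException(
                "replication/blockSize cannot be changed when APPEND is set");
        }
    }

    public static void main(String[] args) {
        // Accepted: append with parameters matching the existing file.
        checkAppendParams(EnumSet.of(CreateFlag.APPEND),
                          (short) 3, 64L << 20, (short) 3, 64L << 20);

        // Rejected: append attempting to change the block size.
        boolean rejected = false;
        try {
            checkAppendParams(EnumSet.of(CreateFlag.APPEND),
                              (short) 3, 128L << 20, (short) 3, 64L << 20);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected); // prints "true"
    }
}
```

Without a check like this, the extra parameters are silently accepted and ignored, which is exactly the misleading behavior described in the issue.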

> Create a file with the append flag does not work in HDFS
> --------------------------------------------------------
>
>                 Key: HDFS-609
>                 URL: https://issues.apache.org/jira/browse/HDFS-609
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.21.0
>            Reporter: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.21.0
>
>
> HADOOP-5438 introduced a create API with flags. There are a couple of issues 
> when the flag is set to APPEND.
> 1. The APPEND flag does not work in HDFS. Append is not as simple as changing 
> a FileINode to a FileINodeUnderConstruction. It also needs to reopen the 
> last block for append if the last block is not full, and handle the CRC when 
> the last CRC chunk is not full.
> 2. The API is not well thought out. It has parameters like the replication 
> factor and block size. Those parameters do not make any sense if the APPEND 
> flag is set, but they give an application user the wrong impression that 
> append could change a file's block size and replication factor.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
