[
https://issues.apache.org/jira/browse/HDFS-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Suresh Srinivas updated HDFS-4912:
----------------------------------
Attachment: HDFS-4912.patch
This patch cleans up the code by splitting FSNamesystem#startFileInternal() into
#startFileInternal() and #appendFileInternal() (a rough sketch follows the list below). These are the changes:
# HDFS does not support create with CreateFlag.APPEND (that is, appending via the
create call); I have created HDFS-4956 to track this. Hence the code no longer
tries to make that work.
# Moved blockManager.verifyReplication() outside the writeLock, since it does not
need to hold it.
# Removed an unnecessary debug log and the check for a valid file name in
#appendFileInt().
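As a rough, self-contained illustration of the resulting shape (toy names and a plain
ReentrantReadWriteLock standing in for the FSNamesystem lock; this is not the attached
patch), the split and the check-before-lock idea look like this:
{code:java}
// Toy sketch only -- ToyNamesystem, verifyReplicationBounds, etc. are
// hypothetical stand-ins, not HDFS code.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ToyNamesystem {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final short minReplication = 1;
  private final short maxReplication = 512;

  // Create-only path: no append branches mixed in.
  public void startFileInternal(String src, short replication) {
    // Cheap, state-independent check done before taking the write lock
    // (mirrors moving blockManager.verifyReplication() out of the lock).
    verifyReplicationBounds(src, replication);
    lock.writeLock().lock();
    try {
      System.out.println("create " + src + " with replication " + replication);
    } finally {
      lock.writeLock().unlock();
    }
  }

  // Append-only path, split out of the old combined method.
  public void appendFileInternal(String src) {
    lock.writeLock().lock();
    try {
      System.out.println("append to " + src);
    } finally {
      lock.writeLock().unlock();
    }
  }

  private void verifyReplicationBounds(String src, short replication) {
    if (replication < minReplication || replication > maxReplication) {
      throw new IllegalArgumentException("bad replication for " + src);
    }
  }

  public static void main(String[] args) {
    ToyNamesystem fsn = new ToyNamesystem();
    fsn.startFileInternal("/user/foo/bar.txt", (short) 3);
    fsn.appendFileInternal("/user/foo/bar.txt");
  }
}
{code}
The point of the split is that each helper carries only its own branches, and the
replication bounds check touches no namespace state, so it can fail fast without
ever taking the lock.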
> Cleanup FSNamesystem#startFileInternal
> --------------------------------------
>
> Key: HDFS-4912
> URL: https://issues.apache.org/jira/browse/HDFS-4912
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Attachments: HDFS-4912.patch
>
>
> FSNamesystem#startFileInternal is used by both create and append. This
> results in ugly if-else conditions to handle the append and create scenarios.
> This method can be refactored and the code cleaned up.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira