[
https://issues.apache.org/jira/browse/HDFS-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13016472#comment-13016472
]
Tsz Wo (Nicholas), SZE commented on HDFS-1606:
----------------------------------------------
- In [build #318|https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/318//testReport/org.apache.hadoop.hdfs/TestMultiThreadedHflush/testHflushWhileClosing/]:
{noformat}
java.lang.NullPointerException
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.access$2500(DFSOutputStream.java:283)
	at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1470)
	at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:110)
	at org.apache.hadoop.hdfs.TestMultiThreadedHflush$1.run(TestMultiThreadedHflush.java:156)
{noformat}
There are existing synchronization problems in {{DFSOutputStream}}: it is
possible to call {{hflush()}} after {{close()}} without getting any error.
I will simply add a null check in this patch (a minimal sketch follows this
list) and will think about the synchronization problem after that.
- For the other failed tests, there simply were not enough datanodes, so
addDatanode failed.
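A minimal, hypothetical illustration of the kind of null guard referred to in the first item; this is not the actual {{DFSOutputStream}} code, and the class, field, and method names below are made up:
{code:java}
// Sketch only: guards an hflush()-style method against a streamer reference
// that a concurrent close() has set to null (the race behind the reported NPE).
class ClosableFlushSketch {
  private volatile Object streamer = new Object(); // nulled out by close()
  private volatile boolean closed = false;

  public void hflush() throws java.io.IOException {
    final Object s = streamer;          // read once: avoids the check-then-act NPE
    if (closed || s == null) {
      throw new java.io.IOException("Cannot hflush: stream is closed");
    }
    // ... flush buffered packets using the local copy 's' ...
  }

  public void close() {
    closed = true;
    streamer = null;                    // concurrent hflush() must tolerate this
  }
}
{code}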
> Provide a stronger data guarantee in the write pipeline
> -------------------------------------------------------
>
> Key: HDFS-1606
> URL: https://issues.apache.org/jira/browse/HDFS-1606
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: data-node, hdfs client, name-node
> Affects Versions: 0.23.0
> Reporter: Tsz Wo (Nicholas), SZE
> Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.23.0
>
> Attachments: h1606_20110210.patch, h1606_20110211.patch,
> h1606_20110217.patch, h1606_20110228.patch, h1606_20110404.patch,
> h1606_20110405.patch, h1606_20110405b.patch
>
>
> In the current design, if there is a datanode/network failure in the write
> pipeline, DFSClient will try to remove the failed datanode from the pipeline
> and then continue writing with the remaining datanodes. As a result, the
> number of datanodes in the pipeline is decreased. Unfortunately, it is
> possible that DFSClient may incorrectly remove a healthy datanode but leave
> the failed datanode in the pipeline because failure detection may be
> inaccurate under erroneous conditions.
> We propose to have a new mechanism for adding new datanodes to the pipeline
> in order to provide a stronger data guarantee.
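A hypothetical sketch of the pipeline-recovery idea described above; this is not actual DFSClient code, and the interface and method names below are made up. The point it illustrates is that when removing a failed datanode would shrink the pipeline, the client would ask for a replacement datanode instead of continuing with fewer replicas.
{code:java}
import java.util.ArrayList;
import java.util.List;

// Sketch only: the decision of whether to add a new datanode to the pipeline.
class PipelineRecoverySketch {

  /** Stand-in for the namenode RPC that would pick an additional datanode. */
  interface AdditionalDatanodeProvider {
    /** Returns a datanode not in 'existing', or null if none is available. */
    String getAdditionalDatanode(List<String> existing);
  }

  /** Rebuilds the pipeline after 'failed' is detected as bad. */
  static List<String> recover(List<String> pipeline, String failed,
                              int minPipelineSize,
                              AdditionalDatanodeProvider namenode) {
    List<String> remaining = new ArrayList<>(pipeline);
    remaining.remove(failed);

    // Proposed behaviour: rather than silently continuing with a smaller
    // pipeline, request a replacement when the pipeline shrinks below the
    // desired size.
    if (remaining.size() < minPipelineSize) {
      String replacement = namenode.getAdditionalDatanode(remaining);
      if (replacement != null) {
        // In the proposed mechanism the partial block would be transferred
        // to the new datanode before writing resumes (omitted here).
        remaining.add(replacement);
      }
    }
    return remaining;
  }
}
{code}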