[
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12802702#action_12802702
]
stack commented on HDFS-630:
----------------------------
Here is a summary of Cosmin's erratic experience running his patch against Hudson,
where a different set of tests failed on every run:
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201001.mbox/<c779eee1.15ae4%[email protected]>
I ran Cosmin's patch locally against branch-0.21 using the following test-patch command:
{code}
$ ANT_HOME=/usr/bin/ant ant -Dfindbugs.home=/Users/stack/bin/findbugs-1.3.9 \
    -Djava5.home=/System/Library/Frameworks/JavaVM.framework/Versions/1.5/Home/ \
    -Dforrest.home=/Users/stack/bin/apache-forrest-0.8 -Dcurl.cmd=/usr/bin/curl \
    -Dwget.cmd="/sw/bin/wget --no-check-certificate" \
    -Dpatch.file=/tmp/0001-Fix-HDFS-630-0.21-svn-2.patch test-patch
{code}
... and it output the following:
{code}
...
[exec] There appear to be 102 release audit warnings before the patch and 102 release audit warnings after applying the patch.
[exec]
[exec]
[exec]
[exec]
[exec] +1 overall.
[exec]
[exec] +1 @author. The patch does not contain any @author tags.
[exec]
[exec] +1 tests included. The patch appears to include 13 new or modified tests.
[exec]
[exec] +1 javadoc. The javadoc tool did not generate any warning messages.
[exec]
[exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
[exec]
[exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
[exec]
[exec] +1 release audit. The applied patch does not increase the total number of release audit warnings.
[exec]
[exec]
[exec]
[exec]
[exec]
======================================================================
[exec]
======================================================================
[exec] Finished build.
[exec]
======================================================================
[exec]
======================================================================
{code}
Let me run against TRUNK next...
> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific
> datanodes when locating the next block.
> -------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-630
> URL: https://issues.apache.org/jira/browse/HDFS-630
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs client, name-node
> Affects Versions: 0.21.0
> Reporter: Ruyue Ma
> Assignee: Cosmin Lehene
> Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch,
> 0001-Fix-HDFS-630-0.21-svn-2.patch, 0001-Fix-HDFS-630-0.21-svn.patch,
> 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch,
> 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch,
> 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch,
> 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch,
> 0001-Fix-HDFS-630-trunk-svn-3.patch, 0001-Fix-HDFS-630-trunk-svn-4.patch,
> hdfs-630-0.20.txt, HDFS-630.patch
>
>
> created from hdfs-200.
> If during a write, the dfsclient sees that a block replica location for a
> newly allocated block is not-connectable, it re-requests the NN to get a
> fresh set of replica locations of the block. It tries this
> dfs.client.block.write.retries times (default 3), sleeping 6 seconds between
> each retry ( see DFSClient.nextBlockOutputStream).
> This setting works well when you have a reasonable size cluster; if u have
> few datanodes in the cluster, every retry maybe pick the dead-datanode and
> the above logic bails out.
> Our solution: when getting block location from namenode, we give nn the
> excluded datanodes. The list of dead datanodes is only for one block
> allocation.
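For context, here is a minimal, hypothetical Java sketch of the idea described above: the client keeps a per-block-allocation list of datanodes it could not reach and hands that list back to the namenode on each retry, so a small cluster stops re-picking the same dead node. All of the types and method names below (NameNodeProtocol, an addBlock overload taking excludedNodes, tryConnect) are illustrative stand-ins, not the actual DFSClient/ClientProtocol signatures from the attached patches.
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ExcludeDeadNodesSketch {

  // Hypothetical stand-in for a datanode descriptor.
  static class DatanodeInfo {
    final String name;
    DatanodeInfo(String name) { this.name = name; }
    public String toString() { return name; }
  }

  // Hypothetical stand-in for the block locations the namenode returns.
  static class LocatedBlock {
    final List<DatanodeInfo> locations;
    LocatedBlock(List<DatanodeInfo> locations) { this.locations = locations; }
  }

  // Hypothetical namenode interface; the key point is the extra excludedNodes argument.
  interface NameNodeProtocol {
    LocatedBlock addBlock(String src, String clientName, List<DatanodeInfo> excludedNodes);
  }

  // Sketch of the client-side loop from the description: each failed attempt remembers the
  // unreachable datanode and reports it to the namenode on the next request.
  static LocatedBlock nextBlockOutputStream(NameNodeProtocol namenode, String src,
                                            String clientName, int maxRetries)
      throws IOException {
    // The excluded list is scoped to this single block allocation.
    List<DatanodeInfo> excluded = new ArrayList<DatanodeInfo>();
    for (int retry = 0; retry < maxRetries; retry++) {
      LocatedBlock block = namenode.addBlock(src, clientName, excluded);
      DatanodeInfo badNode = tryConnect(block);   // hypothetical connectivity check
      if (badNode == null) {
        return block;                             // pipeline established
      }
      excluded.add(badNode);                      // do not get handed this node again
      try {
        Thread.sleep(6000L);                      // mirrors the 6 second pause between retries
      } catch (InterruptedException ie) {
        throw new IOException("Interrupted while waiting to retry block allocation");
      }
    }
    throw new IOException("Unable to create new block");
  }

  // Hypothetical helper: returns the first unreachable datanode, or null if all are reachable.
  static DatanodeInfo tryConnect(LocatedBlock block) {
    return null; // a real client would try to open the write pipeline here
  }
}
{code}
The last sentence of the description is the notable design point: the excluded list lives only as long as one block allocation, presumably so a temporarily unreachable node is not blacklisted for the lifetime of the stream.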