[
https://issues.apache.org/jira/browse/HADOOP-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12792511#action_12792511
]
Hadoop QA commented on HADOOP-6450:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12428446/Replicable.txt
against trunk revision 892113.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/225/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/225/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/225/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/225/console
This message is automatically generated.
> Enhance FSDataOutputStream to allow retrieving the current number of replicas of the current block
> ----------------------------------------------------------------------------------------------
>
> Key: HADOOP-6450
> URL: https://issues.apache.org/jira/browse/HADOOP-6450
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Attachments: Replicable.txt, Replicable.txt
>
>
> The current HDFS implementation has the limitation that it does not replicate
> the last partial block of a file while the file is being written; replication
> happens only when the file is closed. There are some long-running applications
> (e.g. HBase) which write transaction logs into HDFS. If a datanode in the write
> pipeline dies, the application has no knowledge of it until all the datanodes
> in the pipeline have failed and the application gets an IO error.
> These applications would benefit greatly if they could determine the number of
> live replicas of the block they are currently writing. For example, when one of
> the datanodes in the write pipeline fails, the application can decide to close
> the file and start writing to a new file.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.