[ https://issues.apache.org/jira/browse/HADOOP-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613848#action_12613848 ]

Hadoop QA commented on HADOOP-3707:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12386028/HADOOP-3707-trunk.patch
against trunk revision 677127.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 4 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2867/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2867/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2867/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2867/console

This message is automatically generated.

> Frequent DiskOutOfSpaceException on almost-full datanodes
> ---------------------------------------------------------
>
>                 Key: HADOOP-3707
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3707
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Koji Noguchi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.18.0, 0.19.0
>
>         Attachments: HADOOP-3707-branch-017.patch, 
> HADOOP-3707-branch-017.patch, HADOOP-3707-branch-017.patch, 
> HADOOP-3707-branch-018.patch, HADOOP-3707-trunk.patch, 
> HADOOP-3707-trunk.patch, HADOOP-3707-trunk.patch, HADOOP-3707-trunk.patch, 
> HADOOP-3707-trunk.patch
>
>
> On a datanode that is almost completely full (with only the configured
> reserved space left), we frequently see the target node reporting the
> following (a sketch of the failing volume selection follows the trace):
> {noformat}
> 2008-07-07 16:54:44,707 INFO org.apache.hadoop.dfs.DataNode: Receiving block 
> blk_3328886742742952100 src: /11.1.11.111:22222 dest: /11.1.11.111:22222
> 2008-07-07 16:54:44,708 INFO org.apache.hadoop.dfs.DataNode: writeBlock 
> blk_3328886742742952100 received exception 
> org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient 
> space for an additional block
> 2008-07-07 16:54:44,708 ERROR org.apache.hadoop.dfs.DataNode: 
> 33.3.33.33:22222:DataXceiver: 
> org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient 
> space for an additional block
>         at 
> org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getNextVolume(FSDataset.java:444)
>         at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:716)
>         at 
> org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:2187)
>         at 
> org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1113)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}
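> For context, the failing frame above is FSVolumeSet.getNextVolume. Below is
> a minimal sketch of the round-robin volume selection that frame implies; it
> is not the actual Hadoop source, and every name except getNextVolume and
> DiskOutOfSpaceException is made up:
> {noformat}
> import java.io.IOException;
>
> // Stand-in for org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException.
> class DiskOutOfSpaceException extends IOException {
>   DiskOutOfSpaceException(String msg) { super(msg); }
> }
>
> // Hypothetical stand-in for one data directory.
> class Volume {
>   private final long available;   // bytes free beyond the reserved space
>   Volume(long available) { this.available = available; }
>   long getAvailable() { return available; }
> }
>
> // Hypothetical stand-in for FSVolumeSet.
> class VolumeSet {
>   private final Volume[] volumes;
>   private int curVolume = 0;      // round-robin cursor
>
>   VolumeSet(Volume[] volumes) { this.volumes = volumes; }
>
>   // Walk the volumes round-robin; once every volume has been tried and
>   // none has room for one more block, fail with the exception seen in
>   // the log above.
>   synchronized Volume getNextVolume(long blockSize)
>       throws DiskOutOfSpaceException {
>     int start = curVolume;
>     while (true) {
>       Volume v = volumes[curVolume];
>       curVolume = (curVolume + 1) % volumes.length;
>       if (v.getAvailable() > blockSize) {
>         return v;
>       }
>       if (curVolume == start) {
>         throw new DiskOutOfSpaceException(
>             "Insufficient space for an additional block");
>       }
>     }
>   }
> }
> {noformat}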
> and the sender reporting:
> {noformat}
> 2008-07-07 16:54:44,712 INFO org.apache.hadoop.dfs.DataNode: 
> 11.1.11.111:22222:Exception writing block blk_3328886742742952100 to mirror 
> 33.3.33.33:22222
> java.io.IOException: Broken pipe
>         at sun.nio.ch.FileDispatcher.write0(Native Method)
>         at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
>         at sun.nio.ch.IOUtil.write(IOUtil.java:75)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
>         at 
> org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:53)
>         at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
>         at 
> org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:144)
>         at 
> org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:105)
>         at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at 
> org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveChunk(DataNode.java:2292)
>         at 
> org.apache.hadoop.dfs.DataNode$BlockReceiver.receivePacket(DataNode.java:2411)
>         at 
> org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2476)
>         at 
> org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1204)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}
> Since this does not happen constantly, my guess is that whenever the
> datanode frees up a small amount of space, the namenode over-assigns
> blocks to it, which can then fail the write pipeline.
> (Note: before 0.17, the namenode was much slower at assigning blocks.)
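> To make that guess concrete: the mirror's DiskOutOfSpaceException tears the
> pipeline down, which is why the sender sees the Broken pipe at the same
> instant. Below is a hypothetical sketch of the window, together with one way
> to close it by reserving space for blocks still being written; the names are
> made up and this is not taken from the attached patches:
> {noformat}
> import java.io.IOException;
>
> class DiskOutOfSpaceException extends IOException {  // as in the sketch above
>   DiskOutOfSpaceException(String msg) { super(msg); }
> }
>
> class ReservingVolume {
>   private long available;               // bytes currently free on disk
>   private long reservedForWrites = 0;   // bytes promised to in-flight blocks
>
>   ReservingVolume(long available) { this.available = available; }
>
>   // Discount blocks that are still being received. Without the
>   // reservedForWrites term, several concurrent writeBlock calls can all
>   // pass this check on the same small burst of freed space and then fail
>   // mid-pipeline, as in the logs above.
>   synchronized void startWrite(long blockSize) throws DiskOutOfSpaceException {
>     if (available - reservedForWrites <= blockSize) {
>       throw new DiskOutOfSpaceException(
>           "Insufficient space for an additional block");
>     }
>     reservedForWrites += blockSize;
>   }
>
>   synchronized void finishWrite(long blockSize, long bytesWritten) {
>     reservedForWrites -= blockSize;   // release the reservation
>     available -= bytesWritten;        // account for what actually landed
>   }
> }
> {noformat}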

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
